Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?
Li, Tianjing; Dickersin, Kay
2013-06-01
Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. With a single exception, the 9 reviews that used incorrect statistical methods did not cite any previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
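For readers unfamiliar with what an "appropriate" meta-analysis involves, here is a minimal sketch of standard fixed-effect inverse-variance pooling. The effect sizes and standard errors below are hypothetical, not values from the reviews examined:

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooling: each study is weighted by
    1/SE^2, so more precise studies contribute more to the pooled effect."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical mean IOP reductions (mmHg) and standard errors from 3 trials
effects = [-2.0, -2.5, -1.5]
ses = [0.5, 0.4, 0.8]
pooled, pooled_se = fixed_effect_pool(effects, ses)
```

The pooled standard error is necessarily smaller than any individual study's, which is the point of pooling; a random-effects model would additionally estimate between-study variance.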
Evaluating statistical validity of research reports: a guide for managers, planners, and researchers
Amanda L. Golbeck
1986-01-01
Inappropriate statistical methods, as well as appropriate methods inappropriately used, can lead to incorrect conclusions in any research report. Incorrect conclusions may also be due to the fact that the research problem is just hard to quantify in a satisfactory way. Publication of a research report does not guarantee that appropriate statistical methods have been...
Avalanche statistics from data with low time resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.
2016-11-22
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
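A minimal sketch of the core problem (not the authors' reconstruction method): block-averaging a high-rate signal, which is what a slow acquisition rate effectively does, merges distinct avalanches into one, distorting the size statistics. The signal and threshold below are illustrative values:

```python
def count_avalanches(signal, threshold=1.0):
    """Count contiguous runs where the signal exceeds the threshold."""
    count, inside = 0, False
    for v in signal:
        if v > threshold and not inside:
            count += 1
            inside = True
        elif v <= threshold:
            inside = False
    return count

def downsample(signal, factor):
    """Block-average the signal, mimicking a slower acquisition rate."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

# Two distinct avalanches separated by a single quiet sample
high_res = [0, 0, 3, 0, 3, 0, 0, 0]
low_res = downsample(high_res, 2)

n_high = count_avalanches(high_res)  # two separate events
n_low = count_avalanches(low_res)    # merged into one apparent event
```

Because merging preferentially removes small, closely spaced events, the apparent size distribution is biased, which is why the paper's diagnostic criteria matter.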
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
An Analysis of Effects of Variable Factors on Weapon Performance
1993-03-01
ALTERNATIVE ANALYSIS A. CATEGORICAL DATA ANALYSIS Statistical methodology for categorical data analysis traces its roots to the work of Francis Galton in the ... choice of statistical tests. This thesis examines an analysis performed by Surface Warfare Development Group (SWDG). The SWDG analysis is shown to be incorrect due to the misapplication of testing methods. A corrected analysis is presented and recommendations suggested for changes to the testing
NASA Technical Reports Server (NTRS)
Calkins, D. S.
1998-01-01
When the dependent (or response) variable in an experiment has direction and magnitude, one approach that has been used for statistical analysis involves splitting magnitude and direction and applying univariate statistical techniques to the components separately. However, such treatment of quantities with direction and magnitude is not justifiable mathematically and can lead to incorrect conclusions about relationships among variables and, as a result, to flawed interpretations. This note discusses a problem with that practice and recommends mathematically correct procedures to be used with dependent variables that have direction and magnitude for 1) computation of mean values, 2) statistical contrasts of and confidence intervals for means, and 3) correlation methods.
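The core of the recommended fix is standard directional statistics: average unit vectors, not raw angles. A minimal sketch (standard practice, not the note's own procedures), using headings just either side of north to show how the naive arithmetic mean fails:

```python
import math

def circular_mean_deg(angles_deg):
    """Mean direction via vector averaging: sum the unit vectors for each
    angle and take the direction of the resultant. Returns degrees in
    (-180, 180]."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c))

# Two headings close to north (0 degrees)
headings = [350, 10]
naive_mean = sum(headings) / len(headings)  # 180: due south, nonsense
correct_mean = circular_mean_deg(headings)  # approximately 0: due north
```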
Dexter, Franklin; Shafer, Steven L
2017-03-01
Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.
Consequences of common data analysis inaccuracies in CNS trauma injury basic research.
Burke, Darlene A; Whittemore, Scott R; Magnuson, David S K
2013-05-15
The development of successful treatments for humans after traumatic brain or spinal cord injuries (TBI and SCI, respectively) requires animal research. This effort can be hampered when promising experimental results cannot be replicated because of incorrect data analysis procedures. To identify and hopefully avoid these errors in future studies, the articles in seven journals with the highest number of basic science central nervous system TBI and SCI animal research studies published in 2010 (N=125 articles) were reviewed for their data analysis procedures. After identifying the most common statistical errors, the implications of those findings were demonstrated by reanalyzing previously published data from our laboratories using the identified inappropriate statistical procedures, then comparing the two sets of results. Overall, 70% of the articles contained at least one type of inappropriate statistical procedure. The highest percentage involved incorrect post hoc t-tests (56.4%), followed by inappropriate parametric statistics (analysis of variance and t-test; 37.6%). Repeated Measures analysis was inappropriately missing in 52.0% of all articles and, among those with behavioral assessments, 58% were analyzed incorrectly. Reanalysis of our published data using the most common inappropriate statistical procedures resulted in a 14.1% average increase in significant effects compared to the original results. Specifically, an increase of 15.5% occurred with Independent t-tests and 11.1% after incorrect post hoc t-tests. Utilizing proper statistical procedures can allow more-definitive conclusions, facilitate replicability of research results, and enable more accurate translation of those results to the clinic.
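The post hoc t-test problem described above is, at bottom, uncontrolled multiple comparisons. A minimal sketch of how the familywise error rate grows with the number of uncorrected tests, assuming independent comparisons at a nominal alpha of 0.05:

```python
alpha = 0.05

def familywise_error(k, a=alpha):
    """Probability of at least one false positive across k independent
    tests, each run at significance level a."""
    return 1 - (1 - a) ** k

# Ten uncorrected post hoc t-tests: roughly a 40% chance of at least
# one spurious "significant" effect
uncorrected = familywise_error(10)

# Bonferroni correction (alpha/k per test) holds it near the nominal 5%
bonferroni = familywise_error(10, alpha / 10)
```

This is consistent with the article's finding that reanalysis with inappropriate procedures produced roughly 14% more "significant" effects.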
Verhulst, Brad
2016-01-01
P values have become the scapegoat for a wide variety of problems in science. P values are generally over-emphasized, often incorrectly applied, and in some cases even abused. However, alternative methods of hypothesis testing will likely fall victim to the same criticisms currently leveled at P values if more fundamental changes are not made in the research process. Increasing the general level of statistical literacy and enhancing training in statistical methods provide a potential avenue for identifying, correcting, and preventing erroneous conclusions from entering the academic literature and for improving the general quality of patient care. PMID:28366961
van de Streek, Jacco; Neumann, Marcus A
2010-10-01
This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
Incorrect likelihood methods were used to infer scaling laws of marine predator search behaviour.
Edwards, Andrew M; Freeman, Mervyn P; Breed, Greg A; Jonsen, Ian D
2012-01-01
Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that any resulting advice to managers of marine ecosystems would be problematic. For reproducibility and future work we provide R source code for all calculations.
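The correct approach the authors advocate, computing likelihoods from the actual probability density being tested, has a closed form for a continuous power law. A minimal sketch on synthetic data (the standard Clauset-style estimator, not the authors' R code; sample size and seed are arbitrary):

```python
import math
import random

def powerlaw_mle(xs, xmin):
    """Closed-form MLE for the exponent of a continuous power law
    p(x) proportional to x^(-alpha) for x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Draw from a known power law (alpha = 2.5) by inverse-transform sampling
random.seed(42)
alpha_true, xmin = 2.5, 1.0
sample = [xmin * (1 - random.random()) ** (-1 / (alpha_true - 1))
          for _ in range(20000)]

alpha_hat = powerlaw_mle(sample, xmin)  # should land close to 2.5
```

Fitting a regression line to a log-binned histogram of the same sample, the practice criticised in the abstract, yields a biased exponent and no valid likelihood for model comparison via AIC.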
Estimating procedure times for surgeries by determining location parameters for the lognormal model.
Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H
2004-05-01
We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
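The location (shift) parameter of a three-parameter lognormal is the crux of the study above. As a hedged illustration on synthetic data, one simple alternative to the paper's order-statistic estimators is to choose the shift that makes the log-transformed data least skewed; the 15-minute offset and distribution parameters below are invented:

```python
import math

def log_skewness(xs, c):
    """Sample skewness of log(x - c); near zero when c matches the true
    location parameter of a shifted lognormal."""
    logs = [math.log(x - c) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    m2 = sum((l - mu) ** 2 for l in logs) / n
    m3 = sum((l - mu) ** 3 for l in logs) / n
    return m3 / m2 ** 1.5

# Synthetic "procedure times": lognormal quantiles plus a 15-minute offset
zs = [i / 100 for i in range(-250, 251)]
times = [15.0 + math.exp(3.0 + 0.4 * z) for z in zs]

# Grid search for the shift whose log-transformed data is least skewed
grid = [c / 10 for c in range(0, 181)]  # 0.0 .. 18.0, below min(times)
c_hat = min(grid, key=lambda c: abs(log_skewness(times, c)))
```

This zero-skewness approach sidesteps the unbounded-likelihood problem of the three-parameter lognormal, which is also why the paper works with order statistics rather than a raw MLE.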
An adaptive state of charge estimation approach for lithium-ion series-connected battery system
NASA Astrophysics Data System (ADS)
Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael
2018-07-01
Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods, such as extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator that illustrates the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to attain adaptively the estimated noise statistics for the AUKF when its prior noise statistics are not accurate or exactly Gaussian. The accuracy and effectiveness of the SOC estimation method is validated by comparing the developed AUKF and UKF when the model and measurement noise statistics, respectively, are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
Truth, Damn Truth, and Statistics
ERIC Educational Resources Information Center
Velleman, Paul F.
2008-01-01
Statisticians and Statistics teachers often have to push back against the popular impression that Statistics teaches how to lie with data. Those who believe incorrectly that Statistics is solely a branch of Mathematics (and thus algorithmic) often see the use of judgment in Statistics as evidence that we do indeed manipulate our results. In the…
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
Mars Pathfinder Near-Field Rock Distribution Re-Evaluation
NASA Technical Reports Server (NTRS)
Haldemann, A. F. C.; Golombek, M. P.
2003-01-01
We have completed analysis of a new near-field rock count at the Mars Pathfinder landing site and determined that the previously published rock count suggesting 16% cumulative fractional area (CFA) covered by rocks is incorrect. The earlier value is not so much wrong (our new CFA is 20%) as right for the wrong reason: both the old and the new CFAs are consistent with remote sensing data; however, the earlier determination incorrectly calculated rock coverage using apparent width rather than average diameter. Here we present details of the new rock database and the new statistics, as well as the importance of using rock average diameter for rock population statistics. The changes to the near-field data do not affect the far-field rock statistics.
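The correction described comes down to which linear dimension enters the rock area calculation. A hedged sketch with hypothetical measurements (treating each rock's cross-section as a circle of the given diameter; the rock sizes and survey area are invented, not Pathfinder data):

```python
import math

def cumulative_fractional_area(diameters_m, survey_area_m2):
    """CFA: summed rock cross-sections, modelled as circles of the given
    diameter, divided by the surveyed ground area."""
    rock_area = sum(math.pi * (d / 2) ** 2 for d in diameters_m)
    return rock_area / survey_area_m2

# Apparent width systematically exceeds the average diameter, so using
# it inflates the inferred coverage
avg_diameters = [0.10, 0.20, 0.30]    # m
apparent_widths = [0.12, 0.25, 0.36]  # m, hypothetically larger
area = 2.0                            # m^2 surveyed

cfa_correct = cumulative_fractional_area(avg_diameters, area)
cfa_inflated = cumulative_fractional_area(apparent_widths, area)
```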
Quality Control Test for Sequence-Phenotype Assignments
Ortiz, Maria Teresa Lara; Rosario, Pablo Benjamín Leon; Luna-Nevarez, Pablo; Gamez, Alba Savin; Martínez-del Campo, Ana; Del Rio, Gabriel
2015-01-01
Relating a gene mutation to a phenotype is a common task in different disciplines such as protein biochemistry. In this endeavour, it is common to find false relationships arising from mutations introduced by cells; these may be removed using a phenotypic assay, yet such phenotypic assays may introduce additional false relationships arising from experimental errors. Here we introduce the use of high-throughput DNA sequencers and statistical analysis aimed to identify incorrect DNA sequence-phenotype assignments and observed that 10–20% of these false assignments are expected in large screenings aimed to identify critical residues for protein function. We further show that this level of incorrect DNA sequence-phenotype assignments may significantly alter our understanding about the structure-function relationship of proteins. We have made available an implementation of our method at http://bis.ifc.unam.mx/en/software/chispas. PMID:25700273
Metrology Standards for Quantitative Imaging Biomarkers
Obuchowski, Nancy A.; Kessler, Larry G.; Raunig, David L.; Gatsonis, Constantine; Huang, Erich P.; Kondratovich, Marina; McShane, Lisa M.; Reeves, Anthony P.; Barboriak, Daniel P.; Guimaraes, Alexander R.; Wahl, Richard L.
2015-01-01
Although investigators in the imaging community have been active in developing and evaluating quantitative imaging biomarkers (QIBs), the development and implementation of QIBs have been hampered by the inconsistent or incorrect use of terminology or methods for technical performance and statistical concepts. Technical performance is an assessment of how a test performs in reference objects or subjects under controlled conditions. In this article, some of the relevant statistical concepts are reviewed, methods that can be used for evaluating and comparing QIBs are described, and some of the technical performance issues related to imaging biomarkers are discussed. More consistent and correct use of terminology and study design principles will improve clinical research, advance regulatory science, and foster better care for patients who undergo imaging studies. © RSNA, 2015 PMID:26267831
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs, (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2) ) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2) . © 2015 John Wiley & Sons Ltd/London School of Economics.
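The Hosmer-Lemeshow (HL) statistic discussed above can be written down in a few lines. A hedged sketch using the standard grouped form on synthetic counts (the grouping method and group count are illustrative choices, not the paper's TG or J(2) implementations):

```python
def hosmer_lemeshow(probs, outcomes, n_groups=10):
    """Hosmer-Lemeshow statistic: rank cases by predicted probability,
    split them into groups, and compare observed with expected events.
    Each group contributes (O - E)^2 / (n * p_bar * (1 - p_bar))."""
    pairs = sorted(zip(probs, outcomes))
    size = len(pairs) // n_groups
    stat = 0.0
    for g in range(n_groups):
        lo = g * size
        hi = (g + 1) * size if g < n_groups - 1 else len(pairs)
        chunk = pairs[lo:hi]
        n = len(chunk)
        observed = sum(y for _, y in chunk)
        expected = sum(p for p, _ in chunk)
        p_bar = expected / n
        stat += (observed - expected) ** 2 / (n * p_bar * (1 - p_bar))
    return stat

# Synthetic, perfectly calibrated data: at each predicted probability,
# the observed event count equals the expected count exactly
levels = [0.1, 0.3, 0.5, 0.7, 0.9]
probs, outcomes = [], []
for p in levels:
    events = round(20 * p)
    probs += [p] * 20
    outcomes += [1] * events + [0] * (20 - events)

stat_calibrated = hosmer_lemeshow(probs, outcomes, n_groups=5)

# A miscalibrated model (probabilities halved) inflates the statistic
stat_miscalibrated = hosmer_lemeshow([p / 2 for p in probs], outcomes,
                                     n_groups=5)
```

Under a correct model the statistic is compared against a chi-square distribution with (groups - 2) degrees of freedom; the calibrated case here scores essentially zero while the miscalibrated one is far into the rejection region.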
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for themore » state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.« less
Energy considerations in the Community Atmosphere Model (CAM)
Williamson, David L.; Olson, Jerry G.; Hannay, Cécile; ...
2015-06-30
An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for themore » state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.« less
The (mis)reporting of statistical results in psychology journals.
Bakker, Marjan; Wicherts, Jelte M
2011-09-01
In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
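The consistency checks behind such surveys can be automated. A minimal sketch, using z statistics for simplicity because the standard normal CDF is in the Python standard library (published tools of this kind also cover t, F, and chi-square results, which need the corresponding distributions; the reported values below are invented):

```python
from statistics import NormalDist

def check_reported_z(z, reported_p, tol=0.01):
    """Recompute the two-sided p value from a reported z statistic and
    flag whether the reported p value is consistent with it."""
    recomputed = 2 * (1 - NormalDist().cdf(abs(z)))
    return recomputed, abs(recomputed - reported_p) <= tol

# Hypothetical reported results: (z statistic, p value as printed)
reports = [(1.96, 0.05), (2.58, 0.01), (1.50, 0.04)]  # last is misreported
flags = [check_reported_z(z, p) for z, p in reports]
```

The first two pairs recompute consistently; the third recomputes to roughly p = 0.13, so a reported p = 0.04 would flip a nonsignificant result to significant, exactly the kind of error the article found in about 15% of papers.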
Verification of sex from harvested sea otters using DNA testing
Scribner, Kim T.; Green, Ben A.; Gorbics, Carol; Bodkin, James L.
2005-01-01
We used molecular genetic methods to determine the sex of 138 sea otters (Enhydra lutris) harvested from 3 regions of Alaska from 1994 to 1997, to assess the accuracy of post‐harvest field‐sexing. We also tested each of a series of factors associated with errors in field‐sexing of sea otters, including male or female bias, age‐class bias, regional bias, and bias associated with hunt characteristics. Blind control results indicated that sex was determined with 100% accuracy using polymerase chain reaction (PCR) amplification using primers that co‐amplify the zinc finger‐Y‐X gene, located on both the mammalian Y‐ and X‐chromosomes, and Testes Determining Factor (TDF), located on the mammalian Y‐chromosome. DNA‐based sexing revealed that 12.3% of the harvested sea otters were incorrectly sexed in the field, with most errors (13 of 17) occurring as males incorrectly reported as females. Thus, female harvest was overestimated. Using logistic regression analysis, we detected no statistical association of incorrect determination of sex in the field with age class, hunt region, or hunt type. The error in field‐sexing appears to be random, at least with respect to the variables evaluated in this study.
Robust Statistical Detection of Power-Law Cross-Correlation
Blythe, Duncan A. J.; Nikulin, Vadim V.; Müller, Klaus-Robert
2016-06-02
We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram. PMID:27250630
Application of pedagogy reflective in statistical methods course and practicum statistical methods
NASA Astrophysics Data System (ADS)
Julie, Hongki
2017-08-01
The subjects Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip Mathematics Education students with descriptive and inferential statistics. Understanding descriptive and inferential statistics is important for students in the Mathematics Education Department, especially those whose final projects involve quantitative research. In quantitative research, students are required to present and describe quantitative data in an appropriate manner, to draw conclusions from their quantitative data, and to establish the relationships between the independent and dependent variables defined in their research. In practice, when students completed final projects involving quantitative research, it was not rare to find them making mistakes in the steps of drawing conclusions and in choosing the hypothesis-testing procedure; as a result, they reached incorrect conclusions. This is a serious mistake for anyone conducting quantitative research. The implementation of reflective pedagogy in the teaching and learning process of the Statistical Methods and Statistical Methods Practicum courses yielded several outcomes: 1. Twenty-two students passed the course and one student did not. 2. The highest grade achieved was an A, earned by 18 students. 3. All students reported that they were able to develop a critical stance and to build care for one another through the learning process in this course. 4. All students agreed that, through the learning process they underwent in the course, they could build care for each other.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data were tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data were normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
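The limit calculation behind such control charts can be sketched with a Shewhart individuals (X-mR) chart. The measurements below are invented for illustration, and 2.66 is the standard individuals-chart constant, not a value taken from the study.

```python
# Shewhart individuals-chart sketch: control limits from a baseline run,
# then a check of whether a new measurement signals a systematic error.
baseline = [1.02, 0.98, 1.00, 1.01, 0.99, 1.00, 1.02, 0.97, 1.01, 1.00]

# Moving ranges between consecutive baseline measurements
mr = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
mr_bar = sum(mr) / len(mr)
x_bar = sum(baseline) / len(baseline)

# Individuals-chart limits: x_bar +/- 2.66 * mean moving range
ucl = x_bar + 2.66 * mr_bar
lcl = x_bar - 2.66 * mr_bar

def out_of_control(x):
    """Flag a new measurement falling outside the process control limits."""
    return x < lcl or x > ucl
```

A deliberately shifted measurement (analogous to the intentional wedge misalignments above) is flagged when it escapes these limits, while in-control measurements are not.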
Sharp, J L; Gough, K; Pascoe, M C; Drosdowsky, A; Chang, V T; Schofield, P
2018-07-01
The Memorial Symptom Assessment Scale Short Form (MSAS-SF) is a widely used symptom assessment instrument. Patients who self-complete the MSAS-SF have difficulty following the two-part response format, resulting in incorrectly completed responses. We describe modifications to the response format to improve useability, and rational scoring rules for incorrectly completed items. The modified MSAS-SF was completed by 311 women in our Peer and Nurse support Trial to Assist women in Gynaecological Oncology; the PeNTAGOn study. Descriptive statistics were used to summarise completion of the modified MSAS-SF, and provide symptom statistics before and after applying the rational scoring rules. Spearman's correlations with the Functional Assessment for Cancer Therapy-General (FACT-G) and Hospital Anxiety and Depression Scale (HADS) were assessed. Correct completion of the modified MSAS-SF items ranged from 91.5 to 98.7%. The rational scoring rules increased the percentage of useable responses on average 4% across all symptoms. MSAS-SF item statistics were similar with and without the scoring rules. The pattern of correlations with FACT-G and HADS was compatible with prior research. The modified MSAS-SF was useable for self-completion and responses demonstrated validity. The rational scoring rules can minimise loss of data from incorrectly completed responses. Further investigation is recommended.
Incorrect Match Detection Method for Arctic Sea-Ice Reconstruction Using UAV Images
NASA Astrophysics Data System (ADS)
Kim, J.-I.; Kim, H.-C.
2018-05-01
Shapes and surface roughness, which are considered key indicators for understanding Arctic sea-ice, can be measured from the digital surface model (DSM) of the target area. Unmanned aerial vehicles (UAVs) flying at low altitudes theoretically enable accurate DSM generation. However, the characteristics of sea-ice, with its textureless surface and incessant motion, make image matching difficult for DSM generation. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of the search window to analyze the matching results of the generated DSM and distinguishes incorrect matches. Experimental results showed that the sea-ice DSM produced large errors along the textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.
How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?
West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S
2016-01-01
Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
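A toy illustration of one analytic error the authors describe, ignoring survey weights when estimating a population mean. The values and weights below are invented.

```python
# Ignoring survey weights: each weight says how many population units a
# respondent represents. The naive (unweighted) mean can differ sharply
# from the design-consistent weighted estimate.
values  = [10, 20, 30, 40]
weights = [4.0, 1.0, 1.0, 1.0]   # first respondent stands in for 4 units

unweighted_mean = sum(values) / len(values)
weighted_mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
# The heavily weighted low value pulls the correct estimate well below
# the naive one, the kind of bias the meta-analysis above documents.
```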
Basic statistics (the fundamental concepts).
Lim, Eric
2014-12-01
An appreciation and understanding of statistics is important to all practising clinicians, not simply researchers. This is because mathematics is the fundamental basis on which we base clinical decisions, usually with reference to benefit in relation to risk. Unless a clinician has a basic understanding of statistics, he or she will never be in a position to question healthcare management decisions that have been handed down from generation to generation, will not be able to conduct research effectively, nor evaluate the validity of published evidence (usually making an assumption that most published work is either all good or all bad). This article provides a brief introduction to basic statistical methods and illustrates their use in common clinical scenarios. In addition, pitfalls of incorrect usage have been highlighted. However, it is not meant to be a substitute for formal training or consultation with a qualified and experienced medical statistician prior to starting any research project.
Publication Bias ( The "File-Drawer Problem") in Scientific Inference
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; DeVincenzi, Donald (Technical Monitor)
1999-01-01
Publication bias arises whenever the probability that a study is published depends on the statistical significance of its results. This bias, often called the file-drawer effect since the unpublished results are imagined to be tucked away in researchers' file cabinets, is potentially a severe impediment to combining the statistical results of studies collected from the literature. With almost any reasonable quantitative model for publication bias, only a small number of studies lost in the file drawer will produce a significant bias. This result contradicts the well-known Fail Safe File Drawer (FSFD) method for setting limits on the potential harm of publication bias, widely used in social, medical and psychic research. This method incorrectly treats the file drawer as unbiased, and almost always misestimates the seriousness of publication bias. A large body of not only psychic research, but medical and social science studies, has mistakenly relied on this method to validate claimed discoveries. Statistical combination can be trusted only if it is known with certainty that all studies that have been carried out are included. Such certainty is virtually impossible to achieve in literature surveys.
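The core claim, that a selectively filled file drawer biases combined results even when the true effect is zero, can be sketched with a small Monte Carlo simulation. The setup is invented: one-sided "significance" at z > 1.645 decides publication.

```python
import random

random.seed(0)

# 10,000 simulated studies of a null effect: each study's z-score is a
# standard normal draw, so the true pooled effect is zero.
z_scores = [random.gauss(0.0, 1.0) for _ in range(10000)]

# The file-drawer effect: only one-sided "significant" studies get published.
published = [z for z in z_scores if z > 1.645]

pooled_all = sum(z_scores) / len(z_scores)          # near zero, as it should be
pooled_published = sum(published) / len(published)  # badly biased upward
```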
NASA Astrophysics Data System (ADS)
Bani-Salameh, Hisham N.
2017-01-01
We started this work with the goal of detecting misconceptions held by our students about force and motion. A total of 341 students participated in this study by taking the force concept inventory (FCI) test both before and after receiving instruction about force and motion. The data from this study were analysed using different statistical techniques; the results for answer frequencies and the dominant incorrect answers are reported in this paper. All misconceptions reported in the original paper of the designers of the FCI test (Hestenes et al 1992 Phys. Teach. 30 141-58) were examined and the results are reported. Only pre-test results are reported in this paper, leaving post-test data for future work. We used the modified version of the FCI containing 30 questions and therefore used the revised list of misconceptions. Problems with impetus and active force are among the most dominant ones found, with the full list reported in this paper.
Living systematic reviews: 3. Statistical methods for updating meta-analyses.
Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian
2017-11-01
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods, that is, law of the iterated logarithm and the Shuster method control primarily for inflation of type I error and two other methods, that is, trial sequential analysis and sequential meta-analysis control for type I and II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
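The type I error inflation described above can be demonstrated with a quick simulation. This is an illustrative sketch of repeated cumulative z-testing under a zero true effect, not an implementation of any of the four correction methods the paper compares.

```python
import math
import random

random.seed(1)

def prob_false_positive(n_updates, trials=2000):
    """Estimate the chance that a cumulative z-test crosses 1.96 at ANY
    update when the true effect is zero (each update adds one study)."""
    hits = 0
    for _ in range(trials):
        total = 0.0
        for k in range(1, n_updates + 1):
            total += random.gauss(0.0, 1.0)      # one new null study
            if abs(total / math.sqrt(k)) > 1.96:  # naive test at each update
                hits += 1
                break
    return hits / trials

p_single = prob_false_positive(1)    # ~0.05 by construction
p_living = prob_false_positive(10)   # noticeably inflated after 10 updates
```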
Wilcox, Rand; Carlson, Mike; Azen, Stan; Clark, Florence
2013-03-01
Recently, there have been major advances in statistical techniques for assessing central tendency and measures of association. The practical utility of modern methods has been documented extensively in the statistics literature, but they remain underused and relatively unknown in clinical trials. Our objective was to address this issue. STUDY DESIGN AND PURPOSE: The first purpose was to review common problems associated with standard methodologies (low power, lack of control over type I errors, and incorrect assessments of the strength of the association). The second purpose was to summarize some modern methods that can be used to circumvent such problems. The third purpose was to illustrate the practical utility of modern robust methods using data from the Well Elderly 2 randomized controlled trial. In multiple instances, robust methods uncovered differences among groups and associations among variables that were not detected by classic techniques. In particular, the results demonstrated that details of the nature and strength of the association were sometimes overlooked when using ordinary least squares regression and Pearson correlation. Modern robust methods can make a practical difference in detecting and describing differences between groups and associations between variables. Such procedures should be applied more frequently when analyzing trial-based data. Copyright © 2013 Elsevier Inc. All rights reserved.
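One simple example of the robust measures of central tendency discussed above is a trimmed mean; the data here are invented to show how a single gross outlier distorts the ordinary mean but not the robust estimate.

```python
def trimmed_mean(xs, proportion=0.2):
    """Discard the lowest and highest `proportion` of observations, then
    average the remainder (a 20% trimmed mean by default)."""
    xs = sorted(xs)
    k = int(len(xs) * proportion)
    trimmed = xs[k:len(xs) - k]
    return sum(trimmed) / len(trimmed)

data = [2, 3, 3, 4, 4, 5, 5, 6, 6, 120]   # one gross outlier
plain_mean = sum(data) / len(data)        # dragged far from the bulk of data
robust_mean = trimmed_mean(data)          # stays near the typical values
```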
Best practices for evaluating single nucleotide variant calling methods for microbial genomics
Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.
2015-01-01
Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific Communit’s focus for use in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we will provide an overview of the variant calling procedures and the potential sources of error associated with the methods. We will then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
US science academy expands misconduct definition
NASA Astrophysics Data System (ADS)
Gwynne, Peter
2017-06-01
The US National Academy of Sciences (NAS) has updated its misconduct guidelines, reclassifying the misleading use of statistics, failure to retain data and incorrect authorship of papers as “detrimental” rather than merely “questionable”.
Statistical Uncertainty in the Medicare Shared Savings Program
DeLia, Derek; Hoover, Donald; Cantor, Joel C.
2012-01-01
Objective Analyze statistical risks facing CMS and Accountable Care Organizations (ACOs) under the Medicare Shared Savings Program (MSSP). Methods We calculate the probability that shared savings formulas lead to inappropriate payment, payment denial, and/or financial penalties, assuming that ACOs generate real savings in Medicare spending ranging from 0–10%. We also calculate expected payments from CMS to ACOs under these scenarios. Results The probability of an incorrect outcome is heavily dependent on ACO enrollment size. For example, in the MSSP two-sided model, an ACO with 5,000 enrollees that keeps spending constant faces a 0.24 probability of being inappropriately rewarded for savings and a 0.26 probability of paying an undeserved penalty for increased spending. For an ACO with 50,000 enrollees, both of these probabilities of incorrect outcomes are equal to 0.02. The probability of inappropriate payment denial declines as real ACO savings increase. Still, for ACOs with 5,000 patients, the probability of denial is at least 0.15 even when true savings are 5–7%. Depending on ACO size and the real ACO savings rate, expected ACO payments vary from $115,000 to $35.3 million. Discussion Our analysis indicates there may be greater statistical uncertainty in the MSSP than previously recognized. CMS and ACOs will have to consider this uncertainty in their financial, administrative, and care management planning. We also suggest analytic strategies that can be used to refine ACO payment formulas in the longer term to ensure that the MSSP (and other ACO initiatives that will be influenced by it) work as efficiently as possible. PMID:24800155
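A stylized version of the payment-denial risk analysed above: observed savings are modeled as true savings plus normal noise, and payment requires observed savings above a minimum savings rate (MSR). The noise model, sigma values, and MSR below are illustrative assumptions, not the CMS formulas.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_payment_denied(true_savings, msr, sigma):
    """P(observed savings < MSR | real savings), with N(0, sigma) noise."""
    return normal_cdf((msr - true_savings) / sigma)

# Smaller ACOs have noisier spending estimates (larger sigma), so a real
# 5% savings is denied far more often than for a large ACO.
p_small = prob_payment_denied(0.05, 0.02, 0.04)  # small ACO: sigma = 4%
p_large = prob_payment_denied(0.05, 0.02, 0.01)  # large ACO: sigma = 1%
```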
Registered nurses' medication management of the elderly in aged care facilities.
Lim, L M; Chiu, L H; Dohrmann, J; Tan, K-L
2010-03-01
Data on adverse drug reactions (ADRs) showed a rising trend in the elderly over 65 years using multiple medications. To identify registered nurses' (RNs) knowledge of medication management and ADRs in the elderly in aged care facilities; evaluate an education programme to increase pharmacology knowledge and prevent ADRs in the elderly; and develop a learning programme with a view to extending provision, if successful. This exploratory study used a non-randomized pre- and post-test one group quasi-experimental design without comparators. It comprised a 23-item knowledge-based test questionnaire, a one-hour teaching session and a self-directed learning package. The volunteer sample was RNs from residential aged care facilities, involved in medication management. Participants sat a pre-test immediately before the education, and a post-test 4 weeks later (same questionnaire). Participants' perceptions were also obtained. Pre-test sample n = 58, post-test n = 40, an attrition rate of 31%. Using Microsoft Excel 2000, descriptive statistical analysis of overall pre- and post-test incorrect responses showed: pre-test proportion of incorrect responses = 0.40; post-test proportion of incorrect responses = 0.27; Z-test comparing pre- and post-test scores of incorrect responses = 6.55 and one-sided P-value = 2.8E-11 (P < 0.001). The pre-test showed knowledge deficits in medication management and ADRs in the elderly; the post-test showed statistically significant improvement in RNs' knowledge. It highlighted a need for continuing professional education. Further studies are required on a larger sample of RNs in other aged care facilities, and on the clinical impact of education by investigating nursing practice and elderly residents' outcomes.
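The pre/post comparison reported above looks like a pooled two-proportion z-test. The sketch below uses the reported proportions with assumed item-level counts (23 items per questionnaire, a guess not stated by the abstract), so it only roughly approximates the reported z = 6.55.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test statistic."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed item-level denominators: participants x 23 questionnaire items.
n_pre, n_post = 58 * 23, 40 * 23
z = two_proportion_z(0.40, n_pre, 0.27, n_post)  # in the ballpark of 6.55
```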
Computer algorithm for analyzing and processing borehole strainmeter data
Langbein, John O.
2010-01-01
The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of simplifying assumptions that data are uncorrelated can lead to incorrect results and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent with these data. The technique described here uses least squares but employs data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected, tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
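The cost of assuming uncorrelated data, the central point of this abstract, can be quantified for a simple case: the large-sample standard error of the mean of AR(1)-correlated data is inflated by roughly sqrt((1 + rho) / (1 - rho)). The formula and numbers below are a generic illustration, not the paper's covariance model for strainmeter noise.

```python
import math

def naive_se(sigma, n):
    """SE of the mean assuming independent samples."""
    return sigma / math.sqrt(n)

def ar1_se(sigma, n, rho):
    """Approximate SE of the mean of n AR(1)-correlated samples with
    lag-one correlation rho (large-sample inflation factor)."""
    return sigma / math.sqrt(n) * math.sqrt((1 + rho) / (1 - rho))

se_naive = naive_se(1.0, 1000)          # what an uncorrelated model reports
se_correlated = ar1_se(1.0, 1000, 0.9)  # roughly 4.4x larger at rho = 0.9
```

With strongly correlated noise the naive error bars are several times too small, which is exactly why apparent "tectonic signals" can be statistical artifacts.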
Validation of a Projection-domain Insertion of Liver Lesions into CT Images
Chen, Baiyu; Ma, Chi; Leng, Shuai; Fidler, Jeff L.; Sheedy, Shannon P.; McCollough, Cynthia H.; Fletcher, Joel G.; Yu, Lifeng
2016-01-01
Rationale and Objectives The aim of this study was to validate a projection-domain lesion-insertion method with observer studies. Materials and Methods A total of 51 proven liver lesions were segmented from computed tomography images, forward projected, and inserted into patient projection data. The images containing inserted and real lesions were then reconstructed and examined in consensus by two radiologists. First, 102 lesions (51 original, 51 inserted) were viewed in a randomized, blinded fashion and scored from 1 (absolutely inserted) to 10 (absolutely real). Statistical tests were performed to compare the scores for inserted and real lesions. Subsequently, a two-alternative-forced-choice test was conducted, with lesions viewed in pairs (real vs. inserted) in a blinded fashion. The radiologists selected the inserted lesion and provided a confidence level of 1 (no confidence) to 5 (completely certain). The number of lesion pairs that were incorrectly classified was calculated. Results The scores for inserted and proven lesions had the same median (8) and similar interquartile ranges (inserted, 5.5–8; real, 6.5–8). The mean scores were not significantly different between real and inserted lesions (P value = 0.17). The receiver operating characteristic curve was nearly diagonal, with an area under the curve of 0.58 ± 0.06. For the two-alternative-forced-choice study, the inserted lesions were incorrectly identified in 49% (25 out of 51) of pairs; radiologists were incorrect in 38% (3 out of 8) of pairs even when they felt very confident in identifying the inserted lesion (confidence level ≥4). Conclusions Radiologists could not distinguish between inserted and real lesions, thereby validating the lesion-insertion technique, which may be useful for conducting virtual clinical trials to optimize image quality and radiation dose. PMID:27432267
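The two-alternative-forced-choice result above (25 of 51 pairs misclassified) is essentially what chance-level guessing predicts, which a quick exact binomial check makes concrete:

```python
import math

def binom_cdf(k, n, p=0.5):
    """Exact binomial CDF: P(X <= k) for n trials with success prob p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability of 25 or fewer misclassifications out of 51 pairs if the
# readers were guessing (p = 0.5): by symmetry this is exactly one half,
# so the observed 49% error rate is indistinguishable from chance.
p_tail = binom_cdf(25, 51)
```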
Efficiency of nuclear and mitochondrial markers recovering and supporting known amniote groups.
Lambret-Frotté, Julia; Perini, Fernando Araújo; de Moraes Russo, Claudia Augusta
2012-01-01
We have analysed the efficiency of all mitochondrial protein coding genes and six nuclear markers (Adora3, Adrb2, Bdnf, Irbp, Rag2 and Vwf) in reconstructing and statistically supporting known amniote groups (murines, rodents, primates, eutherians, metatherians, therians). The efficiencies of maximum likelihood, Bayesian inference, maximum parsimony, neighbor-joining and UPGMA were also evaluated, by assessing the number of correct and incorrect recovered groupings. In addition, we have compared support values using the conservative bootstrap test and the Bayesian posterior probabilities. First, no correlation was observed between gene size and marker efficiency in recovering or supporting correct nodes. As expected, tree-building methods performed similarly, even UPGMA that, in some cases, outperformed other most extensively used methods. Bayesian posterior probabilities tend to show much higher support values than the conservative bootstrap test, for correct and incorrect nodes. Our results also suggest that nuclear markers do not necessarily show a better performance than mitochondrial genes. The so-called dependency among mitochondrial markers was not observed comparing genome performances. Finally, the amniote groups with lowest recovery rates were therians and rodents, despite the morphological support for their monophyletic status. We suggest that, regardless of the tree-building method, a few carefully selected genes are able to unfold a detailed and robust scenario of phylogenetic hypotheses, particularly if taxon sampling is increased.
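The bootstrap that the abstract contrasts with Bayesian posterior probabilities is, at heart, nonparametric resampling. The sketch below bootstraps a percentile interval for the mean of an invented sample rather than phylogenetic tree support, purely to show the mechanics.

```python
import random

random.seed(42)
sample = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.2, 5.1, 4.0, 4.6]

# Resample with replacement many times, recording the statistic each time.
boot_means = []
for _ in range(2000):
    resample = [random.choice(sample) for _ in sample]
    boot_means.append(sum(resample) / len(resample))

# Approximate 95% percentile interval from the bootstrap distribution.
boot_means.sort()
ci_low, ci_high = boot_means[49], boot_means[1949]
```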
The use and misuse of statistical methodologies in pharmacology research.
Marino, Michael J
2014-01-01
Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods often chosen more by the availability of user-friendly software than by any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α<0.05 criterion has hampered research via the publication of incorrect analysis driven by rudimentary statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality and confidence in research findings, fueling recent controversies over the reproducibility of high profile findings and effects that appear to diminish over time. The recent development of "omics" approaches leading to the production of massive higher dimensional data sets has amplified these issues making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests that hopefully will increase the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.
Verification of road databases using multiple road models
NASA Astrophysics Data System (ADS)
Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian
2017-08-01
In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that based on the proposed method a highly reliable semi-automatic approach for road database verification can be designed.
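The Dempster-Shafer combination step can be sketched with Dempster's rule over the frame {correct, incorrect}, with "unknown" represented as mass on the whole frame. The mass assignments below are invented for illustration and are not the paper's module outputs.

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys
    are frozenset focal elements over a common frame."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2   # mass on disjoint focal elements
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

C, I = frozenset(["correct"]), frozenset(["incorrect"])
U = C | I   # "unknown": mass assigned to the whole frame

module1 = {C: 0.6, I: 0.1, U: 0.3}   # two verification modules' beliefs
module2 = {C: 0.5, I: 0.2, U: 0.3}
fused = combine(module1, module2)     # agreement on "correct" is reinforced
```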
A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based, algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
Heimann, G; Neuhaus, G
1998-03-01
In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
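The correct approach the authors derive rests on enumerating the full permutational distribution. The sketch below does this for a simple difference in means rather than the log-rank statistic, purely to show the mechanics of an exact permutation test with small samples.

```python
import itertools

def exact_permutation_p(group_a, group_b):
    """Exact two-sided permutation p-value for a difference in means:
    enumerate every relabeling of the pooled observations into groups."""
    pooled = group_a + group_b
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        stat = abs(sum(a) / len(a) - sum(b) / len(b))
        total += 1
        if stat >= observed:
            count += 1
    return count / total

p = exact_permutation_p([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # 2 of 20 relabelings
```

Because every relabeling is enumerated, the reference distribution is exact, which is precisely the property the incorrect "exact" methods discussed above fail to deliver.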
Negative Marking and the Student Physician–-A Descriptive Study of Nigerian Medical Schools
Ndu, Ikenna Kingsley; Ekwochi, Uchenna; Di Osuorah, Chidiebere; Asinobi, Isaac Nwabueze; Nwaneri, Michael Osita; Uwaezuoke, Samuel Nkachukwu; Amadi, Ogechukwu Franscesca; Okeke, Ifeyinwa Bernadette; Chinawa, Josephat Maduabuchi; Orjioke, Casmir James Ginikanwa
2016-01-01
Background There is considerable debate about the two most commonly used scoring methods, namely, the formula scoring (popularly referred to as negative marking method in our environment) and number right scoring methods. Although the negative marking scoring system attempts to discourage students from guessing in order to increase test reliability and validity, there is the view that it is an excessive and unfair penalty that also increases anxiety. Feedback from students is part of the education process; thus, this study assessed the perception of medical students about negative marking method for multiple choice question (MCQ) examination formats and also the effect of gender and risk-taking behavior on scores obtained with this assessment method. Methods This was a prospective multicenter survey carried out among fifth year medical students in Enugu State University and the University of Nigeria. A structured questionnaire was administered to 175 medical students from the two schools, while a class test was administered to medical students from Enugu State University. Qualitative statistical methods including frequencies, percentages, and chi-square were used to analyze categorical variables. Quantitative statistics using analysis of variance was used to analyze continuous variables. Results Inquiry into assessment format revealed that most of the respondents preferred MCQs (65.9%). One hundred and thirty students (74.3%) had an unfavorable perception of negative marking. Thirty-nine students (22.3%) agreed that negative marking reduces the tendency to guess and increases the validity of MCQs examination format in testing knowledge content of a subject compared to 108 (61.3%) who disagreed with this assertion (χ2 = 23.0, df = 1, P = 0.000). The median score of the students who were not graded with negative marking was significantly higher than the score of the students graded with negative marking (P = 0.001).
There was no statistically significant difference in the risk-taking behavior between male and female students in their MCQ answering patterns with negative marking method (P = 0.618). Conclusions In the assessment of students, it is more desirable to adopt fair penalties for discouraging guessing rather than excessive penalties for incorrect answers, which could intimidate students in negative marking schemes. There is no consensus on the penalty for an incorrect answer. Thus, there is a need for continued research into an effective and objective assessment tool that will ensure that the students’ final score in a test truly represents their level of knowledge. PMID:29349304
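The formula-scoring ("negative marking") rule the study debates is conventionally S = R − W/(k − 1) for k-option items, with the penalty chosen so that blind guessing has an expected score of zero. A minimal sketch (function names and the worked numbers are illustrative, not taken from the study):

```python
def formula_score(right: int, wrong: int, options: int = 4) -> float:
    """Classic formula score: right answers minus a guessing penalty.

    With k options per item, a pure guesser expects one right answer for
    every (k - 1) wrong ones, so the penalty W / (k - 1) gives blind
    guessing an expected score of zero.
    """
    return right - wrong / (options - 1)

def number_right_score(right: int) -> int:
    """Number-right scoring simply counts correct answers."""
    return right

# A student with 60 right and 20 wrong on 4-option MCQs:
print(formula_score(60, 20))       # 60 - 20/3 ≈ 53.33
print(number_right_score(60))      # 60
# A pure guesser on 40 items (expected 10 right, 30 wrong) scores zero:
print(formula_score(10, 30))       # 0.0
```

This makes the disagreement in the abstract concrete: the two rules rank students differently only through the wrong-answer count, which is why risk attitude can affect formula scores.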
Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min
2017-01-01
Objective To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA) and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary. PMID:29089821
Kuchenbecker, J; Blum, M; Paul, F
2016-03-01
In acute unilateral optic neuritis (ON) color vision defects combined with a decrease in visual acuity and contrast sensitivity frequently occur. This study investigated whether a web-based color vision test is a reliable detector of acquired color vision defects in ON and, if so, which charts are particularly suitable. In 12 patients with acute unilateral ON, a web-based color vision test (www.farbsehtest.de) with 25 color plates (16 Velhagen/Broschmann and 9 Ishihara color plates) was performed. For each patient the affected eye was tested first and then the unaffected eye. The mean best-corrected distance visual acuity (BCDVA) was 0.36 ± 0.20 in the ON eye and 1.0 ± 0.1 in the contralateral eye. The number of incorrectly read plates correlated with the visual acuity. For the ON eye a total of 134 plates were correctly identified and 166 plates were incorrectly identified, while for the disease-free fellow eye, 276 plates were correctly identified and 24 plates were incorrectly identified. The two blue/yellow plates were identified correctly 14 times and incorrectly 10 times with the ON eye, and exclusively correctly (24 times) with the fellow eye. The Velhagen/Broschmann plates were incorrectly identified significantly more frequently in comparison with the Ishihara plates. In 4 out of 16 Velhagen/Broschmann plates and 5 out of 9 Ishihara plates, no statistically significant differences between the ON eye and the fellow eye could be detected. The number of incorrectly identified plates correlated with a decrease in visual acuity. Red/green and blue/yellow plates were incorrectly identified significantly more frequently with the ON eye, while the Velhagen/Broschmann color plates were incorrectly identified significantly more frequently than the Ishihara color plates. Thus, under defined test conditions the web-based color vision test can also be used to detect acquired color vision defects, such as those caused by ON.
Optimization of the test by altering the combination of plates may be a useful next step.
Pitfalls of national routine death statistics for maternal mortality study.
Saucedo, Monica; Bouvier-Colle, Marie-Hélène; Chantry, Anne A; Lamarche-Vadel, Agathe; Rey, Grégoire; Deneux-Tharaux, Catherine
2014-11-01
The lessons learned from the study of maternal deaths depend on the accuracy of data. Our objective was to assess time trends in the underestimation of maternal mortality (MM) in the national routine death statistics in France and to evaluate their current accuracy for the selection and causes of maternal deaths. National data obtained by enhanced methods in 1989, 1999, and 2007-09 were used as the gold standard to assess time trends in the underestimation of MM ratios (MMRs) in death statistics. Enhanced data and death statistics for 2007-09 were further compared by characterising false negatives (FNs) and false positives (FPs). The distribution of cause-specific MMRs, as assessed by each system, was described. Underestimation of MM in death statistics decreased from 55.6% in 1989 to 11.4% in 2007-09 (P < 0.001). In 2007-09, of 787 pregnancy-associated deaths, 254 were classified as maternal by the enhanced system and 211 by the death statistics; 34% of maternal deaths in the enhanced system were FNs in the death statistics, and 20% of maternal deaths in the death statistics were FPs. The hierarchy of causes of MM differed between the two systems. The discordances were mainly explained by the lack of precision in the drafting of death certificates by clinicians. Although the underestimation of MM in routine death statistics has decreased substantially over time, one third of maternal deaths remain unidentified, and the main causes of death are incorrectly identified in these data. Defining relevant priorities in maternal health requires the use of enhanced methods for MM study. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Heitzig, Jobst; Kornek, Ulrike
2018-06-01
In the PDF version of this Article originally published, in equation (6) gi' was incorrectly formatted as gi', and at the end of the Methods section wi was incorrectly formatted as wi. These have now been corrected.
Accurate mass measurement: terminology and treatment of data.
Brenton, A Gareth; Godfrey, A Ruth
2010-11-01
High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.
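One of the most common quantities covered by accurate-mass terminology is the relative mass measurement error expressed in parts per million. A minimal sketch (the caffeine [M+H]+ value is a standard textbook example, not taken from this article):

```python
def mass_error_ppm(measured: float, exact: float) -> float:
    """Relative mass measurement error in parts per million (ppm)."""
    return (measured - exact) / exact * 1e6

# Protonated caffeine [M+H]+, exact monoisotopic m/z ≈ 195.0877:
print(round(mass_error_ppm(195.0882, 195.0877), 2))  # 2.56
print(mass_error_ppm(100.0, 100.0))                  # 0.0
```

Expressing error relative to the exact mass (rather than as an absolute mDa difference) is what makes accuracy specifications comparable across the m/z range, which is one of the terminology points the article addresses.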
A statistical framework for protein quantitation in bottom-up MS-based proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
ABSTRACT Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu
Modeling Incorrect Responses to Multiple-Choice Items with Multilinear Formula Score Theory.
ERIC Educational Resources Information Center
Drasgow, Fritz; And Others
This paper addresses the information revealed in incorrect option selection on multiple choice items. Multilinear Formula Scoring (MFS), a theory providing methods for solving psychological measurement problems of long standing, is first used to estimate option characteristic curves for the Armed Services Vocational Aptitude Battery Arithmetic…
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
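The pitfall the article describes can be checked numerically: the simple form ln n! ≈ n ln n − n has an absolute error that grows like 0.5 ln(2πn), while adding that correction term leaves an error shrinking as 1/(12n). A quick check (function names are illustrative):

```python
import math

def ln_factorial(n):
    return math.lgamma(n + 1)  # exact ln(n!) via the log-gamma function

def stirling_simple(n):
    """The form usually quoted to students: ln n! ≈ n ln n - n."""
    return n * math.log(n) - n

def stirling_full(n):
    """Stirling with the 0.5 * ln(2 * pi * n) correction term."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

# The simple form's absolute error GROWS with n; the corrected form's shrinks:
for n in (10, 100, 1000):
    exact = ln_factorial(n)
    print(n, round(exact - stirling_simple(n), 4),
          round(exact - stirling_full(n), 6))
```

Because the simple form's absolute error never vanishes, naive use of it outside the large-n entropy context (where only the leading terms matter) can produce the incorrect conclusions the article discusses.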
The problem of pseudoreplication in neuroscientific studies: is it affecting your analysis?
2010-01-01
Background Pseudoreplication occurs when observations are not statistically independent, but treated as if they are. This can occur when there are multiple observations on the same subjects, when samples are nested or hierarchically organised, or when measurements are correlated in time or space. Analysis of such data without taking these dependencies into account can lead to meaningless results, and examples can easily be found in the neuroscience literature. Results A single issue of Nature Neuroscience provided a number of examples and is used as a case study to highlight how pseudoreplication arises in neuroscientific studies and why the analyses in these papers are incorrect; appropriate analytical methods are also provided. 12% of papers had pseudoreplication and a further 36% were suspected of having pseudoreplication, but it was not possible to determine for certain because insufficient information was provided. Conclusions Pseudoreplication can undermine the conclusions of a statistical analysis, and it would be easier to detect if the sample size, degrees of freedom, the test statistic, and precise p-values were reported. This information should be a requirement for all publications. PMID:20074371
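The inflation pseudoreplication causes can be demonstrated with a small simulation: under the null hypothesis, a two-sample t statistic computed over all repeated measurements spreads far more widely than the t reference distribution assumes, whereas analysing one mean per subject behaves correctly. This is a sketch under assumed parameter values (8 subjects per group, 10 repeats each, equal subject-level and measurement noise variance), not drawn from the paper:

```python
import random
import statistics

random.seed(1)

def t_statistic(a, b):
    """Welch-style two-sample t statistic."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (ma - mb) / se

def simulate(naive, n_subj=8, n_rep=10, subj_sd=1.0, noise_sd=1.0):
    """One null-hypothesis dataset: two groups of subjects, repeated measures.
    naive=True treats every repeat as an independent observation;
    naive=False analyses one mean per subject (the correct unit)."""
    groups = []
    for _ in range(2):
        obs, means = [], []
        for _ in range(n_subj):
            u = random.gauss(0, subj_sd)  # subject-level random effect
            reps = [u + random.gauss(0, noise_sd) for _ in range(n_rep)]
            obs.extend(reps)
            means.append(statistics.mean(reps))
        groups.append(obs if naive else means)
    return t_statistic(*groups)

naive_ts = [simulate(True) for _ in range(500)]
proper_ts = [simulate(False) for _ in range(500)]
# Under the null the t statistic should have sd near 1; pseudoreplication
# inflates it, so naive "significant" results appear far too often.
print(round(statistics.stdev(naive_ts), 2))
print(round(statistics.stdev(proper_ts), 2))
```

With these assumed variances, the within-subject correlation makes the naive standard error several times too small, which is exactly the mechanism behind the meaningless p-values the paper documents.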
In Flight Calibration of the Magnetospheric Multiscale Mission Fast Plasma Investigation
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Rager, Amy C.; Schiff, Conrad; Pollock, Craig J.
2015-01-01
The Fast Plasma Investigation (FPI) on the Magnetospheric Multiscale mission (MMS) combines data from eight spectrometers, each with four deflection states, into a single map of the sky. Any systematic discontinuity, artifact, or noise source present in this map may be incorrectly interpreted as legitimate data, leading to incorrect conclusions. For this reason it is desirable to have all spectrometers return the same output for a given input, and for this output to be low in noise sources or other errors. While many missions use statistical analyses of data to calibrate instruments in flight, this process is insufficient for FPI for two reasons: (1) only a small fraction of high-resolution data is downloaded to the ground due to bandwidth limitations, and (2) the data that are downloaded are, by definition, scientifically interesting and therefore not ideal for calibration. FPI uses a suite of new tools to calibrate in flight. A new method for detection system ground calibration has been developed involving sweeping the detection threshold to fully define the pulse height distribution. This method has now been extended for use in flight as a means to calibrate the MCP voltage and threshold (together forming the operating point) of the Dual Electron Spectrometers (DES) and Dual Ion Spectrometers (DIS). A method of comparing higher energy data (which has a low fractional voltage error) to lower energy data (which has a higher fractional voltage error) will be used to calibrate the high voltage outputs. Finally, a comparison of pitch angle distributions will be used to find remaining discrepancies among sensors.
Austin, Peter C; Goldwasser, Meredith A
2008-03-01
We examined the impact on statistical inference when a χ2 test is used to compare the proportion of successes in the level of a categorical variable that has the highest observed proportion of successes with the proportion of successes in all other levels of the categorical variable combined. Methods comprised Monte Carlo simulations and a case study examining the association between astrological sign and hospitalization for heart failure. A standard χ2 test results in an inflation of the type I error rate, with the type I error rate increasing as the number of levels of the categorical variable increases. Using a standard χ2 test, the hospitalization rate for Pisces was statistically significantly different from that of the other 11 astrological signs combined (P=0.026). After accounting for the fact that Pisces was selected because it had the highest observed proportion of heart failure hospitalizations, subjects born under the sign of Pisces no longer had a significantly higher rate of heart failure hospitalization than the other residents of Ontario (P=0.152). Post hoc comparisons of the proportions of successes across different levels of a categorical variable can result in incorrect inferences.
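The selection effect the authors describe is easy to reproduce by simulation: under a null of equal success rates, picking the level with the highest observed proportion and chi-square-testing it against the pooled remainder rejects far more often than the nominal 5%. A sketch with illustrative parameters (12 levels, 100 observations each, true rate 0.1 everywhere), not the paper's actual design:

```python
import random

random.seed(2)

def max_level_chi2(n_levels=12, n_per_level=100, p=0.1):
    """Simulate one null dataset (every level has success rate p), pick the
    level with the highest observed success proportion, and return the 2x2
    chi-square statistic for that level versus all other levels pooled."""
    successes = [sum(random.random() < p for _ in range(n_per_level))
                 for _ in range(n_levels)]
    best = max(range(n_levels), key=lambda k: successes[k])
    s1, n1 = successes[best], n_per_level
    s2, n2 = sum(successes) - s1, n_per_level * (n_levels - 1)
    total_n, total_s = n1 + n2, s1 + s2
    observed = [[s1, n1 - s1], [s2, n2 - s2]]
    expected = [[n1 * total_s / total_n, n1 * (total_n - total_s) / total_n],
                [n2 * total_s / total_n, n2 * (total_n - total_s) / total_n]]
    return sum((observed[r][c] - expected[r][c]) ** 2 / expected[r][c]
               for r in range(2) for c in range(2))

# Fraction of null simulations declared "significant" at the 5% level
# (chi-square critical value 3.841 with 1 df):
rejections = sum(max_level_chi2() > 3.841 for _ in range(1000)) / 1000
print(rejections)  # far above the nominal 0.05
```

The inflation arises because the tested hypothesis was chosen after looking at the data; as the abstract notes, it worsens as the number of levels grows, since the maximum of more levels drifts further above the common rate.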
Mignot, A; Ferrari, R; Claustre, H
2018-05-04
In the original version of this Article, the data accession https://doi.org/10.17882/42182 was omitted from the Data Availability statement. In the first paragraph of the Methods subsection entitled 'Float data processing', the WET Labs ECO-triplet fluorometer was incorrectly referred to as 'WETLabs ECO PUK'. In the final paragraph of this subsection, the WET Labs ECO-series fluorometer was incorrectly referred to as 'WETLabs 413 ECO-series'. In the Methods subsection 'Float estimates of phytoplankton carbon biomass', the average particulate organic carbon-bbp ratio of 37,537 mgC m−2 was incorrectly given as 37,357 mgC m−2. In the second paragraph of the Methods subsection 'Float estimates of population division rates', the symbol for Celsius (C) was omitted from the phrase 'a 10°C increase in temperature'. These errors have now been corrected in the PDF and HTML versions of the Article.
Toward Improving Research in Social Studies Education. SSEC Monograph Series.
ERIC Educational Resources Information Center
Fraenkel, Jack R.; Wallen, Norman E.
Social studies research has been criticized for sampling bias, inappropriate methodologies, incorrect or inappropriate use of statistics, weak or ill-defined treatments, and lack of replication and/or longitudinal follow-up. In an effort to ascertain whether past criticisms were true of current research as well, a review was conducted of 118…
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Appropriate Statistics for Determining Chance-Removed Interpractitioner Agreement.
Popplewell, Michael; Reizes, John; Zaslawski, Chris
2018-05-31
Fleiss' Kappa (FK) has been commonly, but incorrectly, employed as the "standard" for evaluating chance-removed inter-rater agreement with ordinal data. This practice may lead to misleading conclusions in inter-rater agreement research. An example is presented that demonstrates the conditions where FK produces inappropriate results, compared with Gwet's AC2, which is proposed as a more appropriate statistic. A novel format for recording Chinese Medicine (CM) diagnoses, called the Diagnostic System of Oriental Medicine (DSOM), was used to record and compare patient diagnostic data, which, unlike the contemporary CM diagnostic format, allows agreement by chance to be considered when evaluating patient data obtained with unrestricted diagnostic options available to diagnosticians. Five CM practitioners diagnosed 42 subjects drawn from an open population. Subjects' diagnoses were recorded using the DSOM format. All the available data were initially used to evaluate agreement. Then, the subjects were sorted into three groups to demonstrate the effects of differing data marginality on the calculated chance-removed agreement. Agreement between the practitioners for each subject was evaluated with linearly weighted simple agreement, FK, and Gwet's AC2. In all cases, overall agreement was much lower with FK than with Gwet's AC2. Larger differences occurred when the data were more free-marginal. Inter-rater agreement determined with the FK statistic is unlikely to be correct unless it can be shown that the data from which agreement is determined are, in fact, fixed-marginal. It follows that results obtained on agreement between practitioners with FK are probably incorrect. It is shown that inter-rater agreement evaluated with the AC2 statistic is an appropriate measure when fixed-marginal data are neither expected nor guaranteed. The AC2 statistic should be used as the standard statistical approach for determining agreement between practitioners.
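The marginal dependence at issue generalizes the well-known kappa "paradox": high raw agreement can yield a near-zero (even negative) kappa when the marginals are skewed. As a simplified two-rater, two-category analog of the FK-versus-AC2 comparison, the following contrasts Cohen's kappa with Gwet's AC1 (the nominal-data relative of AC2); the counts are invented for illustration:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table:
    a = both raters say 'yes', b/c = disagreements, d = both say 'no'."""
    n = a + b + c + d
    pa = (a + d) / n                       # observed agreement
    p1_yes, p2_yes = (a + b) / n, (a + c) / n
    pe = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)  # chance agreement
    return (pa - pe) / (1 - pe)

def gwet_ac1(a, b, c, d):
    """Gwet's AC1 for the same table; its chance term depends on the mean
    marginal, so it is stable when the marginals are highly skewed."""
    n = a + b + c + d
    pa = (a + d) / n
    pi = ((a + b) / n + (a + c) / n) / 2   # mean 'yes' marginal
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# 90% raw agreement, but almost everything is rated 'yes' (skewed marginals):
print(cohen_kappa(90, 5, 5, 0))  # ≈ -0.05: the kappa paradox
print(gwet_ac1(90, 5, 5, 0))     # ≈ 0.89: consistent with the raw agreement
```

With balanced marginals the two statistics coincide; the divergence under skew is the free-marginal effect the authors demonstrate with FK and AC2.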
Rotary pin-in-maze discriminator
Benavides, Gilbert L.
1997-01-01
A discriminator apparatus and method that discriminates between a unique signal and any other (incorrect) signal. The unique signal is a sequence of events; each event can assume one of two possible event states. Given the unique signal, a maze wheel is allowed to rotate fully in one direction. Given an incorrect signal, both the maze wheel and a pin wheel lock in position.
A method for identifying color vision deficiency malingering.
Pouw, Andrew; Karanjia, Rustum; Sadun, Alfredo
2017-03-01
To propose a new test to identify color vision deficiency malingering. An online survey was distributed to 130 truly color vision deficient participants and 160 participants willing to simulate color vision deficiency. The survey contained three sets of six color-adjusted versions of the standard Ishihara color plates each, as well as one set of six control plates. The plates that best discriminated both participant groups were selected for a "balanced" test emphasizing both sensitivity and specificity. A "specific" test that prioritized high specificity was also created by selecting from these plates. Statistical measures of the test (sensitivity, specificity, and Youden index) were assessed at each possible cut-off threshold, and a receiver operating characteristic (ROC) function with its area under the curve (AUC) charted. The redshift plate set was identified as having the highest difference of means between groups (-58%, CI: -64 to -52%), as well as the widest gap between group modes. Statistical measures of the "balanced" test show an optimal cut-off of at least two incorrectly identified plates to suggest malingering (Youden index: 0.773, sensitivity: 83.3%, specificity: 94.0%, AUC of ROC 0.918). The "specific" test was able to identify color vision deficiency simulators with a specificity of 100% when using a cut-off of at least two incorrectly identified plates (Youden index 0.599, sensitivity 59.9%, specificity 100%, AUC of ROC 0.881). Our proposed test for identifying color vision deficiency malingering demonstrates a high degree of reliability with AUCs of 0.918 and 0.881 for the "balanced" and "specific" tests, respectively. A cut-off threshold of at least two missed plates on the "specific" test was able to identify color vision deficiency simulators with 100% specificity.
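The Youden index reported above is simply J = sensitivity + specificity − 1, evaluated at each candidate cut-off. A sketch with invented counts chosen to mimic the "balanced" test's figures (these are not the study's raw data):

```python
def youden(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity, specificity, and Youden's J = sens + spec - 1
    for one cut-off of a diagnostic test."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens + spec - 1

# Hypothetical counts at a cut-off of >= 2 incorrectly identified plates:
sens, spec, j = youden(tp=83, fn=17, tn=94, fp=6)
print(round(sens, 3), round(spec, 3), round(j, 3))  # 0.83 0.94 0.77
```

Scanning J across all possible cut-offs and taking the maximum is the standard way to pick the "optimal" threshold that the abstract reports; the "specific" variant instead fixes specificity at 100% and accepts the resulting loss in sensitivity.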
Mager, P P; Rothe, H
1990-10-01
Multicollinearity of physicochemical descriptors leads to serious consequences in quantitative structure-activity relationship (QSAR) analysis, such as incorrect estimators and test statistics for the regression coefficients of the ordinary least-squares (OLS) model usually applied to QSARs. Besides diagnosing simple collinearity, principal component regression analysis (PCRA) also allows the diagnosis of various types of multicollinearity. The effects of multicollinearity can be circumvented only if the absolute values of the PCRA estimators are order statistics that decrease monotonically. Otherwise, obscure phenomena may be observed, such as good data recognition but low predictive power of a QSAR model.
Rotary pin-in-maze discriminator
Benavides, G.L.
1997-05-06
A discriminator apparatus and method that discriminates between a unique signal and any other (incorrect) signal are disclosed. The unique signal is a sequence of events; each event can assume one of two possible event states. Given the unique signal, a maze wheel is allowed to rotate fully in one direction. Given an incorrect signal, both the maze wheel and a pin wheel lock in position. 4 figs.
On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai
2007-01-01
In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…
Vapor Pressure Data Analysis and Statistics
2016-12-01
sublimation for solids), volatility, and entropy of volatilization. Vapor pressure can be reported several different ways, including tables of experimental ...account the variation in heat of vaporization with temperature, and accurately describes data over broad experimental ranges, thereby enabling...pressure is incorrect at temperatures far below the experimental temperature limit; the calculated vapor pressure becomes undefined when the
Exploiting Lexical Ambiguity to Help Students Understand the Meaning of "Random"
ERIC Educational Resources Information Center
Kaplan, Jennifer J.; Rogness, Neal T.; Fisher, Diane G.
2014-01-01
Words that are part of colloquial English but used differently in a technical domain may possess lexical ambiguity. The use of such words by instructors may inhibit student learning if incorrect connections are made by students between the technical and colloquial meanings. One fundamental word in statistics that has lexical ambiguity for students…
Mericske-Stern, R
1994-01-01
The capacity of dentate subjects to discriminate the thickness of objects placed between the teeth seems to depend on receptors in the periodontal ligament and muscles. The compensatory mechanism of ankylotic implants for the function of missing periodontal ligaments is not yet known. To investigate this question in overdenture wearers, 26 patients with ITI implants and 20 patients with natural roots were selected. According to the experimental protocol, the discriminatory ability was recorded with 10 steel foils (thickness ranging from 10 to 100 microns) placed between the premolars. Each thickness was tested 10 times and the test subjects were required to distinguish whether foil was positioned between the teeth. A maximum of 100 correct or 100 incorrect answers was possible. The average number of incorrect answers was significantly higher in test subjects with implants. The 50% limit (ie, the tested thickness recorded with at least 5 wrong answers) was established, but no statistically significant difference was found. In both groups, the critical tactile threshold of perceived thickness was 30 to 40 microns, with 2 being the average number of incorrect assessments. When comparing the minimal thickness, which was recorded without incorrect assessment, a significantly lower threshold was observed on patients with natural roots. Thus, active tactile sensibility appears to depend on the receptors in the periodontal ligament. However, wearing of removable prostheses is a modifying factor and may influence the oral tactile sensibility for both groups.
Mothers' physical interventions in toddler play in a low-income, African American sample.
Ispa, Jean M; Claire Cook, J; Harmeyer, Erin; Rudy, Duane
2015-11-01
This mixed method study examined 28 low-income African American mothers' physical interventions in their 14-month-old toddlers' play. Inductive methods were used to identify six physical intervention behaviors, the affect accompanying physical interventions, and apparent reasons for intervening. Nonparametric statistical analyses determined that toddlers experienced physical intervention largely in the context of positive maternal affect. Mothers of boys expressed highly positive affect while physically intervening more than mothers of girls. Most physically intervening acts seemed to be motivated by maternal intent to show or tell children how to play or to correct play deemed incorrect. Neutral affect was the most common toddler affect type following physical intervention, but boys were more likely than girls to be upset immediately after physical interventions. Physical interventions intended to protect health and safety seemed the least likely to elicit toddler upset. Copyright © 2015 Elsevier Inc. All rights reserved.
WASP (Write a Scientific Paper) using Excel - 7: The t-distribution.
Grech, Victor
2018-03-01
The calculation of descriptive statistics after data collection provides researchers with an overview of the shape and nature of their datasets, along with basic descriptors, and may help identify outlier values, whether genuine or erroneous. This exercise should always precede inferential statistics, when possible. This paper provides some pointers for doing so in Microsoft Excel, both statically and dynamically, with Excel's functions, including the calculation of standard deviation and variance and the relevance of the t-distribution. Copyright © 2018 Elsevier B.V. All rights reserved.
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
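The sensitivity the authors report can be illustrated by computing P(strength < stress) under two strength models that share the same mean and standard deviation but differ in distributional family. All numbers below are invented for illustration, and the quadrature plus the normal/lognormal choice is a sketch of the general phenomenon, not the authors' procedure:

```python
import math

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def failure_prob(strength_cdf, stress_mu=40.0, stress_sd=5.0, n=20000):
    """P(strength < stress): integrate the stress density times the
    strength CDF by the midpoint rule over +/- 8 stress sd."""
    lo, hi = stress_mu - 8.0 * stress_sd, stress_mu + 8.0 * stress_sd
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += norm_pdf(x, stress_mu, stress_sd) * strength_cdf(x)
    return total * dx

# Two strength models sharing mean 80 and sd 8, differing only in family:
def normal_strength(x):
    return norm_cdf(x, 80.0, 8.0)

ln_sigma = math.sqrt(math.log(1.0 + (8.0 / 80.0) ** 2))
ln_mu = math.log(80.0) - 0.5 * ln_sigma ** 2

def lognormal_strength(x):
    return norm_cdf(math.log(x), ln_mu, ln_sigma) if x > 0 else 0.0

print(failure_prob(normal_strength))     # about 1e-5
print(failure_prob(lognormal_strength))  # substantially smaller
```

Because the failure probability is governed by the lower tail of the strength distribution, two models that are nearly indistinguishable near their centers can disagree by an order of magnitude at high reliability, which is the core difficulty the abstract identifies.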
The Essential Genome of Escherichia coli K-12
2018-01-01
ABSTRACT Transposon-directed insertion site sequencing (TraDIS) is a high-throughput method coupling transposon mutagenesis with short-fragment DNA sequencing. It is commonly used to identify essential genes. Single gene deletion libraries are considered the gold standard for identifying essential genes. Currently, the TraDIS method has not been benchmarked against such libraries, and therefore, it remains unclear whether the two methodologies are comparable. To address this, a high-density transposon library was constructed in Escherichia coli K-12. Essential genes predicted from sequencing of this library were compared to existing essential gene databases. To decrease false-positive identification of essential genes, statistical data analysis included corrections for both gene length and genome length. Through this analysis, new essential genes and genes previously incorrectly designated essential were identified. We show that manual analysis of TraDIS data reveals novel features that would not have been detected by statistical analysis alone. Examples include short essential regions within genes, orientation-dependent effects, and fine-resolution identification of genome and protein features. Recognition of these insertion profiles in transposon mutagenesis data sets will assist genome annotation of less well characterized genomes and provides new insights into bacterial physiology and biochemistry. PMID:29463657
Statistical analysis and interpolation of compositional data in materials science.
Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M
2015-02-09
Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
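The core CDA machinery the abstract refers to, closure and the centered log-ratio (clr) transform, can be sketched as follows; the clr maps compositions off the simplex into ordinary Euclidean space, where means and distances behave as expected (function names are illustrative):

```python
import math

def closure(parts):
    """Normalize a composition so its parts sum to 1."""
    total = sum(parts)
    return [p / total for p in parts]

def clr(parts):
    """Centered log-ratio transform: log of each part relative to the
    geometric mean. The resulting coordinates sum to zero and live in
    Euclidean space, so averaging and interpolation are meaningful."""
    comp = closure(parts)
    log_gmean = sum(math.log(p) for p in comp) / len(comp)
    return [math.log(p) - log_gmean for p in comp]

def clr_inverse(coords):
    """Back-transform clr coordinates onto the simplex."""
    return closure([math.exp(c) for c in coords])

atomic_pct = [70.0, 20.0, 10.0]   # e.g. a ternary thin-film composition
coords = clr(atomic_pct)          # safe to average/interpolate here
roundtrip = clr_inverse(coords)   # recovers the closed composition
```

Interpolating in clr coordinates and back-transforming guarantees the interpolated compositions remain non-negative and sum to one, which naive componentwise interpolation does not.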
NASA Astrophysics Data System (ADS)
Chung, Moo K.; Kim, Seung-Goo; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matthew J.; Davidson, Richard J.
2014-03-01
The sparse regression framework has been widely used in medical image processing and analysis. However, it has rarely been used in anatomical studies. We present a sparse shape modeling framework using the Laplace-Beltrami (LB) eigenfunctions of the underlying shape and show its improvement of statistical power. Traditionally, the LB eigenfunctions are used as a basis for intrinsically representing surface shapes as a form of Fourier descriptors. To reduce high-frequency noise, only the first few terms are used in the expansion and higher-frequency terms are simply thrown away. However, some lower-frequency terms may not necessarily contribute significantly to reconstructing the surfaces. Motivated by this idea, we present an LB-based method that retains only the significant eigenfunctions by imposing a sparse penalty. For dense anatomical data such as deformation fields on a surface mesh, the sparse regression behaves like a smoothing process, which reduces false negatives; hence the statistical power improves. The sparse shape model is then applied to investigating the influence of age on amygdala and hippocampus shapes in the normal population. The advantage of the LB sparse framework is demonstrated by showing the increased statistical power.
XenoSite: accurately predicting CYP-mediated sites of metabolism with neural networks.
Zaretzki, Jed; Matlock, Matthew; Swamidass, S Joshua
2013-12-23
Understanding how xenobiotic molecules are metabolized is important because it influences the safety, efficacy, and dose of medicines and how they can be modified to improve these properties. The cytochrome P450s (CYPs) are proteins responsible for metabolizing 90% of drugs on the market, and many computational methods can predict which atomic sites of a molecule--sites of metabolism (SOMs)--are modified during CYP-mediated metabolism. This study improves on prior methods of predicting CYP-mediated SOMs by using new descriptors and machine learning based on neural networks. The new method, XenoSite, is faster to train and more accurate by as much as 4% or 5% for some isozymes. Furthermore, some "incorrect" predictions made by XenoSite were subsequently validated as correct predictions by revaluation of the source literature. Moreover, XenoSite output is interpretable as a probability, which reflects both the confidence of the model that a particular atom is metabolized and the statistical likelihood that its prediction for that atom is correct.
Bonn, Bernadine A.
2008-01-01
A long-term method detection level (LT-MDL) and laboratory reporting level (LRL) are used by the U.S. Geological Survey's National Water Quality Laboratory (NWQL) when reporting results from most chemical analyses of water samples. Changing to this method provided data users with additional information about their data and often resulted in more reported values in the low concentration range. Before this method was implemented, many of these values would have been censored. The use of the LT-MDL and LRL presents some challenges for the data user. Interpreting data in the low concentration range increases the need for adequate quality assurance because even small contamination or recovery problems can be relatively large compared to concentrations near the LT-MDL and LRL. In addition, the definition of the LT-MDL, as well as the inclusion of low values, can result in complex data sets with multiple censoring levels and reported values that are less than a censoring level. Improper interpretation or statistical manipulation of low-range results in these data sets can result in bias and incorrect conclusions. This document is designed to help data users use and interpret data reported with the LT-MDL/LRL method. The calculation and application of the LT-MDL and LRL are described. This document shows how to extract statistical information from the LT-MDL and LRL and how to use that information in USGS investigations, such as assessing the quality of field data, interpreting field data, and planning data collection for new projects. A set of 19 detailed examples is included in this document to help data users think about their data and properly interpret low-range data without introducing bias. Although this document is not meant to be a comprehensive resource of statistical methods, several useful methods of analyzing censored data are demonstrated, including Regression on Order Statistics and Kaplan-Meier Estimation.
These two statistical methods handle complex censored data sets without resorting to substitution, thereby avoiding a common source of bias and inaccuracy.
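A bare-bones version of the Kaplan-Meier product-limit estimator mentioned above can be written as follows; this is a didactic sketch, not the NWQL's implementation, and left-censored concentration data are typically "flipped" (analyzed as a constant minus the concentration) before applying it:

```python
def kaplan_meier(times, observed):
    """Product-limit survival estimate for right-censored data.
    times: event/censoring times; observed: True where the event was seen.
    Returns [(t, S(t))] at each distinct event time, using
    S(t) = product over event times t_i <= t of (1 - d_i / n_i)."""
    data = sorted(zip(times, observed))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for tt, ev in data if tt == t and ev)
        ties = sum(1 for tt, ev in data if tt == t)
        if events:
            surv *= 1.0 - events / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties
        i += ties
    return curve

# one event at t=1, one event and one censoring at t=2, event at 3, censoring at 4
curve = kaplan_meier([1, 2, 2, 3, 4], [True, True, False, True, False])
```

Because censored observations leave the risk set without triggering a drop in S(t), no substitution of half-detection-limit values is ever needed, which is exactly the source of bias the document warns against.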
Statistical Approaches to Adjusting Weights for Dependent Arms in Network Meta-analysis.
Su, Yu-Xuan; Tu, Yu-Kang
2018-05-22
Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, where patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body, split-mouth and cross-over designs, where each patient may receive more than one treatment. Data from treatment arms within these trials are no longer independent, so the correlations between dependent arms need to be accounted for within the statistical analyses. Ignoring these correlations may result in incorrect conclusions. The main objective of this study is to develop statistical approaches to adjusting weights for dependent arms within special design trials. In this study, we demonstrate the following three approaches: the data augmentation approach, the adjusting-variance approach, and the reducing-weight approach. All three methods can be readily applied in current statistical tools such as R and STATA. An example of periodontal regeneration was used to demonstrate how these approaches can be undertaken and implemented within statistical software packages, and to compare results from different approaches. The adjusting-variance approach can be implemented within the network package in STATA, while the reducing-weight approach requires computer programming to set up the within-study variance-covariance matrix. This article is protected by copyright. All rights reserved.
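The adjusting-variance idea can be sketched with a design-effect style variance inflation; the factor 1 + (k - 1)ρ below is a generic textbook illustration of down-weighting correlated arms, not the exact formulas derived in the paper:

```python
def inverse_variance_weight(variance):
    """Standard meta-analytic weight for an independent effect estimate."""
    return 1.0 / variance

def dependent_arm_weight(variance, n_arms, rho):
    """Reduced weight for an estimate from a within-person trial that
    contributes n_arms correlated comparisons with pairwise correlation
    rho: the variance is inflated by a design-effect style factor
    1 + (n_arms - 1) * rho before inverting."""
    return 1.0 / (variance * (1.0 + (n_arms - 1) * rho))

w_independent = inverse_variance_weight(0.04)
w_dependent = dependent_arm_weight(0.04, n_arms=2, rho=0.5)  # smaller weight
```

Treating the two dependent comparisons as independent (weight 25) would let one split-mouth trial dominate the network; the inflated-variance weight (about 16.7 here) tempers that influence.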
Thermal transmission of camouflage nets revisited
NASA Astrophysics Data System (ADS)
Jersblad, Johan; Jacobs, Pieter
2016-10-01
In this article we derive, from first principles, the correct formula for thermal transmission of a camouflage net, based on the setup described in the US standard for lightweight camouflage nets. Furthermore, we compare the results and implications with the use of an incorrect formula that has been seen in several recent tenders. It is shown that the incorrect formulation not only gives rise to large errors, but its result also depends on the surrounding room temperature, which in the correct derivation cancels out. The theoretical results are compared with laboratory measurements and agree with them for the correct derivation. To summarize, we discuss the consequences for soldiers on the battlefield if incorrect standards and test methods are used in procurement processes.
Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial
Hallgren, Kevin A.
2012-01-01
Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
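Cohen's kappa, one of the IRR statistics the tutorial covers, can be computed from first principles; the sketch below is in Python rather than the paper's SPSS/R syntax, and the rating vectors are made up:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement for two raters coding the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected comes from each rater's marginal category frequencies."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(counts_a[c] * counts_b[c]
                for c in counts_a.keys() | counts_b.keys()) / n ** 2
    return (p_obs - p_exp) / (1.0 - p_exp)

# two coders rating the same 8 observations as 0/1
kappa = cohens_kappa([1, 1, 0, 1, 0, 0, 1, 1],
                     [1, 1, 0, 0, 0, 1, 1, 1])
```

Here raw agreement is 0.75 but kappa is only about 0.47, illustrating why percent agreement alone, one of the incorrect procedures the paper flags, overstates reliability.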
van Rooij, Antonius J; Van Looy, Jan; Billieux, Joël
2017-07-01
Some people have serious problems controlling their Internet and video game use. The DSM-5 now includes a proposal for 'Internet Gaming Disorder' (IGD) as a condition in need of further study. Various studies aim to validate the proposed diagnostic criteria for IGD and multiple new scales have been introduced that cover the suggested criteria. Using a structured approach, we demonstrate that IGD might be better interpreted as a formative construct, as opposed to the current practice of conceptualizing it as a reflective construct. Incorrectly approaching a formative construct as a reflective one causes serious problems in scale development, including: (i) incorrect reliance on item-to-total scale correlation to exclude items and incorrectly relying on indices of inter-item reliability that do not fit the measurement model (e.g., Cronbach's α); (ii) incorrect interpretation of composite or mean scores that assume all items are equal in contributing value to a sum score; and (iii) biased estimation of model parameters in statistical models. We show that these issues are impacting current validation efforts through two recent examples. A reinterpretation of IGD as a formative construct has broad consequences for current validation efforts and provides opportunities to reanalyze existing data. We discuss three broad implications for current research: (i) composite latent constructs should be defined and used in models; (ii) item exclusion and selection should not rely on item-to-total scale correlations; and (iii) existing definitions of IGD should be enriched further. © 2016 The Authors. Psychiatry and Clinical Neurosciences © 2016 Japanese Society of Psychiatry and Neurology.
Nemčić-Jurec, Jasna; Konjačić, Miljenko; Jazbec, Anamarija
2013-11-01
Nitrates are the most common chemical pollutant of groundwater in agricultural and suburban areas. Croatia must comply with the Nitrate Directive (91/676/EEC) whose aim is to reduce water pollution by nitrates originating from agriculture and to prevent further pollution. Podravina and Prigorje are the areas with a relatively high degree of agricultural activity. Therefore, the aim of this study was, by monitoring nitrates, to determine the distribution of nitrates in two different areas, Podravina and Prigorje (Croatia), to determine sources of contamination as well as annual and seasonal trends. The nitrate concentrations were measured in 30 wells (N = 382 samples) in Prigorje and in 19 wells (N = 174 samples) in Podravina from 2002 to 2007. In Podravina, the nitrate content was 24.9 mg/l and 6% of the samples were above the maximum available value (MAV), and in Prigorje the content was 53.9 mg/l and 38% of the samples above MAV. The wells were classified as correct, occasionally incorrect and incorrect. In the group of occasionally incorrect and incorrect wells, the point sources were within 10 m of the well. There is no statistically significant difference over the years or seasons within the year, but the interaction between locations and years was significant. Nitrate concentrations' trend was not significant during the monitoring. These results are a prerequisite for the adjustment of Croatian standards to those of the EU and will contribute to the implementation of the Nitrate Directive and the Directives on Environmental Protection in Croatia and the EU.
Wullschleger, Marcel; Aghlmandi, Soheila; Egger, Marcel; Zwahlen, Marcel
2014-01-01
In biomedical journals authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. To assess the frequency of incorrect use of SEM in articles in three selected cardiovascular journals. All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of SEM when providing descriptive information of empirical data. We also assessed whether the authors state in the methods section that the SEM will be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of SEM, the authors had explicitly stated that they use the SEM for data description and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative for all cardiovascular journals. In three selected cardiovascular journals we found a high level of inappropriate SEM use and explicit methods statements to use it for data description, especially in basic science studies. To improve on this situation, these and other journals should provide clear instructions to authors on how to report descriptive information of empirical data.
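The distinction at issue can be made concrete: SD describes variability among subjects, while SEM = SD/√n only describes the precision of the estimated mean and shrinks as n grows. A minimal illustration with made-up numbers:

```python
import statistics

def sd_and_sem(sample):
    """Return (SD, SEM). SD is the descriptive spread of the data;
    SEM = SD / sqrt(n) is an inferential quantity about the mean and is
    therefore misleading as a descriptive error bar."""
    sd = statistics.stdev(sample)
    return sd, sd / len(sample) ** 0.5

sample = [4.1, 5.0, 4.6, 5.3, 4.8, 5.2, 4.4, 4.9]
sd_small, sem_small = sd_and_sem(sample)
sd_big, sem_big = sd_and_sem(sample * 4)   # same values, four times over
```

Quadrupling the sample leaves the SD essentially unchanged but roughly halves the SEM, which is why SEM bars make variable data look deceptively tight.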
WASP (Write a Scientific Paper) using Excel - 3: Plotting data.
Grech, Victor
2018-02-01
The plotting of data into graphs should be a mandatory step in all data analysis as part of a descriptive statistics exercise, since it gives the researcher an overview of the shape and nature of the data. Moreover, outlier values may be identified, which may be incorrect data, or true outliers, from which important findings (and publications) may arise. This exercise should always precede inferential statistics, when possible, and this paper in the Early Human Development WASP series provides some pointers for doing so in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
Examining the Reproducibility of 6 Published Studies in Public Health Services and Systems Research.
Harris, Jenine K; B Wondmeneh, Sarah; Zhao, Yiqiang; Leider, Jonathon P
2018-02-23
Research replication, or repeating a study de novo, is the scientific standard for building evidence and identifying spurious results. While replication is ideal, it is often expensive and time consuming. Reproducibility, or reanalysis of data to verify published findings, is one proposed minimum alternative standard. While a lack of research reproducibility has been identified as a serious and prevalent problem in biomedical research and a few other fields, little work has been done to examine the reproducibility of public health research. We examined reproducibility in 6 studies from the public health services and systems research subfield of public health research. Following the methods described in each of the 6 papers, we computed the descriptive and inferential statistics for each study. We compared our results with the original study results and examined the percentage differences in descriptive statistics and differences in effect size, significance, and precision of inferential statistics. All project work was completed in 2017. We found consistency between original and reproduced results for each paper in at least 1 of the 4 areas examined. However, we also found some inconsistency. We identified incorrect transcription of results and omitting detail about data management and analyses as the primary contributors to the inconsistencies. Increasing reproducibility, or reanalysis of data to verify published results, can improve the quality of science. Researchers, journals, employers, and funders can all play a role in improving the reproducibility of science through several strategies including publishing data and statistical code, using guidelines to write clear and complete methods sections, conducting reproducibility reviews, and incentivizing reproducible science.
Modular structure of functional networks in olfactory memory.
Meunier, David; Fonlupt, Pierre; Saive, Anne-Lise; Plailly, Jane; Ravel, Nadine; Royet, Jean-Pierre
2014-07-15
Graph theory enables the study of systems by describing those systems as a set of nodes and edges. Graph theory has been widely applied to characterize the overall structure of data sets in the social, technological, and biological sciences, including neuroscience. Modular structure decomposition enables the definition of sub-networks whose components are gathered in the same module and work together closely, while working weakly with components from other modules. This processing is of interest for studying memory, a cognitive process that is widely distributed. We propose a new method to identify modular structure in task-related functional magnetic resonance imaging (fMRI) networks. The modular structure was obtained directly from correlation coefficients and thus retained information about both signs and weights. The method was applied to functional data acquired during a yes-no odor recognition memory task performed by young and elderly adults. Four response categories were explored: correct (Hit) and incorrect (False alarm, FA) recognition and correct and incorrect rejection. We extracted time series data for 36 areas as a function of response categories and age groups and calculated condition-based weighted correlation matrices. Overall, condition-based modular partitions were more homogeneous in young than elderly subjects. Using partition similarity-based statistics and a posteriori statistical analyses, we demonstrated that several areas, including the hippocampus, caudate nucleus, and anterior cingulate gyrus, belonged to the same module more frequently during Hit than during all other conditions. Modularity values were negatively correlated with memory scores in the Hit condition and positively correlated with bias scores (liberal/conservative attitude) in the Hit and FA conditions. 
We further demonstrated that the proportion of positive and negative links between areas of different modules (i.e., the proportion of correlated and anti-correlated areas) accounted for most of the observed differences in signed modularity. Taken together, our results provided some evidence that the neural networks involved in odor recognition memory are organized into modules and that these modular partitions are linked to behavioral performance and individual strategies. Copyright © 2014 Elsevier Inc. All rights reserved.
Stochastic Dynamics of Lexicon Learning in an Uncertain and Nonuniform World
NASA Astrophysics Data System (ADS)
Reisenauer, Rainer; Smith, Kenny; Blythe, Richard A.
2013-06-01
We study the time taken by a language learner to correctly identify the meaning of all words in a lexicon under conditions where many plausible meanings can be inferred whenever a word is uttered. We show that the most basic form of cross-situational learning—whereby information from multiple episodes is combined to eliminate incorrect meanings—can perform badly when words are learned independently and meanings are drawn from a nonuniform distribution. If learners further assume that no two words share a common meaning, we find a phase transition between a maximally efficient learning regime, where the learning time is reduced to the shortest it can possibly be, and a partially efficient regime where incorrect candidate meanings for words persist at late times. We obtain exact results for the word-learning process through an equivalence to a statistical mechanical problem of enumerating loops in the space of word-meaning mappings.
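The most basic cross-situational learner described above is easy to simulate; the sketch below uses uniform word sampling and a fixed number of confounder meanings per episode (all parameters are illustrative, and the paper's nonuniform-frequency and mutual-exclusivity variants are not modeled):

```python
import random

def episodes_to_learn(n_words=20, n_confounders=3, seed=0):
    """Basic cross-situational learning: each time word w is uttered, its
    true meaning appears alongside random confounder meanings; the learner
    intersects candidate-meaning sets across episodes and knows w once a
    single candidate survives. Returns episodes needed to learn the whole
    lexicon (words sampled uniformly here)."""
    rng = random.Random(seed)
    candidates = {w: None for w in range(n_words)}
    episodes = 0
    while not all(c is not None and len(c) == 1 for c in candidates.values()):
        episodes += 1
        w = rng.randrange(n_words)
        exposed = {w} | set(rng.sample([m for m in range(n_words) if m != w],
                                       n_confounders))
        candidates[w] = exposed if candidates[w] is None else candidates[w] & exposed
    return episodes
```

Since the true meaning is present in every exposure, the intersection never empties; incorrect candidates are eliminated only when an episode happens to exclude them, which is what slows learning under nonuniform meaning distributions.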
Anomaly detection for machine learning redshifts applied to SDSS galaxies
NASA Astrophysics Data System (ADS)
Hoyle, Ben; Rau, Markus Michael; Paech, Kerstin; Bonnett, Christopher; Seitz, Stella; Weller, Jochen
2015-10-01
We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantity. We select 2.5 million `clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 `anomalous' galaxies with spectroscopic redshift measurements which are flagged as unreliable. We contaminate the clean base galaxy sample with galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed `anomaly-removed' sample and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement on all measured statistics of up to 80 per cent when training on the anomaly removed sample as compared with training on the contaminated sample for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample.
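The elliptical-envelope idea reduces to ranking points by Mahalanobis distance from the bulk of the data; below is a plain (non-robust) two-feature sketch with made-up numbers standing in for galaxies with unreliable redshifts, not the paper's actual pipeline:

```python
def mahalanobis_outliers(points, contamination=0.1):
    """Rank 2-D points by squared Mahalanobis distance from the sample
    mean and flag the top `contamination` fraction as anomalies. (The
    Elliptical Envelope technique does this with a robust covariance fit.)"""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    det = sxx * syy - sxy * sxy

    def d2(p):  # squared Mahalanobis distance via the explicit 2x2 inverse
        dx, dy = p[0] - mx, p[1] - my
        return (syy * dx * dx - 2.0 * sxy * dx * dy + sxx * dy * dy) / det

    ranked = sorted(range(n), key=lambda i: d2(points[i]), reverse=True)
    return set(ranked[: max(1, int(contamination * n))])

# a tight correlated cloud plus two grossly discrepant points
data = [(0.1 * i, 0.1 * i + 0.01 * (-1) ** i) for i in range(50)]
data += [(0.5, 4.0), (4.0, 0.2)]
flagged = mahalanobis_outliers(data, contamination=0.04)
```

Training a regression only on the unflagged points is the "anomaly-removed" strategy whose benefit the paper quantifies.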
Nelson, Douglas L; Dyrdal, Gunvor M; Goodmon, Leilani B
2005-08-01
Measuring lexical knowledge poses a challenge to the study of the influence of preexisting knowledge on the retrieval of new memories. Many tasks focus on word pairs, but words are embedded in associative networks, so how should preexisting pair strength be measured? It has been measured by free association, similarity ratings, and co-occurrence statistics. Researchers interpret free association response probabilities as unbiased estimates of forward cue-to-target strength. In Study 1, analyses of large free association and extralist cued recall databases indicate that this interpretation is incorrect. Competitor and backward strengths bias free association probabilities, and as with other recall tasks, preexisting strength is described by a ratio rule. In Study 2, associative similarity ratings are predicted by forward and backward, but not by competitor, strength. Preexisting strength is not a unitary construct, because its measurement varies with method. Furthermore, free association probabilities predict extralist cued recall better than do ratings and co-occurrence statistics. The measure that most closely matches the criterion task may provide the best estimate of the identity of preexisting strength.
Bang, Lia Evi; Wiinberg, Niels
2009-06-08
Blood pressure measurement should follow recommended procedures; otherwise incorrect diagnoses will follow, resulting in incorrect treatment and cardiovascular events. The standard for clinical blood pressure measurement is the auscultatory method, but mercury sphygmomanometers can still be used. Out-of-office measurement, using 24-hour ambulatory or home blood pressure monitoring, has documented better reproducibility and predicts cardiovascular events more precisely than clinic blood pressure. 24-hour or home blood pressure measurement should be performed in patients with suspected hypertension without hypertensive organ damage to reveal white-coat hypertension.
Confidence crisis of results in biomechanics research.
Knudson, Duane
2017-11-01
Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.
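The link between sample size and statistical power that the review emphasizes can be demonstrated by simulation; the sketch below uses unit-variance normal groups and the ~1.96 normal critical value rather than the exact t quantile, and all numbers are illustrative:

```python
import random
import statistics

def simulated_power(n_per_group, effect_size, n_sims=2000, seed=7):
    """Monte Carlo power of a two-group comparison: the fraction of
    simulated experiments whose |t| (Welch-style) exceeds ~1.96. Low power
    means significant results are rare and, when they occur, tend to
    overstate the true effect."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        se = (statistics.variance(a) / n_per_group
              + statistics.variance(b) / n_per_group) ** 0.5
        hits += abs((statistics.mean(b) - statistics.mean(a)) / se) > 1.96
    return hits / n_sims

low = simulated_power(n_per_group=10, effect_size=0.5)    # badly underpowered
high = simulated_power(n_per_group=64, effect_size=0.5)   # roughly 80% power
```

For a medium effect (d = 0.5), ten subjects per group, a common size in biomechanics, detects the effect in well under half of experiments, while 64 per group approaches the conventional 80% target.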
International migration, 1995: some reflections on an exceptional year.
Bedford, R
1996-10-01
"This paper examines the 1995 international migration statistics in the context of New Zealand's immigration policy, and with reference to the impact of migration on population change in 1995. Particular attention is focused on trying to unravel and interpret the statistics relating to net migration. Considerable confusion has arisen in the public debate about immigration because of uniformed and, at times, quite misleading use of information supplied by Statistics New Zealand and the Department of Labour.... This is a reprinted version of an article originally published in the New Zealand Journal of Geography in April 1996. The article has been reprinted because a number of tables in the earlier version were incorrectly reproduced. Any inconvenience caused by this problem is regretted." excerpt
Mulder, Emma R; de Jong, Remko A; Knol, Dirk L; van Schijndel, Ronald A; Cover, Keith S; Visser, Pieter J; Barkhof, Frederik; Vrenken, Hugo
2014-05-15
To measure hippocampal volume change in Alzheimer's disease (AD) or mild cognitive impairment (MCI), expert manual delineation is often used because of its supposed accuracy. It has been suggested that expert outlining yields poorer reproducibility as compared to automated methods, but this has not been investigated. To determine the reproducibilities of expert manual outlining and two common automated methods for measuring hippocampal atrophy rates in healthy aging, MCI and AD. From the Alzheimer's Disease Neuroimaging Initiative (ADNI), 80 subjects were selected: 20 patients with AD, 40 patients with mild cognitive impairment (MCI) and 20 healthy controls (HCs). Left and right hippocampal volume change between baseline and month-12 visit was assessed by using expert manual delineation, and by the automated software packages FreeSurfer (longitudinal processing stream) and FIRST. To assess reproducibility of the measured hippocampal volume change, both back-to-back (BTB) MPRAGE scans available for each visit were analyzed. Hippocampal volume change was expressed in μL, and as a percentage of baseline volume. Reproducibility of the 1-year hippocampal volume change was estimated from the BTB measurements by using linear mixed model to calculate the limits of agreement (LoA) of each method, reflecting its measurement uncertainty. Using the delta method, approximate p-values were calculated for the pairwise comparisons between methods. Statistical analyses were performed both with inclusion and exclusion of visibly incorrect segmentations. Visibly incorrect automated segmentation in either one or both scans of a longitudinal scan pair occurred in 7.5% of the hippocampi for FreeSurfer and in 6.9% of the hippocampi for FIRST. After excluding these failed cases, reproducibility analysis for 1-year percentage volume change yielded LoA of ±7.2% for FreeSurfer, ±9.7% for expert manual delineation, and ±10.0% for FIRST. 
Methods ranked the same for reproducibility of 1-year μL volume change, with LoA of ±218 μL for FreeSurfer, ±319 μL for expert manual delineation, and ±333 μL for FIRST. Approximate p-values indicated that reproducibility was better for FreeSurfer than for manual or FIRST, and that manual and FIRST did not differ. Inclusion of failed automated segmentations led to worsening of reproducibility of both automated methods for 1-year raw and percentage volume change. Quantitative reproducibility values of 1-year microliter and percentage hippocampal volume change were roughly similar between expert manual outlining, FIRST and FreeSurfer, but FreeSurfer reproducibility was statistically significantly superior to both manual outlining and FIRST after exclusion of failed segmentations. Copyright © 2014 Elsevier Inc. All rights reserved.
Mathysen, Danny G P; Aclimandos, Wagih; Roelant, Ella; Wouters, Kristien; Creuzot-Garcher, Catherine; Ringens, Peter J; Hawlina, Marko; Tassignon, Marie-José
2013-11-01
To investigate whether introduction of item-response theory (IRT) analysis, in parallel to the 'traditional' statistical analysis methods available for performance evaluation of multiple T/F items as used in the European Board of Ophthalmology Diploma (EBOD) examination, has proved beneficial, and secondly, to study whether the overall assessment performance of the current written part of EBOD is sufficiently high (KR-20 ≥ 0.90) to be kept as the examination format in future EBOD editions. 'Traditional' analysis methods for individual MCQ item performance comprise P-statistics, Rit-statistics and item discrimination, while overall reliability is evaluated through KR-20 for multiple T/F items. The additional set of statistical analysis methods for the evaluation of EBOD comprises mainly IRT analysis. These analysis techniques are used to monitor whether the introduction of negative marking for incorrect answers (since EBOD 2010) has a positive influence on the statistical performance of EBOD as a whole and its individual test items in particular. Item-response theory analysis demonstrated that item performance parameters should not be evaluated individually, but should be related to one another. Before the introduction of negative marking, the overall EBOD reliability (KR-20) was good though with room for improvement (EBOD 2008: 0.81; EBOD 2009: 0.78). After the introduction of negative marking, the overall reliability of EBOD improved significantly (EBOD 2010: 0.92; EBOD 2011: 0.91; EBOD 2012: 0.91). Although many statistical performance parameters are available to evaluate individual items, our study demonstrates that the overall reliability assessment remains the only crucial parameter allowing comparison. While individual item performance analysis is worthwhile to undertake as a secondary analysis, drawing final conclusions from it seems to be more difficult. Performance parameters need to be related, as shown by IRT analysis.
Therefore, IRT analysis has proved beneficial for the statistical analysis of EBOD. Introduction of negative marking has led to a significant increase in the reliability (KR-20 > 0.90), indicating that the current examination format can be kept for future EBOD examinations. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
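KR-20, the reliability coefficient reported above, is straightforward to compute from a 0/1 item-response matrix; this is a generic sketch with a toy matrix, not the EBOD analysis code:

```python
def kr20(item_matrix):
    """Kuder-Richardson formula 20 for dichotomous items:
    KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / var(total)), where
    item_matrix[person][item] is 1 for a correct answer, 0 otherwise."""
    n = len(item_matrix)
    k = len(item_matrix[0])
    p = [sum(row[j] for row in item_matrix) / n for j in range(k)]
    sum_pq = sum(pi * (1.0 - pi) for pi in p)
    totals = [sum(row) for row in item_matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1.0 - sum_pq / var_t)

# 4 examinees x 4 items, from strongest to weakest candidate
responses = [[1, 1, 1, 1],
             [1, 1, 1, 0],
             [1, 0, 0, 0],
             [0, 0, 0, 0]]
reliability = kr20(responses)
```

For this toy matrix KR-20 is about 0.87; the EBOD target of 0.90 would require items that discriminate even more consistently across candidates.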
Koczorowski, Maciej; Gedrange, Tomasz; Koczorowski, Ryszard
2012-03-20
Neuromuscular disorders lead to an imbalance in the position of the jaw. The aim of this study was to analyse gnostic sensibility in subjects with partial anterior open bite and an incorrect tongue position. The study involved 20 subjects with partial anterior open bite and an incorrect tongue position; the control group consisted of 20 individuals with correct occlusion and tongue position. The basic study method was a stereognostic examination using four silicone shapes: a square, a triangle, a circle and a semicircle. The accuracy of shape identification and the time that subjects needed to identify the shapes were analysed before and after the tip of the tongue was anaesthetized. Correct identification of the shapes was 7.4% worse in the study group than in the control group, and the difference was greatest when the tip of the tongue was anaesthetized (28.8%). The time needed to identify the shapes was shorter in the study group than in the control group. The results indicate that people with partial anterior open bite and an incorrect tongue position exhibit impaired gnostic sensibility, especially at the tip of the tongue. Impaired gnostic sensibility, a symptom of disturbed sensorimotor coordination of the tongue, leads to an incorrect tongue position during swallowing and speaking. Copyright © 2011 Elsevier GmbH. All rights reserved.
Predicting Macroscale Effects Through Nanoscale Features
2012-01-01
errors become incorrectly computed by the basic OLS technique. To test for the presence of heteroscedasticity the Breusch-Pagan / Cook-Weisberg test ...is employed, with the test statistic distributed as χ² with degrees of freedom equal to the number of regressors. The Breusch-Pagan / Cook...between shock sensitivity and Sm does not exhibit any heteroscedasticity. The Breusch-Pagan / Cook-Weisberg test provides χ²(1) = 1.73, which
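The Breusch-Pagan / Cook-Weisberg statistic mentioned in this excerpt can be computed as n·R² from an auxiliary regression of the squared OLS residuals on the regressors, compared against a χ² distribution with df equal to the number of regressors. A minimal sketch on simulated data (all values illustrative, not from the excerpted paper):

```python
import numpy as np

def breusch_pagan(y, X):
    """Breusch-Pagan / Cook-Weisberg statistic: n * R^2 from regressing
    squared OLS residuals on the regressors; compare to chi-squared with
    df equal to the number of regressors (here 1)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])            # add intercept
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    e2 = (y - Xc @ beta) ** 2                        # squared residuals
    gamma, *_ = np.linalg.lstsq(Xc, e2, rcond=None)  # auxiliary regression
    r2 = 1.0 - ((e2 - Xc @ gamma) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()
    return n * r2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y_homo = 1.0 + 2.0 * x + rng.normal(size=500)                      # constant variance
y_hetero = 1.0 + 2.0 * x + np.exp(0.5 * x) * rng.normal(size=500)  # s.d. grows with x
print(breusch_pagan(y_homo, x), breusch_pagan(y_hetero, x))
```

Under homoscedasticity the statistic is typically well below the χ²(1) critical value of 3.84 (as with the χ²(1) = 1.73 reported above); under heteroscedasticity it is large.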
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
CAD scheme for detection of hemorrhages and exudates in ocular fundus images
NASA Astrophysics Data System (ADS)
Hatanaka, Yuji; Nakagawa, Toshiaki; Hayashi, Yoshinori; Mizukusa, Yutaka; Fujita, Akihiro; Kakogawa, Masakatsu; Kawase, Kazuhide; Hara, Takeshi; Fujita, Hiroshi
2007-03-01
This paper describes a method for detecting hemorrhages and exudates in ocular fundus images. The detection of hemorrhages and exudates is important in order to diagnose diabetic retinopathy. Diabetic retinopathy is one of the most significant factors contributing to blindness, and early detection and treatment are important. In this study, hemorrhages and exudates were automatically detected in fundus images without using fluorescein angiograms. Subsequently, the blood vessel regions incorrectly detected as hemorrhages were eliminated by first examining the structure of the blood vessels and then evaluating the length-to-width ratio. Finally, the false positives were eliminated by checking the following features extracted from candidate images: the number of pixels, contrast, 13 features calculated from the co-occurrence matrix, two features based on gray-level difference statistics, and two features calculated from the extrema method. The sensitivity of detecting hemorrhages in the fundus images was 85% and that of detecting exudates was 77%. Our fully automated scheme could accurately detect hemorrhages and exudates.
When ab ≠ c - c': published errors in the reports of single-mediator models.
Petrocelli, John V; Clarkson, Joshua J; Whitmire, Melanie B; Moon, Paul E
2013-06-01
Accurate reports of mediation analyses are critical to the assessment of inferences related to causality, since these inferences are consequential both for the evaluation of previous research (e.g., meta-analyses) and for the progression of future research. However, upon reexamination, approximately 15% of published articles in psychology contain at least one incorrect statistical conclusion (Bakker & Wicherts, Behavior Research Methods, 43, 666-678, 2011), a disparity that raises the question of inaccuracy in mediation reports. To quantify this inaccuracy, articles reporting standard use of single-mediator models in three high-impact journals in personality and social psychology during 2011 were examined. More than 24% of the 156 models coded failed an equivalence test (i.e., ab = c - c'), suggesting that one or more regression coefficients in mediation analyses are frequently misreported. The authors cite common sources of errors, provide recommendations for enhanced accuracy in reports of single-mediator models, and discuss implications for alternative methods.
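The equivalence test applied here follows from the algebra of OLS single-mediator models: the indirect effect a·b must equal the difference between the total effect c and the direct effect c′ when all paths are estimated on the same complete-data sample. A minimal consistency check (the coefficients are hypothetical):

```python
def mediation_consistent(a, b, c, c_prime, tol=1e-6):
    """Check the single-mediator identity a*b == c - c'.

    a: X -> M path; b: M -> Y path controlling for X;
    c: total X -> Y effect; c_prime: direct X -> Y effect.
    The identity holds exactly for OLS estimates from the same sample.
    """
    return abs(a * b - (c - c_prime)) < tol

# Hypothetical reported coefficients that satisfy the identity...
print(mediation_consistent(0.50, 0.40, 0.65, 0.45))  # -> True
# ...and a misreported set that fails it, flagging a likely typo.
print(mediation_consistent(0.50, 0.40, 0.65, 0.50))  # -> False
```

In practice a tolerance is needed because published coefficients are rounded; a discrepancy larger than rounding error is what suggests misreporting.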
Babatunde, Oluwole Adeyemi; Ibirongbe, Demilade Olusola; Omede, Owen; Babatunde, Olubukola Oluwakemi; Durowade, Kabir Adekunle; Salaudeen, Adekunle Ganiyu; Akande, Tanimola Makanjuola
2016-01-01
Introduction: Unintended pregnancy and unsafe abortion pose a major reproductive health challenge to adolescents. Emergency contraception is safe and effective in preventing unplanned pregnancy. The objective of this study was to assess students' knowledge and use of emergency contraception. Methods: This cross-sectional study was carried out in Ilorin, Nigeria, using a multi-stage sampling method. Data were collected using a pre-tested, semi-structured, self-administered questionnaire. Knowledge was scored and analysed; SPSS version 21.0 was used for data analysis, and a p-value <0.05 was considered statistically significant. Results: 27.8% of the respondents had good knowledge of emergency contraception. The majority of respondents (87.2%) had never used emergency contraception, and most of those who had (85.7%) used it incorrectly, taking it more than 72 hours after sexual intercourse (p=0.928). Conclusion: Knowledge about emergency contraception and prevalence of use were low. Contraceptive education should be introduced early in the school curriculum for adolescents. PMID:27217897
A statistical model for interpreting computerized dynamic posturography data
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.
2002-01-01
Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
Xu, Xuemiao; Jin, Qiang; Zhou, Le; Qin, Jing; Wong, Tien-Tsin; Han, Guoqiang
2015-02-12
We propose a novel biometric recognition method that identifies the inner knuckle print (IKP). It is robust enough to confront uncontrolled lighting conditions, pose variations and low imaging quality. Such robustness is crucial for its application on portable devices equipped with consumer-level cameras. We achieve this robustness by two means. First, we propose a novel feature extraction scheme that highlights the salient structure and suppresses incorrect and/or unwanted features. The extracted IKP features retain simple geometry and morphology and reduce the interference of illumination. Second, to counteract the deformation induced by different hand orientations, we propose a novel structure-context descriptor based on local statistics. To the best of our knowledge, we are the first to simultaneously consider illumination invariance and deformation tolerance for appearance-based low-resolution hand biometrics. Settings in previous works are more restrictive, making strong assumptions about either the illumination conditions or the hand orientation. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in terms of recognition accuracy, especially under uncontrolled lighting conditions and flexible hand orientations.
Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.
2014-01-01
A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. 
These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941
Phipps, Denham L; Tam, W Vanessa; Ashcroft, Darren M
2017-03-01
To explore the combined use of a critical incident database and work domain analysis to understand patient safety issues in a health-care setting. A retrospective review was conducted of incidents reported to the UK National Reporting and Learning System (NRLS) that involved community pharmacy between April 2005 and August 2010. A work domain analysis of community pharmacy was constructed using observational data from 5 community pharmacies, technical documentation, and a focus group with 6 pharmacists. Reports from the NRLS were mapped onto the model generated by the work domain analysis. In total, 14,709 incident reports meeting the selection criteria were retrieved from the NRLS. Descriptive statistical analysis of these reports found that almost all of the incidents involved medication and that the most frequently occurring error types were dose/strength errors, incorrect medication, and incorrect formulation. The work domain analysis identified 4 overall purposes for community pharmacy: business viability, health promotion and clinical services, provision of medication, and use of medication. These purposes were served by lower-order characteristics of the work system (such as its functions, processes, and objects). The tasks most frequently implicated in the incident reports were those involving medication storage, assembly, or patient medication records. Combining the insights from different analytical methods improves understanding of patient safety problems: incident reporting data can be used to identify general patterns, whereas work domain analysis can generate information about the contextual factors that surround a critical task.
Davern, Michael; Klerman, Jacob Alex; Baugh, David K; Call, Kathleen Thiede; Greenberg, George D
2009-01-01
Objective: To assess reasons why survey estimates of Medicaid enrollment are 43 percent lower than raw Medicaid program enrollment counts (the "Medicaid undercount"). Data Sources: Linked 2000–2002 Medicaid Statistical Information System (MSIS) and 2001–2002 Current Population Survey (CPS) data. Data Collection Methods: The Centers for Medicare and Medicaid Services provided the Census Bureau with its MSIS file, and the Census Bureau linked the MSIS to the CPS data within its secure data analysis facilities. Study Design: We analyzed how often Medicaid enrollees incorrectly answer the CPS health insurance item, as well as imperfect concept alignment (e.g., inclusion in the MSIS of people who are not in the CPS sample frame and of people who were enrolled in Medicaid in more than one state during the year). Principal Findings: Adjusting the Medicaid enrollee data for imperfect concept alignment reduces the raw Medicaid undercount considerably (by 12 percentage points). However, survey response errors play an even larger role: 43 percent of Medicaid enrollees answered the CPS as though they were not enrolled, and 17 percent reported being uninsured. Conclusions: The CPS is widely used for health policy analysis but is a poor measure of Medicaid enrollment at any time during the year, because many people who are enrolled in Medicaid fail to report it and may be incorrectly coded as uninsured. This discrepancy should be considered when using the CPS for policy research. PMID:19187185
[Roadside observation on the use of safety belt in Guangzhou and Nanning cites of China].
Li, Li-ping; Stevenson, Mark; Ivers, Rebecca; Zhou, Ying
2006-08-01
To determine the rates of correct use of safety belts (CUSB) among drivers and front-seat passengers in Guangzhou and Nanning through roadside observation, and to provide scientific evidence for developing intervention plans and strengthening road safety law enforcement. Observational sites were randomly selected from three road types (highway, main street and subordinate street). Targeted automobiles were observed at each site at four different times, and uniform checklists were used to record safety belt use during observations. Belt use was calculated by driver sex, road type, workday/weekend, day/night and seating position, and chi-square tests were used to assess statistical significance. (1) The rates of CUSB and of non-use among drivers were higher in Nanning than in Guangzhou (P = 0.00), whereas the rate of incorrect use showed the opposite pattern. (2) The rate of CUSB by front-seat passengers was higher in Guangzhou than in Nanning (P = 0.04), as was the rate of incorrect use (P = 0.00), while the non-use rate showed the opposite pattern. (3) In general, the rate of CUSB was higher on highways than on local streets (P = 0.00). (4) The CUSB rate of drivers and front-seat passengers was higher in the daytime than at night (P = 0.00), and the rate of incorrect use was higher on working days than at weekends (P = 0.00). (5) The CUSB rate was higher for female drivers than for males in Guangzhou (P = 0.00), but no statistically significant difference was found in Nanning (P = 0.21). The results suggest that interventions should be undertaken to raise awareness of the importance of safety belt use. Effective public information and education programs, law enforcement and mandatory safety belt use, with programs prioritizing people who neglect the importance of safety belts, are necessary to increase safety belt use and to decrease the mortality and injuries caused by traffic accidents.
From Here to There: Lessons from an Integrative Patient Safety Project in Rural Health Care Settings
2005-05-01
errors and patient falls. The medication errors generally involved one of three issues: incorrect dose, time, or port. Although most of the health...statistics about trends; and the summary of events related to patient safety and medical errors. The interplay among factors: these three domains...the medical staff. We explored these issues further when administering a staff-wide Patient Safety Survey. Responses mirrored the findings that
Statistical uncertainty in the Medicare shared savings program.
DeLia, Derek; Hoover, Donald; Cantor, Joel C
2012-01-01
Analyze statistical risks facing CMS and Accountable Care Organizations (ACOs) under the Medicare Shared Savings Program (MSSP). We calculate the probability that shared savings formulas lead to inappropriate payment, payment denial, and/or financial penalties, assuming that ACOs generate real savings in Medicare spending ranging from 0-10%. We also calculate expected payments from CMS to ACOs under these scenarios. The probability of an incorrect outcome is heavily dependent on ACO enrollment size. For example, in the MSSP two-sided model, an ACO with 5,000 enrollees that keeps spending constant faces a 0.24 probability of being inappropriately rewarded for savings and a 0.26 probability of paying an undeserved penalty for increased spending. For an ACO with 50,000 enrollees, both of these probabilities of incorrect outcomes are equal to 0.02. The probability of inappropriate payment denial declines as real ACO savings increase. Still, for ACOs with 5,000 patients, the probability of denial is at least 0.15 even when true savings are 5-7%. Depending on ACO size and the real ACO savings rate, expected ACO payments vary from $115,000 to $35.3 million. Our analysis indicates there may be greater statistical uncertainty in the MSSP than previously recognized. CMS and ACOs will have to consider this uncertainty in their financial, administrative, and care management planning. We also suggest analytic strategies that can be used to refine ACO payment formulas in the longer term to ensure that the MSSP (and other ACO initiatives that will be influenced by it) work as efficiently as possible.
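The dependence of error probabilities on enrollment size described above follows from sampling variation in mean spending. A crude normal-approximation sketch; the coefficient of variation and minimum savings rate used here are illustrative assumptions, not the MSSP's actual parameters, so the probabilities differ from those reported in the abstract:

```python
from math import sqrt
from statistics import NormalDist

def p_false_savings(n, cv=1.0, msr=0.02):
    """P(observed savings rate exceeds the minimum savings rate msr)
    when true savings are zero, under a normal approximation in which
    the observed rate has standard error cv / sqrt(n).

    cv is an assumed coefficient of variation of per-beneficiary
    spending; both cv and msr are illustrative, not MSSP parameters.
    """
    se = cv / sqrt(n)
    return 1.0 - NormalDist().cdf(msr / se)

# The false-reward probability shrinks sharply with ACO enrollment size.
print(round(p_false_savings(5_000), 3))   # -> 0.079
print(round(p_false_savings(50_000), 3))  # -> 0.0
```

The same logic drives the abstract's contrast between 5,000-enrollee and 50,000-enrollee ACOs: a tenfold increase in enrollment shrinks the standard error by a factor of √10, collapsing the probability of an incorrect outcome.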
NASA Astrophysics Data System (ADS)
Barrie, A.; Gliese, U.; Gershman, D. J.; Avanov, L. A.; Rager, A. C.; Pollock, C. J.; Dorelli, J.
2015-12-01
The Fast Plasma Investigation (FPI) on the Magnetospheric Multiscale mission (MMS) combines data from eight spectrometers, each with four deflection states, into a single map of the sky. Any systematic discontinuity, artifact, or noise source present in this map may be misinterpreted as legitimate data, leading to incorrect conclusions. For this reason it is desirable for all spectrometers to return the same output for a given input, and for this output to be low in noise and other errors. While many missions use statistical analyses of data to calibrate instruments in flight, this process is difficult for FPI for two reasons: (1) only a small fraction of high-resolution data is downloaded to the ground due to bandwidth limitations, and (2) the data that are downloaded are, by definition, scientifically interesting and therefore not ideal for calibration. FPI uses a suite of new tools to calibrate in flight. A new method for detection system ground calibration has been developed that involves sweeping the detection threshold to fully define the pulse height distribution. This method has now been extended for use in flight as a means to calibrate the MCP voltage and threshold (together forming the operating point) of the Dual Electron Spectrometers (DES) and Dual Ion Spectrometers (DIS). A method of comparing higher-energy data (which have a low fractional voltage error) to lower-energy data (which have a higher fractional voltage error) will be used to calibrate the high-voltage outputs. Finally, a comparison of pitch angle distributions will be used to find remaining discrepancies among sensors. Initial flight results from the four MMS observatories are discussed here, specifically data from initial commissioning, inter-instrument cross-calibration and interference testing, and initial Phase 1A routine calibration, together with the success and performance of the in-flight calibration and its deviation from the ground calibration.
A geostatistical state-space model of animal densities for stream networks.
Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H
2018-06-21
Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty under-estimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved the model accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania collected over 34 years from 1981 - 2014. We found the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures with low temporal autocorrelation and moderately-high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved. 
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
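The bootstrap approach favored in this study resamples subjects and repeats the whole weighted analysis in each replicate. A simplified sketch using IPTW with a weighted mean difference in place of a weighted Cox model, on simulated data; note that a full bootstrap would also re-estimate the propensity score inside each replicate, which is skipped here only because the true score is known by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data (illustrative, not the paper's design):
# confounder x drives both treatment assignment and outcome y.
n = 2000
x = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-x))            # true propensity score
t = rng.binomial(1, p_treat)
y = 1.0 * t + 2.0 * x + rng.normal(size=n)    # true treatment effect = 1.0
w = t / p_treat + (1 - t) / (1 - p_treat)     # IPTW weights for the ATE

def weighted_effect(y, t, w):
    """IPTW estimate: weighted mean outcome difference, treated vs control."""
    treated = np.average(y[t == 1], weights=w[t == 1])
    control = np.average(y[t == 0], weights=w[t == 0])
    return treated - control

# Resample subjects and repeat the weighted analysis in each replicate;
# the spread of the replicate estimates is the bootstrap standard error.
boot = np.array([
    weighted_effect(y[idx], t[idx], w[idx])
    for idx in (rng.integers(0, n, n) for _ in range(500))
])
print(round(weighted_effect(y, t, w), 2), "+/-", round(boot.std(ddof=1), 3))
```

The study's point is that this resampling-based standard error has approximately correct coverage, whereas naïve model-based standard errors computed as if the weights were fixed constants do not.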
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. 
This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
The Problem of Auto-Correlation in Parasitology
Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick
2012-01-01
Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics, and so the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, to control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts pose considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data, in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
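The inferential danger described here is easy to reproduce: applying a test that assumes independent residuals to autocorrelated series inflates the false-positive rate far above its nominal level. A minimal simulation (parameters illustrative, not drawn from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_series(n, rho, rng):
    """AR(1) residual series: e[t] = rho * e[t-1] + white noise."""
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    return e

# Two unrelated but strongly autocorrelated series, tested naively for a
# mean difference as if every observation were independent.
n, reps, rho = 50, 400, 0.9
false_pos = 0
for _ in range(reps):
    a, b = ar1_series(n, rho, rng), ar1_series(n, rho, rng)
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)  # assumes independence
    if abs(a.mean() - b.mean()) / se > 1.96:
        false_pos += 1
print(false_pos / reps)  # far above the nominal 0.05
```

With ρ = 0.9 the effective sample size is a small fraction of n, so "significant" differences between identically distributed series appear in a large share of runs; mixed effects models with an appropriate correlation structure are one remedy.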
Van Nuffel, A; Tuyttens, F A M; Van Dongen, S; Talloen, W; Van Poucke, E; Sonck, B; Lens, L
2007-12-01
Nonidentical development of bilateral traits due to disturbing genetic or developmental factors is called fluctuating asymmetry (FA) if such deviations are continuously distributed. Fluctuating asymmetry is believed to be a reliable indicator of the fitness and welfare of an animal. Despite an increasing body of research, the link between FA and animal performance or welfare is reported to be inconsistent, possibly, among other reasons, due to inaccurate measuring protocols or incorrect statistical analyses. This paper reviews problems of interpreting FA results in poultry and provides guidelines for the measurement and analysis of FA, applied to broilers. A wide range of morphological traits were measured by 7 different techniques (ranging from measurements on living broilers or intact carcasses to X-rays, bones, and digital images) and evaluated for their applicability to estimate FA. Following 4 selection criteria (significant FA, absence of directional asymmetry or antisymmetry, absence of between-trait correlation in signed FA values, and high signal-to-noise ratio), from 3 to 14 measurements per method were found suitable for estimating the degree of FA. The accuracy of FA estimates was positively related to the complexity and time investment of the measuring method. In addition, our study clearly shows the importance of securing adequate statistical power when designing FA studies. Repeatability analyses of FA estimates indicated the need for larger sample sizes, more repeated measurements, or both, than are commonly used in FA studies.
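The first two selection criteria the authors apply (significant FA, absence of directional asymmetry) can be made concrete with a short sketch. The code below computes the signed asymmetry and a simple unsigned FA index (often called FA1); the tarsus measurements are hypothetical values invented for illustration, not data from the study:

```python
def fa_indices(left, right):
    """Signed asymmetry (R - L) and the unsigned FA1 index.

    True fluctuating asymmetry requires the signed values to be
    centred on zero (no directional asymmetry) and unimodally,
    normally distributed (no antisymmetry); only the centring
    check is sketched here.
    """
    signed = [r - l for l, r in zip(left, right)]
    n = len(signed)
    mean_signed = sum(signed) / n            # ~0 if no directional asymmetry
    fa1 = sum(abs(s) for s in signed) / n    # mean absolute asymmetry
    return mean_signed, fa1

# Hypothetical tarsus lengths (mm) for eight broilers.
left  = [70.2, 69.8, 71.1, 70.5, 69.9, 70.8, 70.1, 70.4]
right = [70.0, 70.1, 70.9, 70.8, 69.7, 70.9, 70.3, 70.2]
mean_signed, fa1 = fa_indices(left, right)
```

Because measurement error contributes directly to FA1, the repeatability analyses the authors describe (repeated measurements of the same trait) are needed to show that the asymmetry signal exceeds measurement noise.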
Prolonged grief: where to after Diagnostic and Statistical Manual of Mental Disorders, 5th Edition?
Bryant, Richard A
2014-01-01
Although there is much evidence for the construct of prolonged grief, there was much controversy over the proposal to introduce a prolonged grief diagnosis into Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5), and it was finally rejected as a diagnosis in DSM-5. This review outlines the evidence for and against the diagnosis, and highlights the implications of the DSM-5 decision. Convergent evidence indicates that prolonged grief characterized by persistently severe yearning for the deceased is a distinct construct from bereavement-related depression and anxiety, is associated with marked functional impairment, is responsive to targeted treatments for prolonged grief, and has been validated across different cultures, age groups, and types of bereavement. Although DSM-5 has rejected the construct as a formal diagnosis, evidence continues to emerge on related mechanisms, including maladaptive appraisals, memory and attentional processes, immunological and arousal responses, and neural circuitry. It is most likely that the International Classification of Diseases (ICD-11) will introduce a diagnosis to recognize prolonged grief, even though DSM-5 has decided against this option. The DSM-5 decision may result in more prolonged grief patients being incorrectly diagnosed with depression after bereavement and possibly incorrectly treated. The DSM-5 decision is unlikely to impact on future research agendas.
Visualizing Statistical Mix Effects and Simpson's Paradox.
Armstrong, Zan; Wattenberg, Martin
2014-12-01
We discuss how "mix effects" can surprise users of visualizations and potentially lead them to incorrect conclusions. This statistical issue (also known as "omitted variable bias" or, in extreme cases, as "Simpson's paradox") is widespread and can affect any visualization in which the quantity of interest is an aggregated value such as a weighted sum or average. Our first contribution is to document how mix effects can be a serious issue for visualizations, and we analyze how mix effects can cause problems in a variety of popular visualization techniques, from bar charts to treemaps. Our second contribution is a new technique, the "comet chart," that is meant to ameliorate some of these issues.
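The reversal the authors describe can be shown with the widely cited kidney-stone treatment example (figures as commonly reported in the Simpson's paradox literature, shown here purely as an illustration, not taken from this paper):

```python
def rate(successes, total):
    """Success proportion for one (successes, total) cell."""
    return successes / total

# (successes, total) per stone-size stratum and treatment.
small = {"A": (81, 87),   "B": (234, 270)}
large = {"A": (192, 263), "B": (55, 80)}

# Within each stratum, treatment A has the higher success rate ...
assert rate(*small["A"]) > rate(*small["B"])
assert rate(*large["A"]) > rate(*large["B"])

# ... yet after aggregation the ranking flips, because B was applied
# mostly to the easier (small-stone) cases: the "mix" differs.
a_total = (81 + 192, 87 + 263)   # (273, 350)
b_total = (234 + 55, 270 + 80)   # (289, 350)
assert rate(*a_total) < rate(*b_total)
```

Any visualization that plots only the aggregated rates (the last two numbers) silently hides the within-stratum ordering, which is the danger the comet chart is designed to expose.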
Investigating species co-occurrence patterns when species are detected imperfectly
MacKenzie, D.I.; Bailey, L.L.; Nichols, J.D.
2004-01-01
1. Over the last 30 years there has been a great deal of interest in investigating patterns of species co-occurrence across a number of locations, which has led to the development of numerous methods to determine whether there is evidence that a particular pattern may not have occurred by random chance. 2. A key aspect that seems to have been largely overlooked is the possibility that species may not always be detected at a location when present, which leads to 'false absences' in a species presence/absence matrix that may cause incorrect inferences to be made about co-occurrence patterns. Furthermore, many of the published methods for investigating patterns of species co-occurrence do not account for potential differences in the site characteristics that may partially (at least) explain non-random patterns (e.g. due to species having similar/different habitat preferences). 3. Here we present a statistical method for modelling co-occurrence patterns between species while accounting for imperfect detection and site characteristics. This method requires that multiple presence/absence surveys for the species be conducted over a reasonably short period of time at most sites. The method yields unbiased estimates of probabilities of occurrence, and is practical when the number of species is small (< 4). 4. To illustrate the method we consider data collected on two terrestrial salamander species, Plethodon jordani and members of the Plethodon glutinosus complex, collected in the Great Smoky Mountains National Park, USA. We find no evidence that the species do not occur independently at sites once site elevation has been allowed for, although we find some evidence of a statistical interaction between species in terms of detectability that we suggest may be due to changes in relative abundances.
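The size of the "false absence" bias is easy to quantify for a single species, which motivates the multiple-survey design the authors require. The sketch below uses standard occupancy-model notation (psi for occupancy, p for per-survey detection); the numeric values are hypothetical, chosen only to illustrate the bias:

```python
def naive_occupancy(psi, p, k):
    """Expected value of the naive occupancy estimate under
    imperfect detection.

    psi : true probability a site is occupied
    p   : per-survey detection probability, given presence
    k   : number of repeat surveys per site

    A site is scored "present" only if the species is detected at
    least once, so false absences shrink the estimate below psi.
    """
    detect_at_least_once = 1 - (1 - p) ** k
    return psi * detect_at_least_once

# With psi = 0.8 and p = 0.3, a single survey recovers only 0.24;
# even five surveys leave a noticeable downward bias.
one_survey  = naive_occupancy(0.8, 0.3, 1)
five_surveys = naive_occupancy(0.8, 0.3, 5)
```

In the two-species setting the same mechanism distorts apparent co-occurrence, which is why the authors estimate detection and occupancy jointly rather than working from the raw presence/absence matrix.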
Lee, Chi Hyun; Luo, Xianghua; Huang, Chiung-Yu; DeFor, Todd E; Brunstein, Claudio G; Weisdorf, Daniel J
2016-06-01
Infection is one of the most common complications after hematopoietic cell transplantation. Many patients experience infectious complications repeatedly after transplant. Existing statistical methods for recurrent gap time data typically assume that patients are enrolled due to the occurrence of an event of interest, and subsequently experience recurrent events of the same type; moreover, for one-sample estimation, the gap times between consecutive events are usually assumed to be identically distributed. Applying these methods to analyze the post-transplant infection data will inevitably lead to incorrect inferential results because the time from transplant to the first infection has a different biological meaning than the gap times between consecutive recurrent infections. Some unbiased yet inefficient methods include univariate survival analysis methods based on data from the first infection or bivariate serial event data methods based on the first and second infections. In this article, we propose a nonparametric estimator of the joint distribution of time from transplant to the first infection and the gap times between consecutive infections. The proposed estimator takes into account the potentially different distributions of the two types of gap times and better uses the recurrent infection data. Asymptotic properties of the proposed estimators are established. © 2015, The International Biometric Society.
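The "unbiased yet inefficient" univariate alternative mentioned above, survival analysis of the time to first infection only, typically means a Kaplan-Meier estimate. A minimal self-contained sketch follows; the (time, event) pairs are hypothetical and chosen only so the arithmetic is easy to check:

```python
def kaplan_meier(data):
    """Kaplan-Meier survival estimates from (time, event) pairs,
    where event = 1 marks an observed first infection and
    event = 0 marks censoring (no infection by last follow-up).
    Returns {event_time: S(t)} at each distinct event time.
    Ties between events and censorings follow the usual convention
    that events occur first (censored subjects stay in the risk set).
    """
    data = sorted(data)
    n_at_risk = len(data)
    s, out = 1.0, {}
    i = 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for tt, e in data if tt == t and e == 1)
        ties = sum(1 for tt, _ in data if tt == t)
        if events:
            s *= 1 - events / n_at_risk
            out[t] = s
        n_at_risk -= ties
        i += ties
    return out

# Hypothetical days from transplant to first infection for 5 patients;
# event = 0 marks patients censored without infection.
surv = kaplan_meier([(2, 1), (3, 1), (3, 0), (5, 1), (8, 0)])
```

This uses only the first gap time per patient and discards the recurrent infections, which is exactly the inefficiency the proposed joint nonparametric estimator is designed to avoid.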
Selected Problems of Applying the Law in Adaptation and Modernization of Buildings in Poland
NASA Astrophysics Data System (ADS)
Korbel, Wojciech
2016-06-01
One of the major problems in the contemporary process of building modernization in Poland is the plurality of differing interpretations of key legal terms in the current building code. Incorrect interpretation results in incorrect applications to the authorities for the required building permit and, in effect, causes the loss of time and money. The article identifies some of these problems and seeks ways to solve them through the evolutionary method of building-law creation.
Lee, Gregory P; Park, Yong D; Hempel, Ann; Westerveld, Michael; Loring, David W
2002-09-01
Because the capacity of intracarotid amobarbital (Wada) memory assessment to predict seizure-onset laterality in children has not been thoroughly investigated, three comprehensive epilepsy surgery centers pooled their data and examined Wada memory asymmetries to predict side of seizure onset in children being considered for epilepsy surgery. One hundred fifty-two children with intractable epilepsy underwent Wada testing. Although the type and number of memory stimuli and methods varied at each institution, all children were presented with six to 10 items soon after amobarbital injection. After return to neurologic baseline, recognition memory for the stimuli was assessed. Seizure onset was determined by simultaneous video-EEG recordings of multiple seizures. In children with unilateral temporal lobe seizures (n = 87), Wada memory asymmetries accurately predicted seizure laterality to a statistically significant degree. Wada memory asymmetries also correctly predicted side of seizure onset in children with extra-temporal lobe seizures (n = 65). Although individual patient prediction accuracy was statistically significant in temporal lobe cases, onset laterality was incorrectly predicted in up to 52% of children with left temporal lobe seizure onset, depending on the methods and asymmetry criterion used. There also were significant differences between Wada prediction accuracy across the three epilepsy centers. Results suggest that Wada memory assessment is useful in predicting side of seizure onset in many children. However, Wada memory asymmetries should be interpreted more cautiously in children than in adults.
Net Improvement of Correct Answers to Therapy Questions After PubMed Searches: Pre/Post Comparison
Keepanasseril, Arun
2013-01-01
Background Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. Objective To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). Methods We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. Results We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. Conclusions PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers. 
PMID:24217329
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. 
To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
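The c-statistic at issue here is simply a concordance probability, and computing it from scratch makes clear why it is sensitive to case-mix: it depends only on the rank ordering of predicted risks among cases and controls in whatever population is at hand. A minimal sketch, with hypothetical predicted risks rather than data from the study:

```python
def c_statistic(case_scores, control_scores):
    """Concordance (c-statistic / AUC): the probability that a
    randomly chosen case receives a higher predicted score than a
    randomly chosen control; tied scores count one half."""
    wins = 0.0
    for c in case_scores:
        for d in control_scores:
            if c > d:
                wins += 1.0
            elif c == d:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical predicted risks in an external validation sample:
c_val = c_statistic([0.9, 0.7, 0.4], [0.2, 0.7, 0.3])
```

A narrower case-mix compresses the score distributions and lowers this value even when every regression coefficient is correct, which is exactly why the abstract insists on disentangling the two explanations.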
The importance of topographically corrected null models for analyzing ecological point processes.
McDowall, Philip; Lynch, Heather J
2017-07-01
Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
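The distance distortion at the heart of this argument is easy to quantify in the simplest case of a uniformly sloped surface. The following sketch is an illustration of the geometric point only, not the authors' correction method; the 30% gradient is an arbitrary assumed value:

```python
import math

def surface_distance(planar_distance, gradient):
    """Distance along a uniformly sloped surface for a given planar
    (map) distance, where gradient is rise/run (e.g. 0.3 = 30%).

    Projection to the x-y plane shortens every distance by the
    factor sqrt(1 + gradient**2), so second-order point-process
    statistics computed in the plane see spurious clustering."""
    return planar_distance * math.sqrt(1.0 + gradient ** 2)

# Two occurrences 100 m apart on the map, on a 30% slope:
d = surface_distance(100.0, 0.3)   # ~104.4 m along the ground
```

Real topography varies in slope from place to place, so the distortion is spatially inhomogeneous; that is why the authors simulate null point patterns on the surface itself rather than applying a single global correction factor.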
Williams, Mobolaji
2018-01-01
The field of disordered systems in statistical physics provides many simple models in which the competing influences of thermal and nonthermal disorder lead to new phases and nontrivial thermal behavior of order parameters. In this paper, we add a model to the subject by considering a disordered system where the state space consists of various orderings of a list. As in spin glasses, the disorder of such "permutation glasses" arises from a parameter in the Hamiltonian being drawn from a distribution of possible values, thus allowing nominally "incorrect orderings" to have lower energies than "correct orderings" in the space of permutations. We analyze a Gaussian, uniform, and symmetric Bernoulli distribution of energy costs, and, by employing Jensen's inequality, derive a simple condition requiring the permutation glass to always transition to the correctly ordered state at a temperature lower than that of the nondisordered system, provided that this correctly ordered state is accessible. We in turn find that in order for the correctly ordered state to be accessible, the probability that an incorrectly ordered component is energetically favored must be less than the inverse of the number of components in the system. We show that all of these results are consistent with a replica symmetric ansatz of the system. We conclude by arguing that there is no distinct permutation glass phase for the simplest model considered here and by discussing how to extend the analysis to more complex Hamiltonians capable of novel phase behavior and replica symmetry breaking. Finally, we outline an apparent correspondence between the presented system and a discrete-energy-level fermion gas. In all, the investigation introduces a class of exactly soluble models into statistical mechanics and provides a fertile ground to investigate statistical models of disorder.
Data Adjustments for TRACE-P, INTEX-A and INTEX-B
Atmospheric Science Data Center
2013-08-06
... that time, we have done repeated calibrations with two other methods: measuring the production of ozone from oxygen photolysis and the ... this notification. All of our four calibration methods indicate that the PMT calibration is incorrect, but they differ in the ...
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2014 CFR
2014-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2012 CFR
2012-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2013 CFR
2013-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2011 CFR
2011-10-01
... technically incorrect. Because scientific and technical correctness is often a matter of degree rather than...), appellants are required to demonstrate that alternative methods or applications result in more correct... due to error in application of hydrologic, hydraulic or other methods or use of inferior data in...
Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan
2018-02-01
To assess the effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Using Excel's Solver Function to Facilitate Reciprocal Service Department Cost Allocations
ERIC Educational Resources Information Center
Leese, Wallace R.
2013-01-01
The reciprocal method of service department cost allocation requires linear equations to be solved simultaneously. These computations are often so complex as to cause the abandonment of the reciprocal method in favor of the less sophisticated and theoretically incorrect direct or step-down methods. This article illustrates how Excel's Solver…
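The simultaneous equations of the reciprocal method are small linear systems, and any equation solver will do; Excel's Solver is one option, and the fixed-point sketch below is another. The two-department figures are hypothetical, invented only to show the mechanics:

```python
def reciprocal_allocation(direct_costs, shares):
    """Solve the reciprocal service-department cost equations
    x_i = direct_i + sum_j shares[i][j] * x_j
    by fixed-point iteration (converges because allocation
    fractions are well below 1).

    direct_costs[i] -- direct cost of service department i
    shares[i][j]    -- fraction of department j's total cost
                       allocated to department i
    Returns the fully reciprocated total cost of each department.
    """
    n = len(direct_costs)
    x = list(direct_costs)
    for _ in range(200):
        x = [direct_costs[i] +
             sum(shares[i][j] * x[j] for j in range(n) if j != i)
             for i in range(n)]
    return x

# Hypothetical example: S1 ($100 direct) sends 10% of its total cost
# to S2; S2 ($200 direct) sends 20% of its total cost to S1.
totals = reciprocal_allocation(
    [100.0, 200.0],
    [[0.0, 0.2],    # S1 receives 20% of S2's total
     [0.1, 0.0]],   # S2 receives 10% of S1's total
)
# totals ~= [142.86, 214.29], matching the algebraic solution
# x1 = 1000/7, x2 = 1500/7.
```

The step-down method the abstract calls theoretically incorrect amounts to solving this system with the reciprocal terms zeroed out in one direction, which is why it misallocates cost whenever departments genuinely serve each other.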
Peschke, A; Blunck, U; Roulet, J F
2000-10-01
To determine the influence of incorrectly performed steps during the application of the water-based adhesive system OptiBond FL on the marginal adaptation of Class V composite restorations. In 96 extracted human teeth, Class V cavities were prepared. Half of the margin length was situated in dentin. The teeth were randomly divided into 12 groups. The cavities were filled with Prodigy resin-based composite in combination with OptiBond FL according to the manufacturer's instructions (Group O) and including several incorrect application steps: Group A: prolonged etching (60 s); Group B: no etching of dentin; Group C: excessive drying after etching; Group D: short rewetting after excessive drying; Group E: air drying and rewetting; Group F: blot drying; Group G: saliva contamination; Group H: application of primer and immediate drying; Group I: application of only primer; Group J: application of only adhesive; Group K: no light curing of the adhesive before the application of composite. After thermocycling, replicas were taken and the margins were quantitatively analyzed in the SEM. Statistical analysis of the results was performed using non-parametric procedures. With the exception of the "rewetting groups" (D and E) and the group with saliva contamination (G), all other application procedures showed a significantly higher amount of marginal openings in dentin compared to the control group (O). Margin quality in enamel was only affected when the primer was not applied.
Long-Branch Attraction Bias and Inconsistency in Bayesian Phylogenetics
Kolaczkowski, Bryan; Thornton, Joseph W.
2009-01-01
Bayesian inference (BI) of phylogenetic relationships uses the same probabilistic models of evolution as its precursor maximum likelihood (ML), so BI has generally been assumed to share ML's desirable statistical properties, such as largely unbiased inference of topology given an accurate model and increasingly reliable inferences as the amount of data increases. Here we show that BI, unlike ML, is biased in favor of topologies that group long branches together, even when the true model and prior distributions of evolutionary parameters over a group of phylogenies are known. Using experimental simulation studies and numerical and mathematical analyses, we show that this bias becomes more severe as more data are analyzed, causing BI to infer an incorrect tree as the maximum a posteriori phylogeny with asymptotically high support as sequence length approaches infinity. BI's long branch attraction bias is relatively weak when the true model is simple but becomes pronounced when sequence sites evolve heterogeneously, even when this complexity is incorporated in the model. This bias—which is apparent under both controlled simulation conditions and in analyses of empirical sequence data—also makes BI less efficient and less robust to the use of an incorrect evolutionary model than ML. Surprisingly, BI's bias is caused by one of the method's stated advantages—that it incorporates uncertainty about branch lengths by integrating over a distribution of possible values instead of estimating them from the data, as ML does. Our findings suggest that trees inferred using BI should be interpreted with caution and that ML may be a more reliable framework for modern phylogenetic analysis. PMID:20011052
Cheung, Gordon; Goonewardene, Mithran Suresh; Islam, Syed Mohammed Shamsul; Murray, Kevin; Koong, Bernard
2013-05-01
To assess the validity of using jugale (J) and antegonion (Ag) on posterior-anterior cephalograms (PAC) as landmarks for transverse intermaxillary analysis when compared with Cone Beam Computed Tomography (CBCT). Conventional PAC and CBCT images were taken of 28 dry skulls. Craniometric measurements between the bilateral landmarks, antegonion and jugale, were obtained from the skulls using a microscribe and recorded as the base standard. The corresponding landmarks were identified and measured on CBCT and PAC and compared with the base standard measurements. The accuracy and reliability of the measurements were statistically evaluated and the validity was assessed by comparing the ability of the two image modalities to accurately diagnose an arbitrarily selected J-J/Ag-Ag ratio. All measurements were repeated at least 7 weeks apart. Intra-class correlations (ICC) and Bland-Altman plots were used to analyse the data. All three methods were shown to be reliable as all had a mean error of less than 0.5 mm between repeated measurements. When compared with the base standard, CBCT measurements were shown to have higher agreement (ICC: 0.861-0.964) compared with measurements taken from PAC (ICC: 0.794-0.796). When the arbitrary J-J/Ag-Ag ratio was assessed, 18 per cent of cases were incorrectly diagnosed with a transverse discrepancy on the PAC compared with the CBCT, which incorrectly diagnosed 8.7 per cent. CBCT was shown to be more reliable in assessing intermaxillary transverse discrepancy compared with PAC when using J-J/Ag-Ag ratios.
Beck, Dano W; Lalota, Marlene; Metsch, Lisa R; Cardenas, Gabriel A; Forrest, David W; Lieb, Spencer; Liberti, Thomas M
2012-04-01
Misconceptions about HIV transmission and prevention may inhibit individuals' accurate assessment of their level of risk. We used venue-based sampling to conduct a cross-sectional study of heterosexually active adults (N = 1,221) within areas exhibiting high poverty and HIV/AIDS rates in Miami-Dade and Broward counties in 2007. Two logistic regression analyses identified correlates of holding inaccurate beliefs about HIV transmission and prevention. Belief in incorrect HIV prevention methods (27.2%) and modes of transmission (38.5%) was common. Having at least one incorrect prevention belief was associated with being Hispanic compared to white (non-Hispanic), being depressed, and not knowing one's HIV status. Having at least one incorrect transmission belief was associated with being younger, heavy alcohol use, being depressed, not having seen a physician in the past 12 months, and not knowing one's HIV status. Among low-income heterosexuals, HIV prevention and transmission myths are widespread. Debunking them could have HIV prevention value.
WASP (Write a Scientific Paper) using Excel - 4: Histograms.
Grech, Victor
2018-02-01
Plotting data into graphs is a crucial step in data analysis as part of an initial descriptive statistics exercise since it gives the researcher an overview of the shape and nature of the data. Outlier values may also be identified, and these may be incorrect data, or true and important outliers. This paper explains how to access Microsoft Excel's Analysis Toolpak and provides some pointers for the utilisation of the histogram tool within the Toolpak. Copyright © 2018. Published by Elsevier B.V.
42 CFR 489.41 - Timing and methods of handling.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Timing and methods of handling. 489.41 Section 489.41 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION PROVIDER AGREEMENTS AND SUPPLIER APPROVAL Handling of Incorrect...
42 CFR 489.41 - Timing and methods of handling.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 5 2011-10-01 2011-10-01 false Timing and methods of handling. 489.41 Section 489.41 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION PROVIDER AGREEMENTS AND SUPPLIER APPROVAL Handling of Incorrect...
Morishita, Junji; Watanabe, Hideyuki; Katsuragawa, Shigehiko; Oda, Nobuhiro; Sukenobu, Yoshiharu; Okazaki, Hiroko; Nakata, Hajime; Doi, Kunio
2005-01-01
The aim of the study was to survey misfiled cases in a picture archiving and communication system environment at two hospitals and to demonstrate the potential usefulness of an automated patient recognition method for posteroanterior chest radiographs based on a template-matching technique designed to prevent filing errors. We surveyed misfiled cases obtained from different modalities in one hospital for 25 months, and misfiled cases of chest radiographs in another hospital for 17 months. For investigating the usefulness of an automated patient recognition and identification method for chest radiographs, a prospective study has been completed in clinical settings at the latter hospital. The total numbers of misfiled cases for different modalities in one hospital and for chest radiographs in another hospital were 327 and 22, respectively. The misfiled cases in the two hospitals were mainly the result of human errors (eg, incorrect manual entries of patient information, incorrect usage of identification cards in which an identification card for the previous patient was used for the next patient's image acquisition). The prospective study indicated the usefulness of the computerized method for discovering misfiled cases with a high performance (ie, an 86.4% correct warning rate for different patients and 1.5% incorrect warning rate for the same patients). We confirmed the occurrence of misfiled cases in the two hospitals. The automated patient recognition and identification method for chest radiographs would be useful in preventing wrong images from being stored in the picture archiving and communication system environment.
Hasegawa, Yoshinori; Shiota, Yuki; Ota, Chihiro; Yoneda, Takeshi; Tahara, Shigeyuki; Maki, Nobukazu; Matsuura, Takahiro; Sekiguchi, Masahiro; Itoigawa, Yoshiaki; Tateishi, Tomohiko; Kaneko, Kazuo
2018-01-01
Objectives To characterise the tackler’s head position during one-on-one tackling in rugby and to determine the incidence of head, neck and shoulder injuries through analysis of game videos, injury records and a questionnaire completed by the tacklers themselves. Methods We randomly selected 28 game videos featuring two university teams in competitions held in 2015 and 2016. Tackles were categorised according to tackler’s head position. The ‘pre-contact phase’ was defined; its duration and the number of steps taken by the ball carrier prior to a tackle were evaluated. Results In total, 3970 tackles, including 317 (8.0%) with the tackler’s head incorrectly positioned (ie, in front of the ball carrier) were examined. Thirty-two head, neck or shoulder injuries occurred for an injury incidence of 0.8% (32/3970). The incidence of injury in tackles with incorrect head positioning was 69.4/1000 tackles; the injury incidence with correct head positioning (ie, behind or to one side of the ball carrier) was 2.7/1000 tackles. Concussions, neck injuries, ‘stingers’ and nasal fractures occurred significantly more often during tackles with incorrect head positioning than during tackles with correct head positioning. Significantly fewer steps were taken before tackles with incorrect head positioning that resulted in injury than before tackles that did not result in injury. Conclusion Tackling with incorrect head position relative to the ball carrier resulted in a significantly higher incidence of concussions, neck injuries, stingers and nasal fractures than tackling with correct head position. Tackles with shorter duration and distance before contact resulted in more injuries. PMID:29162618
A New Model for Inquiry: Is the Scientific Method Dead?
ERIC Educational Resources Information Center
Harwood, William S.
2004-01-01
There has been renewed discussion of the scientific method, with many voices arguing that it presents a very limited or even wholly incorrect image of the way science is really done. At the same time, the idea of a scientific method is pervasive. This article identifies the scientific method as a simple model for the process of scientific inquiry.…
Rohrmeier, Martin A; Cross, Ian
2014-07-01
Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies. Copyright © 2014 Elsevier Inc. All rights reserved.
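The chunk-familiarity idea behind the n-gram models above can be illustrated with a toy sketch: train bigram counts on "grammatical" exposure strings, then score test strings by the fraction of their bigrams seen before. The alphabet and strings here are invented, and this is a stand-in for the paper's models, not a reimplementation:

```python
from collections import Counter

def train_bigrams(strings):
    """Count letter bigrams across a set of exposure strings."""
    counts = Counter()
    for s in strings:
        counts.update(s[i:i + 2] for i in range(len(s) - 1))
    return counts

def familiarity(s, counts):
    """Fraction of a string's bigrams that occurred during exposure."""
    bigrams = [s[i:i + 2] for i in range(len(s) - 1)]
    return sum(1 for b in bigrams if counts[b] > 0) / len(bigrams)

grammar_sample = ["XVXS", "XVS", "XXVS"]  # made-up 'grammatical' strings
counts = train_bigrams(grammar_sample)
gram = familiarity("XVXS", counts)    # shares all bigrams with exposure
ungram = familiarity("SVXQ", counts)  # made-up 'ungrammatical' string
```

Ungrammatical sequences that nonetheless contain many grammatical chunks would score between these extremes, which is the mechanism the article proposes for above-chance untrained performance.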
Clinically used adhesive ceramic bonding methods: a survey in 2007, 2011, and in 2015.
Klosa, K; Meyer, G; Kern, M
2016-09-01
The objective of the study was to evaluate practices of dentists regarding adhesive cementation of all-ceramic restorations over a period of 8 years. The authors developed a questionnaire regarding adhesive cementation procedures for all-ceramic restorations. Restorations were categorized as made of either silicate ceramic or oxide ceramic. The questionnaire was handed out to all dentists participating in a local annual dental meeting in Northern Germany. The returned questionnaires were analyzed to identify incorrect cementation procedures based upon current evidence-based techniques from the scientific dental literature. The survey was conducted three times, in 2007, 2011, and 2015, and the results were compared. For silicate ceramic restorations, 38-69 % of the participants used evidence-based bonding procedures; most of the incorrect bonding methods did not use a silane-containing primer. In the case of oxide ceramic restorations, most participants did not use air-abrasion prior to bonding. Only a relatively low rate (7-14 %) of dentists used evidence-based techniques for bonding oxide ceramics. In adhesive cementation of all-ceramic restorations, the practices of surveyed dentists in Northern Germany revealed high rates of incorrect bonding. During the observation period, the rate of evidence-based bonding procedures for oxide ceramics improved while that for silicate ceramics declined. Based on these results, some survey participants need additional education in adhesive techniques. Neglecting scientifically accepted methods for adhesive cementation of all-ceramic restorations may result in reduced longevity of all-ceramic restorations.
Music listening for maintaining attention of older adults with cognitive impairments.
Gregory, Dianne
2002-01-01
Twelve older adults with cognitive impairments who were participants in weekly community-based group music therapy sessions, 6 older adults in an Alzheimer's caregivers' group, and 6 college student volunteers listened to a 3.5 minute prepared audiotape of instrumental excerpts of patriotic selections. The tape consisted of 7 excerpts ranging from 18 s to 34 s in duration. Each music excerpt was followed by a 7-9 s period of silence, a "wait" excerpt. Listeners were instructed to move a Continuous Response Digital Interface (CRDI) to the name of the music excerpt depicted on the CRDI overlay when they heard a music excerpt. Likewise, they were instructed to move the dial to the word "WAIT" when there was no music. They were also instructed to maintain the dial position for the duration of each music or silence excerpt. Statistical analysis indicated no significant differences between the caregivers' and the college students' group means for total dial changes, correct and incorrect recognitions, correct and incorrect responses to silence excerpts, and reaction times. The mean scores of these 2 groups were combined and compared with the mean scores of the group of elderly adults with cognitive impairments. The mean total dial changes were significantly lower for the listeners with cognitive impairments, resulting in significant differences in all of the other response categories except incorrect recognitions. In addition, their mean absence of response to silence excerpts was significantly higher than their mean absence of responding to music excerpts. Their mean reaction time was significantly slower than the comparison group's reaction time. To evaluate training effects, 10 of the original 12 music therapy participants repeated the listening task with assistance from the therapist (treatment) immediately following the first listening (baseline). A week later the order was reversed for the 2 listening trials. 
Statistical and graphic analysis of responses between first and second baseline responses indicate significant improvement in responses to silence and music excerpts over the 2 sessions. Applications of the findings to music listening interventions for maintaining attention, eliciting social interaction between clients or caregivers and their patients, and evaluating this population's affective responses to music are discussed.
Begum, Housne Ara; Mascie-Taylor, Cgn; Nahar, Shamsun
2007-01-01
To examine the efficiency of the Bangladesh Integrated Nutritional Program (BINP) in identifying which infants should be supplemented, whether full supplementation was given for the stipulated period of time, and whether the correct exit criteria from the supplementation programme were used. To test whether targeted food supplementation of infants between 6-12 months of age resulted in enhanced weight gain. Mallickbari Union, Bhaluka, a rural area located about 100 km north of Dhaka, Bangladesh. Five hundred and twenty-six infants followed for 6 to 12 months. Of the 526 infants studied, 368 should have received supplementation based on BINP criteria but only 111 infants (30%) did so, while a further 13% were incorrectly given supplementation. So in total over half (52.8%) of the sample was incorrectly identified for supplementation. In addition, less than a quarter of the infants received the full 90 days of supplementation and close to half of the infants exited the programme without the requisite weight gain. Infants were assigned to one of four groups: correctly supplemented, correctly non-supplemented, incorrectly supplemented or incorrectly non-supplemented. This classification provided natural controls; the correctly supplemented infants versus the incorrectly non-supplemented infants, and the correctly non-supplemented infants versus the incorrectly supplemented infants. There were no significant differences in weight gain between the correctly supplemented group and the incorrectly non-supplemented group or between the correctly non-supplemented and the incorrectly supplemented groups, nor was there any evidence of growth faltering in the incorrectly non-supplemented group. This study found serious programmatic deficiencies - inability to identify growth faltering in infants, failure to supplement for the full time period and incorrect exit procedures. There was no evidence that food supplementation had any impact on improving infant weight gain.
Tu, Yu-Kang; Gunnell, David; Gilthorpe, Mark S
2008-01-01
This article discusses three statistical paradoxes that pervade epidemiological research: Simpson's paradox, Lord's paradox, and suppression. These paradoxes have important implications for the interpretation of evidence from observational studies. This article uses hypothetical scenarios to illustrate how the three paradoxes are different manifestations of one phenomenon – the reversal paradox – depending on whether the outcome and explanatory variables are categorical, continuous or a combination of both; this renders the issues and remedies for any one to be similar for all three. Although the three statistical paradoxes occur in different types of variables, they share the same characteristic: the association between two variables can be reversed, diminished, or enhanced when another variable is statistically controlled for. Understanding the concepts and theory behind these paradoxes provides insights into some controversial or contradictory research findings. These paradoxes show that prior knowledge and underlying causal theory play an important role in the statistical modelling of epidemiological data, where incorrect use of statistical models might produce consistent, replicable, yet erroneous results. PMID:18211676
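The categorical form of the reversal paradox described above can be reproduced with the classic kidney-stone counts that are often used to illustrate Simpson's paradox (illustrative textbook numbers, not from this article):

```python
# (successes, total) per treatment and stone-size stratum:
data = {
    ("A", "small"): (81, 87),   ("A", "large"): (192, 263),
    ("B", "small"): (234, 270), ("B", "large"): (55, 80),
}

def success_rate(treatment, stratum=None):
    """Success proportion for a treatment, optionally within one stratum."""
    cells = [v for (t, s), v in data.items()
             if t == treatment and (stratum is None or s == stratum)]
    won = sum(succ for succ, total in cells)
    n = sum(total for succ, total in cells)
    return won / n

# Treatment A wins inside every stratum, yet B wins in the pooled table,
# because A was given the harder (large-stone) cases more often.
```

Here the stratum variable (stone size) plays the role of the statistically controlled third variable: conditioning on it reverses the pooled association.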
Cadarette, Suzanne M; Dickson, Leigh; Gignac, Monique AM; Beaton, Dorcas E; Jaglal, Susan B; Hawker, Gillian A
2007-01-01
Background The ability to locate those sampled has important implications for response rates and thus the success of survey research. The purpose of this study was to examine predictors of locating women requiring tracing using publicly available methods (primarily Internet searches), and to determine the additional benefit of vital statistics linkages. Methods Random samples of women aged 65–89 years residing in two regions of Ontario, Canada were selected from a list of those who completed a questionnaire between 1995 and 1997 (n = 1,500). A random sample of 507 of these women had been searched on the Internet as part of a feasibility pilot in 2001. All 1,500 women sampled were mailed a newsletter and information letter prior to recruitment by telephone in 2003 and 2004. Those with returned mail or incorrect telephone number(s) required tracing. Predictors of locating women were examined using logistic regression. Results Tracing was required for 372 (25%) of the women sampled, and of these, 181 (49%) were located. Predictors of locating women were: younger age, residing in less densely populated areas, having had a web-search completed in 2001, and listed name identified on the Internet prior to recruitment in 2003. Although vital statistics linkages to death records subsequently identified 41 subjects, these data were incomplete. Conclusion Prospective studies may benefit from using Internet resources at recruitment to determine the listed names for telephone numbers thereby facilitating follow-up tracing and improving response rates. Although vital statistics linkages may help to identify deceased individuals, these may be best suited for post hoc response rate adjustment. PMID:17577404
Terminology extraction from medical texts in Polish
Marciniak, Małgorzata; Mykowiecka, Agnieszka
2014-01-01
Background Hospital documents contain free text describing the most important facts relating to patients and their illnesses. These documents are written in specific language containing medical terminology related to hospital treatment. Their automatic processing can help in verifying the consistency of hospital documentation and obtaining statistical data. To perform this task we need information on the phrases we are looking for. At the moment, clinical Polish resources are sparse. The existing terminologies, such as Polish Medical Subject Headings (MeSH), do not provide sufficient coverage for clinical tasks. It would be helpful therefore if it were possible to automatically prepare, on the basis of a data sample, an initial set of terms which, after manual verification, could be used for the purpose of information extraction. Results Using a combination of linguistic and statistical methods for processing over 1,200 children's hospital discharge records, we obtained a list of single and multiword terms used in hospital discharge documents written in Polish. The phrases are ordered according to their presumed importance in domain texts measured by the frequency of use of a phrase and the variety of its contexts. The evaluation showed that the automatically identified phrases cover about 84% of terms in domain texts. Only 4% of the top 400 terms in the ranked list were incorrect, while among the final 200, 20% of expressions were either not domain related or syntactically incorrect. We also observed that 70% of the obtained terms are not included in the Polish MeSH. Conclusions Automatic terminology extraction can give results which are of a quality high enough to be taken as a starting point for building domain related terminological dictionaries or ontologies. This approach can be useful for preparing terminological resources for very specific subdomains for which no relevant terminologies already exist.
The evaluation performed showed that none of the tested ranking procedures were able to filter out all improperly constructed noun phrases from the top of the list. Careful choice of noun phrases is crucial to the usefulness of the created terminological resource in applications such as lexicon construction or acquisition of semantic relations from texts. PMID:24976943
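The ranking principle described above (frequency of use combined with variety of contexts) can be sketched crudely. The scoring rule and the occurrences below are invented stand-ins, not the paper's actual measure or data:

```python
from collections import defaultdict

def rank_terms(occurrences):
    """occurrences: (term, context) pairs pulled from text. Scores each
    candidate by frequency times the number of distinct contexts it
    appears in -- a crude heuristic in the spirit of the ranking above."""
    freq = defaultdict(int)
    contexts = defaultdict(set)
    for term, ctx in occurrences:
        freq[term] += 1
        contexts[term].add(ctx)
    score = {t: freq[t] * len(contexts[t]) for t in freq}
    return sorted(score, key=score.get, reverse=True)

# Invented occurrences for illustration:
occurrences = [("zapalenie pluc", "diagnosis"),
               ("zapalenie pluc", "treatment"),
               ("zapalenie pluc", "diagnosis"),
               ("pacjent", "diagnosis")]
ranked = rank_terms(occurrences)
```

A frequent term seen in varied contexts rises to the top of the list, which is where the evaluation above found the lowest error rate.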
A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
1997-01-01
Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to insure safety) can result in extra material usage and, hence, uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither too conservative nor nonconservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. The question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
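The failure mode described above is easy to demonstrate: apply the symmetric three-sigma rule to a right-skewed sample. The lognormal "strength" data below are hypothetical, chosen only to show the mismatch:

```python
import math
import random

random.seed(0)
# Hypothetical right-skewed strength sample (lognormal); every value is
# strictly positive by construction. Not data from the report.
data = [random.lognormvariate(0.0, 1.0) for _ in range(10_000)]

mean = sum(data) / len(data)
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

# Symmetric three-sigma lower "allowable", valid only under normality:
three_sigma_lower = mean - 3 * sd
```

The bound comes out negative even though no observation can be, so a normal-theory allowable badly misrepresents this skewed variable, which is exactly the motivation for the distribution-aware extreme-value approach of the report.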
Simpson's paradox in psychological science: a practical guide
Kievit, Rogier A.; Frankenhuis, Willem E.; Waldorp, Lourens J.; Borsboom, Denny
2013-01-01
The direction of an association at the population-level may be reversed within the subgroups comprising that population—a striking observation called Simpson's paradox. When facing this pattern, psychologists often view it as anomalous. Here, we argue that Simpson's paradox is more common than conventionally thought, and typically results in incorrect interpretations—potentially with harmful consequences. We support this claim by reviewing results from cognitive neuroscience, behavior genetics, clinical psychology, personality psychology, educational psychology, intelligence research, and simulation studies. We show that Simpson's paradox is most likely to occur when inferences are drawn across different levels of explanation (e.g., from populations to subgroups, or subgroups to individuals). We propose a set of statistical markers indicative of the paradox, and offer psychometric solutions for dealing with the paradox when encountered—including a toolbox in R for detecting Simpson's paradox. We show that explicit modeling of situations in which the paradox might occur not only prevents incorrect interpretations of data, but also results in a deeper understanding of what data tell us about the world. PMID:23964259
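One of the statistical markers the authors propose, the sign of within-subgroup associations disagreeing with the pooled association, can be checked directly. The two subgroups below are made up so that each has a perfectly negative trend while the pooled data trend upward (this is a toy check, not the authors' R toolbox):

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two invented subgroups, each with a perfectly negative trend:
g1 = ([0, 1, 2], [2, 1, 0])
g2 = ([5, 6, 7], [7, 6, 5])
pooled_r = pearson(g1[0] + g2[0], g1[1] + g2[1])
within_r = [pearson(*g1), pearson(*g2)]
```

A pooled correlation whose sign contradicts every subgroup correlation is the level-of-explanation warning sign the article describes.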
Choosing appropriate independent variable in educational experimental research: some errors debunked
NASA Astrophysics Data System (ADS)
Panjaitan, R. L.
2018-03-01
It is found that a number of quantitative research reports by beginning researchers, especially undergraduate students, tend to be 'merely' quantitative, without a proper understanding of the variables involved in the research. This paper focuses on some mistakes related to determining the independent variable in experimental research in education. Using a literature research methodology, data were gathered from an undergraduate student's thesis as a single non-human subject. The analysis yielded several findings, such as misinterpreted variables that should have represented the research question, and an unsuitable calculation of the coefficient of determination due to incorrect determination of the independent variable. When a researcher treats data as if they could behave as the independent variable when in fact they cannot, all of the subsequent data processing becomes pointless. These problems might lead to inaccurate research conclusions. In this paper, the problems are analysed and discussed. To avoid such errors in processing data, it is suggested that undergraduate students, as beginning researchers, acquire an adequate mastery of statistics. This study might serve as a resource for researchers in education, helping them to be aware of and avoid similar errors.
Ordering of the O-O stretching vibrational frequencies in ozone
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.; Lee, Timothy J.; Scheiner, Andrew C.; Schaefer, Henry F., III
1989-01-01
The ordering of nu1 and nu3 for O3 is incorrectly predicted by most theoretical methods, including some very high level methods. The first systematic electron correlation method based on a single reference configuration to solve this problem is the coupled cluster single and double excitation method. However, a relatively large basis set, triple zeta plus double polarization, is required. Comparison with other theoretical methods is made.
NASA Astrophysics Data System (ADS)
Zhou, Q.; Liu, L.
2017-12-01
Quantifying past mantle dynamic processes represents a major challenge in understanding the temporal evolution of the solid earth. Mantle convection modeling with data assimilation is one of the most powerful tools to investigate the dynamics of plate subduction and mantle convection. Although various data assimilation methods, both forward and inverse, have been created, these methods all have limitations in their capabilities to represent the real earth. Pure forward models tend to miss important mantle structures due to the incorrect initial condition and thus may lead to incorrect mantle evolution. In contrast, pure tomography-based models cannot effectively resolve the fine slab structure and would fail to predict important subduction-zone dynamic processes. Here we propose a hybrid data assimilation method that combines the unique power of the sequential and adjoint algorithms, which can properly capture the detailed evolution of the downgoing slab and the tomographically constrained mantle structures, respectively. We apply this new method to reconstructing mantle dynamics below the western U.S. while considering large lateral viscosity variations. By comparing this result with those from several existing data assimilation methods, we demonstrate that the hybrid modeling approach best recovers the realistic 4-D mantle dynamics.
Superiority of artificial neural networks for a genetic classification procedure.
Sant'Anna, I C; Tomaz, R S; Silva, G N; Nascimento, M; Bhering, L L; Cruz, C D
2015-08-19
The correct classification of individuals is extremely important for the preservation of genetic variability and for maximization of yield in breeding programs using phenotypic traits and genetic markers. The Fisher and Anderson discriminant functions are commonly used multivariate statistical techniques for these situations, which allow for the allocation of an initially unknown individual to predefined groups. However, for higher levels of similarity, such as those found in backcrossed populations, these methods have proven to be inefficient. Recently, much research has been devoted to developing a new paradigm of computing known as artificial neural networks (ANNs), which can be used to solve many statistical problems, including classification problems. The aim of this study was to evaluate the feasibility of ANNs as an evaluation technique of genetic diversity by comparing their performance with that of traditional methods. The discriminant functions were equally ineffective in discriminating the populations, with error rates of 23-82%, thereby preventing the correct discrimination of individuals between populations. The ANN was effective in classifying populations with low and high differentiation, such as those derived from a genetic design established from backcrosses, even in cases of low differentiation of the data sets. The ANN appears to be a promising technique to solve classification problems, since the number of individuals classified incorrectly by the ANN was always lower than that of the discriminant functions. We envisage the potential relevant application of this improved procedure in the genomic classification of markers to distinguish between breeds and accessions.
Fold assessment for comparative protein structure modeling.
Melo, Francisco; Sali, Andrej
2007-11-01
Accurate and automated assessment of both geometrical errors and incompleteness of comparative protein structure models is necessary for an adequate use of the models. Here, we describe a composite score for discriminating between models with the correct and incorrect fold. To find an accurate composite score, we designed and applied a genetic algorithm method that searched for a most informative subset of 21 input model features as well as their optimized nonlinear transformation into the composite score. The 21 input features included various statistical potential scores, stereochemistry quality descriptors, sequence alignment scores, geometrical descriptors, and measures of protein packing. The optimized composite score was found to depend on (1) a statistical potential z-score for residue accessibilities and distances, (2) model compactness, and (3) percentage sequence identity of the alignment used to build the model. The accuracy of the composite score was compared with the accuracy of assessment by single and combined features as well as by other commonly used assessment methods. The testing set was representative of models produced by automated comparative modeling on a genomic scale. The composite score performed better than any other tested score in terms of the maximum correct classification rate (i.e., 3.3% false positives and 2.5% false negatives) as well as the sensitivity and specificity across the whole range of thresholds. The composite score was implemented in our program MODELLER-8 and was used to assess models in the MODBASE database that contains comparative models for domains in approximately 1.3 million protein sequences.
do Nascimento, Ticiano Gomes; de Jesus Oliveira, Eduardo; Basílio Júnior, Irinaldo Diniz; de Araújo-Júnior, João Xavier; Macêdo, Rui Oliveira
2013-01-25
A limited number of studies applying the Arrhenius equation to drugs and biopharmaceuticals in biological fluids at frozen temperatures have been reported. This paper describes stability studies of ampicillin and cephalexin in aqueous solution and human plasma applying the Arrhenius law for determination of adequate temperature and time of storage of these drugs, using appropriate statistical analysis. Stability studies of the beta-lactams in human plasma were conducted at temperatures of 20°C, 2°C, and -20°C and also during four cycles of freeze-thawing. Chromatographic separation was achieved using a Shimpak C(18) column, acetonitrile as organic modifier, and detection at 215 nm. LC-UV-MS/MS was used to demonstrate the conversion of ampicillin into two diastereomeric forms of ampicilloic acid. Stability studies demonstrated degradation greater than 10% for ampicillin in human plasma at 20°C, 2°C and -20°C after 15 h, 2.7 days, and 11 days, and for cephalexin at the same temperatures after 14 h, 3.4 days, and 19 days, respectively, and after the fourth cycle of freezing-thawing. The Arrhenius plot showed good prediction of the ideal temperature and time of storage for ampicillin (52 days) and cephalexin (151 days) at a temperature of -40°C, but statistical analysis (least squares method) must be applied to avoid incorrect extrapolations and estimated values outside uncertainty limits. Copyright © 2012 Elsevier B.V. All rights reserved.
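The extrapolation step above (fit ln k against 1/T by least squares, then predict the rate at a colder storage temperature) can be sketched as follows. The rate constants here are generated from an assumed Arrhenius law with invented parameters; they are not the paper's measurements:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical first-order degradation rate constants (1/day) at the
# three study temperatures, generated from assumed A and Ea values:
A_factor, Ea = 1e10, 8.0e4          # pre-exponential (1/day), J/mol
temps_k = [293.15, 275.15, 253.15]  # 20, 2, -20 degrees C
ks = [A_factor * math.exp(-Ea / (R * T)) for T in temps_k]

# Least-squares fit of ln k = ln A - (Ea/R) * (1/T):
xs = [1.0 / T for T in temps_k]
ys = [math.log(k) for k in ks]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def shelf_life_days(T):
    """Days until 10% first-order loss at absolute temperature T (K)."""
    k = math.exp(intercept + slope / T)
    return math.log(1 / 0.9) / k

t_frozen = shelf_life_days(233.15)  # -40 degrees C
t_room = shelf_life_days(293.15)    # 20 degrees C
```

The paper's caution applies here: the fit alone gives point estimates, and a proper analysis must also propagate the uncertainty of slope and intercept before trusting the extrapolated shelf life.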
Rangel, Rafael Henrique; Möller, Leona; Sitter, Helmut; Stibane, Tina; Strzelczyk, Adam
2017-11-01
Multiple-choice questions (MCQs) provide useful information about correct and incorrect answers, but they do not offer information about students' confidence. Ninety and 81 medical students, respectively, each took part in one of two curricular neurology multiple-choice exams and indicated their confidence for every single MCQ. Each MCQ had a defined level of potential clinical impact on patient safety (uncritical, risky, harmful). Our first objective was to detect informed (IF), guessed (GU), misinformed (MI), and uninformed (UI) answers. Further, we evaluated whether there were significant differences in confidence between correct and incorrect answers. Then, we explored whether clinical impact had a significant influence on students' confidence. There were 1818 IF, 635 GU, 71 MI, and 176 UI answers in exam I and 1453 IF, 613 GU, 92 MI, and 191 UI answers in exam II. Students' confidence was significantly higher for correct than for incorrect answers at both exams (p < 0.001). For exam I, students' confidence was significantly higher for incorrect harmful than for incorrect risky classified MCQs (p = 0.01). At exam II, students' confidence was significantly higher for incorrect harmful than for incorrect benign (p < 0.01) and significantly higher for correct benign than for correct harmful categorized MCQs (p = 0.01). We were pleased to see that there were more informed than guessed answers, more uninformed than misinformed answers, and higher students' confidence for correct than for incorrect answers. Our expectation that students state higher confidence in correct and harmful and lower confidence in incorrect and harmful MCQs could not be confirmed.
Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.
Houston, Lauren; Probst, Yasmine; Humphries, Allison
2015-01-01
Health data has long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed, whereby if >5% of data variables were incorrect a second 10% random sample would be extracted from the trial data set. Error was coded: correct, incorrect (valid or invalid), not recorded or not entered. Audit-1 had a total error of 33% and audit-2 36%. The physiological section was the only audit section to have <5% error. Data not recorded to case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data was deemed correct or incorrect. Our study developed a straightforward method to perform a SDV audit. An audit rule was identified and error coding was implemented. Findings demonstrate that monitoring data quality by a SDV audit can identify data quality and integrity issues within clinical research settings allowing quality improvement to be made. The authors suggest this approach be implemented for future research.
Liu, Zechang; Wang, Liping; Liu, Yumei
2018-01-18
Hops impart flavor to beer, with the volatile components characterizing the various hop varieties and qualities. Fingerprinting, especially flavor fingerprinting, is often used to identify 'flavor products' because inconsistencies in the description of flavor may lead to an incorrect definition of beer quality. Compared to flavor fingerprinting, volatile fingerprinting is simpler and easier. We performed volatile fingerprinting using head space-solid phase micro-extraction gas chromatography-mass spectrometry combined with similarity analysis and principal component analysis (PCA) for evaluating and distinguishing between three major Chinese hops. Eighty-four volatiles were identified, which were classified into seven categories. Volatile fingerprinting based on similarity analysis did not yield any obvious result. By contrast, hop varieties and qualities were identified using volatile fingerprinting based on PCA. The potential variables explained the variance in the three hop varieties. In addition, the dendrogram and principal component score plot described the differences and classifications of hops. Volatile fingerprinting plus multivariate statistical analysis can rapidly differentiate between the different varieties and qualities of the three major Chinese hops. Furthermore, this method can be used as a reference in other fields. © 2018 Society of Chemical Industry. © 2018 Society of Chemical Industry.
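The PCA step described in the abstract above can be sketched in a few lines. The volatile-abundance matrix below is synthetic toy data (the compounds and varieties are invented, not measurements from the study); it only illustrates how principal component scores computed from a centered data matrix via the SVD can separate sample groups.

```python
import numpy as np

# Toy volatile-abundance matrix: 3 "varieties" x 5 replicate samples each,
# 4 volatile compounds. Illustrative values, not data from the cited paper.
rng = np.random.default_rng(0)
variety_means = np.array([[5.0, 1.0, 0.5, 2.0],
                          [1.0, 4.0, 2.0, 0.5],
                          [2.0, 2.0, 5.0, 1.0]])
X = np.vstack([m + 0.2 * rng.standard_normal((5, 4)) for m in variety_means])

# PCA: center the matrix and take its SVD; the principal component scores
# are the projections of the centered samples onto the right singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                 # sample coordinates in PC space
explained = s**2 / np.sum(s**2)    # fraction of variance per component
print("variance explained by PC1+PC2:", explained[:2].sum())
```

In a score plot of the first two components, the three simulated groups fall into distinct clusters, which is the kind of separation the fingerprinting approach relies on to distinguish hop varieties.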
Trends in added sugar supply and consumption in Australia: there is an Australian Paradox.
Barclay, Alan W; Brand-Miller, Jennie C
2013-09-30
In 2011, Barclay and Brand-Miller reported the observation that trends in refined sugar consumption in Australia were the inverse of trends in overweight and obesity (The Australian Paradox). Rikkers et al. claim that the Australian Paradox is based on incomplete data because the sources utilised did not incorporate estimates for imported processed foods. This assertion is incorrect. Indeed, national nutrition surveys, sugar consumption data from the United Nations Food and Agricultural Organisation, the Australian Bureau of Statistics and Australian beverage industry data all incorporated data on imported products.
NASA Technical Reports Server (NTRS)
Frehlich, Rod; Kavaya, Michael J.
2000-01-01
The explanation proposed by Belmonte and Rye for the difference between simulation and the zero-order theory for heterodyne lidar returns in a turbulent atmosphere is incorrect. The theoretical expansion is not developed under a square-law structure function approximation (random wedge atmosphere). Agreement between the simulations and the zero-order term of the theoretical expansion is produced in the limit of statistically independent paths (bi-static operation with large transmitter-receiver separation) when the simulations correctly include the large-scale gradients of the turbulent atmosphere.
Adolescents: Contraceptive Knowledge and Use, a Brazilian Study
Correia, Divanise S.; Pontes, Ana C. P.; Cavalcante, Jairo C.; Egito, E. Sócrates T.; Maia, Eulália M.C.
2009-01-01
The purpose of this study was to identify the knowledge and use of contraceptive methods by female adolescent students. The study was cross-sectional and quantitative, using a semi-structured questionnaire that was administered to 12- to 19-year-old female students in Maceió, Brazil. A representative and randomized sample was calculated, taking into account the number of hospital admissions for curettage. This study was approved by the Human Research Ethics Committee, and Epi InfoTM software was used for data and result evaluation using the mean and chi-square statistical test. Our results show that the majority of students know of some contraceptive methods (95.5%), with the barrier/hormonal methods being the most mentioned (72.4%). Abortion and aborting drugs were inaccurately described as contraceptives, and 37.9% of the sexually active girls did not make use of any method. The barrier methods were the most used (35.85%). A significant association was found in the total sample (2,592) between pregnancy and the use of any contraceptive method. This association was not found, however, in the group having an active sexual life (559). The study points to a knowledge of contraceptive methods, especially by teenagers who have already been pregnant, but contraceptives were not adequately used. The low use of chemical methods of contraception brings the risk of pregnancy. Since abortion and aborting drugs were incorrectly cited as contraceptive methods, this implies a nonpreventive attitude towards pregnancy. PMID:19151897
Babatunde, Oluwole Adeyemi; Ibirongbe, Demilade Olusola; Omede, Owen; Babatunde, Olubukola Oluwakemi; Durowade, Kabir Adekunle; Salaudeen, Adekunle Ganiyu; Akande, Tanimola Makanjuola
2016-01-01
Unintended pregnancy and unsafe abortion pose a major reproductive health challenge to adolescents. Emergency contraception is safe and effective in preventing unplanned pregnancy. The objective of this study was to assess students' knowledge and use of emergency contraception. This cross-sectional study was carried out in Ilorin, Nigeria, using a multi-stage sampling method. Data were collected using a pre-tested, semi-structured, self-administered questionnaire. Knowledge was scored and analysed. SPSS version 21.0 was used for data analysis. A p-value <0.05 was considered statistically significant. Overall, 27.8% of the respondents had good knowledge of emergency contraception. The majority of respondents (87.2%) had never used emergency contraception. Most of those who had ever used emergency contraception (85.7%) used it incorrectly, taking it more than 72 hours after sexual intercourse (p=0.928). Knowledge of emergency contraception and prevalence of use were low. Contraceptive education should be introduced early in the school curriculum for adolescents.
Bourne, Roger; Himmelreich, Uwe; Sharma, Ansuiya; Mountford, Carolyn; Sorrell, Tania
2001-01-01
A new fingerprinting technique with the potential for rapid identification of bacteria was developed by combining proton magnetic resonance spectroscopy (1H MRS) with multivariate statistical analysis. This resulted in an objective identification strategy for common clinical isolates belonging to the bacterial species Staphylococcus aureus, Staphylococcus epidermidis, Enterococcus faecalis, Streptococcus pneumoniae, Streptococcus pyogenes, Streptococcus agalactiae, and the Streptococcus milleri group. Duplicate cultures of 104 different isolates were examined one or more times using 1H MRS. A total of 312 cultures were examined. An optimized classifier was developed using a bootstrapping process and a seven-group linear discriminant analysis to provide objective classification of the spectra. Identification of isolates was based on consistent high-probability classification of spectra from duplicate cultures and achieved 92% agreement with conventional methods of identification. Fewer than 1% of isolates were identified incorrectly. Identification of the remaining 7% of isolates was defined as indeterminate. PMID:11474013
Durning, Steven J; Graner, John; Artino, Anthony R; Pangaro, Louis N; Beckman, Thomas; Holmboe, Eric; Oakes, Terrance; Roy, Michael; Riedy, Gerard; Capaldi, Vincent; Walter, Robert; van der Vleuten, Cees; Schuwirth, Lambert
2012-09-01
Clinical reasoning is essential to medical practice, but because it entails internal mental processes, it is difficult to assess. Functional magnetic resonance imaging (fMRI) and think-aloud protocols may improve understanding of clinical reasoning as these methods can more directly assess these processes. The objective of our study was to use a combination of fMRI and think-aloud procedures to examine fMRI correlates of a leading theoretical model in clinical reasoning based on experimental findings to date: analytic (i.e., actively comparing and contrasting diagnostic entities) and nonanalytic (i.e., pattern recognition) reasoning. We hypothesized that there would be functional neuroimaging differences between analytic and nonanalytic reasoning. Seventeen board-certified experts in internal medicine answered and reflected on validated U.S. Medical Licensing Exam and American Board of Internal Medicine multiple-choice questions (easy and difficult) during an fMRI scan. This procedure was followed by completion of a formal think-aloud procedure. fMRI findings provide some support for the presence of analytic and nonanalytic reasoning systems. Statistically significant activation of prefrontal cortex distinguished answering incorrectly versus correctly (p < 0.01), whereas activation of precuneus and midtemporal gyrus distinguished not guessing from guessing (p < 0.01). We found limited fMRI evidence to support analytic and nonanalytic reasoning theory, as our results indicate functional differences with correct vs. incorrect answers and guessing vs. not guessing. However, our findings did not suggest one consistent fMRI activation pattern of internal medicine expertise. This model of employing fMRI correlates offers opportunities to enhance our understanding of theory, as well as improve our teaching and assessment of clinical reasoning, a key outcome of medical education.
Milnthorpe, Andrew T; Soloviev, Mikhail
2011-04-15
The Cancer Genome Anatomy Project (CGAP) xProfiler and cDNA Digital Gene Expression Displayer (DGED) were made available to the scientific community over a decade ago and have since been widely used to find genes that are differentially expressed between cancer and normal tissues. The tissue types are usually chosen according to the ontology hierarchy developed by NCBI. The xProfiler uses an internally available flat file database to determine the presence or absence of genes in the chosen libraries, while cDNA DGED uses the publicly available UniGene Expression and Gene relational databases to count the sequences found for each gene in the presented libraries. We discovered that the CGAP approach often includes libraries from dependent or irrelevant tissues (one third of libraries were incorrect on average, and for some tissue searches no correct libraries were selected at all). We also discovered that the CGAP approach reported genes from outside the selected libraries and may omit genes found within the libraries. Other errors include incorrect estimation of the significance values and inaccurate settings for the library size cut-off values. We advocate a revised approach to finding libraries associated with tissues, under which libraries from dependent or irrelevant tissues do not get included in the final library pool. We also revised the method for determining the presence or absence of a gene by searching the UniGene relational database, revised the calculation of statistical significance and corrected the library cut-off filter. Our results justify re-evaluation of all previously reported results where NCBI CGAP expression data and tools were used.
Editorial: Bayesian benefits for child psychology and psychiatry researchers.
Oldehinkel, Albertine J
2016-09-01
For many scientists, performing statistical tests has become an almost automated routine. However, p-values are frequently used and interpreted incorrectly; and even when used appropriately, p-values tend to provide answers that do not match researchers' questions and hypotheses well. Bayesian statistics present an elegant and often more suitable alternative. The Bayesian approach has rarely been applied in child psychology and psychiatry research so far, but the development of user-friendly software packages and tutorials has placed it well within reach now. Because Bayesian analyses require a more refined definition of hypothesized probabilities of possible outcomes than the classical approach, going Bayesian may offer the additional benefit of sparking the development and refinement of theoretical models in our field. © 2016 Association for Child and Adolescent Mental Health.
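The mismatch between p-values and the question researchers usually want answered can be made concrete with a toy binomial example: a result that looks borderline by a two-sided test can correspond to a Bayes factor near 1, i.e. essentially no evidence either way. The numbers below are illustrative only, not drawn from any study cited here.

```python
from math import comb

# Toy data: 60 successes in 100 trials; null hypothesis p = 0.5.
n, k = 100, 60

def binom_pmf(i):
    """Probability of i successes in n trials when p = 0.5."""
    return comb(n, i) * 0.5**n

# Classical two-sided p-value: total probability of outcomes at least
# as extreme (i.e. no more probable under the null) than the observed one.
p_value = sum(binom_pmf(i) for i in range(n + 1)
              if binom_pmf(i) <= binom_pmf(k))

# Bayes factor for H0: p = 0.5 versus H1: p ~ Uniform(0, 1).
# Under a uniform prior, the marginal likelihood of k successes is 1/(n+1).
m_h0 = binom_pmf(k)
m_h1 = 1.0 / (n + 1)
bf01 = m_h0 / m_h1  # >1 favors the null, <1 favors the alternative

print(f"p-value = {p_value:.3f}, BF01 = {bf01:.2f}")
```

Here the p-value hovers near the conventional 0.05 threshold while the Bayes factor is close to 1, illustrating how the two quantities answer different questions about the same data.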
ERIC Educational Resources Information Center
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.
2009-01-01
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
A New Look at an Old Work Problem
ERIC Educational Resources Information Center
Waits, Bert K.; Silver, Jerry L.
1973-01-01
Two approaches are discussed for calculating the work necessary to pump water from a conical or parabolic container. The direct method derived from the definition of work is easy to misuse, as illustrated by a student's incorrect solution. (JP)
42 CFR 489.40 - Definition of incorrect collection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 5 2011-10-01 2011-10-01 false Definition of incorrect collection. 489.40 Section 489.40 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION PROVIDER AGREEMENTS AND SUPPLIER APPROVAL Handling of Incorrect...
42 CFR 489.40 - Definition of incorrect collection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Definition of incorrect collection. 489.40 Section 489.40 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION PROVIDER AGREEMENTS AND SUPPLIER APPROVAL Handling of Incorrect...
Donald Gagliasso; Susan Hummel; Hailemariam Temesgen
2014-01-01
Various methods have been used to estimate the amount of above ground forest biomass across landscapes and to create biomass maps for specific stands or pixels across ownership or project areas. Without an accurate estimation method, land managers might end up with incorrect biomass estimate maps, which could lead them to make poorer decisions in their future...
Investigation of Missing Responses in Implementation of Cognitive Diagnostic Models
ERIC Educational Resources Information Center
Dai, Shenghai
2017-01-01
This dissertation is aimed at investigating the impact of missing data and evaluating the performance of five selected methods for handling missing responses in the implementation of Cognitive Diagnostic Models (CDMs). The five methods are: a) treating missing data as incorrect (IN), b) person mean imputation (PM), c) two-way imputation (TW), d)…
Retreatment Predictions in Odontology by means of CBR Systems.
Campo, Livia; Aliaga, Ignacio J; De Paz, Juan F; García, Alvaro Enrique; Bajo, Javier; Villarubia, Gabriel; Corchado, Juan M
2016-01-01
The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure such as endodontic therapy, for example, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determining whether dental retreatment is required. In this work, we present a decision support system which applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. Thus, the system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent. False negatives refer to a prediction favoring a retreatment when in fact it would be ineffective. The methods are combined by solving an optimization problem that reduces incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising.
78 FR 33204 - Airworthiness Directives; Bell Helicopter Textron, Inc. (Bell) Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-04
... manufactured seal material is installed on the bearing. This AD is prompted by a report that certain bearings were manufactured with an incorrect seal material that does not meet Bell specifications. The actions... June 2012 were manufactured with incorrect seal material. The incorrect seal material does not meet...
26 CFR 31.3406-0 - Outline of the backup withholding regulations.
Code of Federal Regulations, 2013 CFR
2013-04-01
... incorrect name/TIN combination. (2) Definition of account. (3) Definition of business day. (4) Certain exceptions. (c) Notice regarding an incorrect name/TIN combination. (1) In general. (2) Additional... of receipt. (d) Notice from payors of backup withholding due to an incorrect name/TIN combination. (1...
78 FR 47527 - Airworthiness Directives; Dassault Aviation Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... and correct an incorrect angle signal causing an un-commanded nose wheel deflection, which could... incorrect angle signal resulting in un-commanded nose wheel deflection which could not be countered by the... adoption of this rule because an incorrect angle signal causing an un-commanded nose wheel deflection could...
Siregar, S; Pouw, M E; Moons, K G M; Versteegh, M I M; Bots, M L; van der Graaf, Y; Kalkman, C J; van Herwerden, L A; Groenwold, R H H
2014-01-01
Objective To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted from The Netherlands Association for Cardio-Thoracic Surgery database and the Hospital Discharge Registry. The number of cardiac surgery interventions was compared between both databases. The European System for Cardiac Operative Risk Evaluation and hospital standardised mortality ratio models were updated in the study population and compared using the C-statistic, calibration plots and the Brier score. Results The number of cardiac surgery interventions performed could not be assessed using the administrative database as the intervention code was incorrect in 1.4–26.3%, depending on the type of intervention. In 7.3% no intervention code was registered. The updated administrative model was inferior to the updated clinical model with respect to discrimination (C-statistic of 0.77 vs 0.85, p<0.001) and calibration (Brier score of 2.8% vs 2.6%, p<0.001; maximum score 3.0%). Two average performing hospitals according to the clinical model became outliers when benchmarking was performed using the administrative model. Conclusions In cardiac surgery, administrative data are less suitable than clinical data for the purpose of benchmarking. The use of either administrative or clinical risk-adjustment models can affect the outlier status of hospitals. Risk-adjustment models including procedure-specific clinical risk factors are recommended. PMID:24334377
Bansal, Ravi; Peterson, Bradley S
2018-06-01
Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in these multiple hypotheses testing. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated distributions from those large clusters, and therefore, by construction, rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.
Bakker, E W M; Visser, K; van der Wal, A; Kuiper, M A; Koopmans, M; Breedveld, R
2012-09-01
The primary goal of this observational clinical study was to register the occurrence of incorrect inflation and deflation timing of an intra-aortic balloon pump in autoPilot mode. The secondary goal was to identify possible causes of incorrect timing. During IABP assistance of 60 patients, every four hours a strip was printed with the IABP frequency set to 1:2. Strips were examined for timing discrepancies beyond 40 ms from the dicrotic notch (inflation) and the end of the diastolic phase (deflation). In this way, 320 printed strips were examined. A total of 52 strips (16%) showed incorrect timing. On 24 of these strips, the incorrect timing was called incidental, as it showed on only one or a few beats. The other 28 cases of erroneous timing were called consistent, as more than 50% of the beats on the strip showed incorrect timing. We observed arrhythmia in 69% of all cases of incorrect timing. When timing was correct, arrhythmia was found on 13 (5%) of 268 strips. A poor quality electrocardiograph (ECG) signal showed on 37% of all strips with incorrect timing and 11% of all strips with proper timing. We conclude that inflation and deflation timing of the IABP is not always correct when using the autoPilot mode. The quality of the ECG input signal and the occurrence of arrhythmia appear to be related to erroneous timing. Switching from autoPilot mode to operator mode may not always prevent incorrect timing.
McTavish, Emily Jane; Steel, Mike; Holder, Mark T
2015-12-01
Statistically consistent estimation of phylogenetic trees or gene trees is possible if pairwise sequence dissimilarities can be converted to a set of distances that are proportional to the true evolutionary distances. Susko et al. (2004) reported some strikingly broad results about the forms of inconsistency in tree estimation that can arise if corrected distances are not proportional to the true distances. They showed that if the corrected distance is a concave function of the true distance, then inconsistency due to long branch attraction will occur. If these functions are convex, then two "long branch repulsion" trees will be preferred over the true tree - though these two incorrect trees are expected to be tied as the preferred tree. Here we extend their results, and demonstrate the existence of a tree shape (which we refer to as a "twisted Farris-zone" tree) for which a single incorrect tree topology will be guaranteed to be preferred if the corrected distance function is convex. We also report that the standard practice of treating gaps in sequence alignments as missing data is sufficient to produce non-linear corrected distance functions if the substitution process is not independent of the insertion/deletion process. Taken together, these results imply inconsistent tree inference under mild conditions. For example, if some positions in a sequence are constrained to be free of substitutions and insertion/deletion events while the remaining sites evolve with independent substitutions and insertion/deletion events, then the distances obtained by treating gaps as missing data can support an incorrect tree topology even given an unlimited amount of data. Copyright © 2015 Elsevier Inc. All rights reserved.
Support for Struggling Students in Algebra: Contributions of Incorrect Worked Examples
ERIC Educational Resources Information Center
Barbieri, Christina; Booth, Julie L.
2016-01-01
Middle school algebra students (N = 125) randomly assigned within classroom to a Problem-solving control group, a Correct worked examples control group, or an Incorrect worked examples group, completed an experimental classroom study to assess the differential effects of incorrect examples versus the two control groups on students' algebra…
78 FR 56150 - Airworthiness Directives; Piper Aircraft, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
..., PA-46-350P, PA-46R-350T, and PA-46-500TP airplanes. There is an incorrect reference to a paragraph designation, four instances of an incorrect reference to the paragraph in the service bulletin that references... instances of an incorrect reference to the paragraph in the service bulletin that references an...
Accurate protein structure modeling using sparse NMR data and homologous structure information.
Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David
2012-06-19
While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining ¹HN, ¹³C, and ¹⁵N backbone and ¹³Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.
Sexual dimorphism in teeth? Clinical relevance.
Radlanski, Ralf J; Renz, Herbert; Hopfenmüller, Werner
2012-04-01
Many morphometric studies show a sexual dimorphism in human teeth. We wanted to know whether it is possible to determine the sex of an individual if only the anterior teeth are visible. Fifty intraoral photographs showing the front tooth region of female and male individuals (age: from 7 to 75 years) were randomly arranged in actual size on a questionnaire. The lip region was covered in each case. Besides "female" and "male", one was also able to check "?" if undecided. The questionnaires were distributed to 50 expert test persons (dentists, dental technicians, dental assistants, and students of dental medicine) and to 50 laymen and were all returned for evaluation. Although the sex was recognized correctly on some photographs by up to 76% of respondents, on other photographs it was judged incorrectly by up to 69%. Altogether, the statistical evaluation showed that in most cases the sex was recognized correctly by one half of the respondents and incorrectly by the other half. It can be concluded that a sexual dimorphism of human teeth, although measurable morphometrically, could not be recognized visually on the basis of photographs of the front tooth region. Neither experts in the field of dentistry nor laymen were able to properly distinguish between male and female teeth.
Exploring the Relationship Between Eye Movements and Electrocardiogram Interpretation Accuracy
NASA Astrophysics Data System (ADS)
Davies, Alan; Brown, Gavin; Vigo, Markel; Harper, Simon; Horseman, Laura; Splendiani, Bruno; Hill, Elspeth; Jay, Caroline
2016-12-01
Interpretation of electrocardiograms (ECGs) is a complex task involving visual inspection. This paper aims to improve understanding of how practitioners perceive ECGs, and determine whether visual behaviour can indicate differences in interpretation accuracy. A group of healthcare practitioners (n = 31) who interpret ECGs as part of their clinical role were shown 11 commonly encountered ECGs on a computer screen. The participants’ eye movement data were recorded as they viewed the ECGs and attempted interpretation. The Jensen-Shannon distance was computed between two Markov chains, constructed from the transition matrices (visual shifts from and to ECG leads) of the correct and incorrect interpretation groups for each ECG. A permutation test was then used to compare this distance against 10,000 randomly shuffled groups made up of the same participants. The results demonstrated a statistically significant (α = 0.05) result in 5 of the 11 stimuli, demonstrating that the gaze shift between the ECG leads differs between the groups making correct and incorrect interpretations and is therefore a factor in interpretation accuracy. The results shed further light on the relationship between visual behaviour and ECG interpretation accuracy, providing information that can be used to improve both human and automated interpretation approaches.
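The analysis described above can be sketched as follows. This is a hedged reconstruction rather than the authors' code: per-participant lead-to-lead transition count matrices are assumed as input, each group's matrices are pooled into a single transition distribution, and group labels are shuffled for the permutation test; the pooling scheme is an assumption.

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    # Jensen-Shannon distance: square root of the JS divergence (base 2)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        a = np.clip(a, eps, None)
        b = np.clip(b, eps, None)
        return np.sum(a * np.log2(a / b))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def group_distribution(mats):
    # pool per-participant lead-to-lead transition counts into one distribution
    return np.sum(mats, axis=0).ravel()

def permutation_test(correct_mats, incorrect_mats, n_perm=10_000, seed=0):
    # observed distance between the two groups, compared against distances
    # obtained after randomly reassigning participants to groups
    rng = np.random.default_rng(seed)
    observed = js_distance(group_distribution(correct_mats),
                           group_distribution(incorrect_mats))
    pooled = np.concatenate([correct_mats, incorrect_mats])
    n_c = len(correct_mats)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        d = js_distance(group_distribution(pooled[idx[:n_c]]),
                        group_distribution(pooled[idx[n_c:]]))
        if d >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)
```

The returned p-value uses the usual (count + 1)/(n_perm + 1) correction, so it is never exactly zero.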
Gal-Nadasan, Norbert; Gal-Nadasan, Emanuela Georgiana; Stoicu-Tivadar, Vasile; Poenaru, Dan V; Popa-Andrei, Diana
2017-01-01
This paper suggests the usage of the Microsoft Kinect to detect the onset of scoliosis in high school students caused by incorrect sitting positions. The measurement is done by measuring the overall posture in orthostatic position using the Microsoft Kinect. During the measuring process several key points of the human body, such as the hips and shoulders, are tracked to form the postural data. The test was done on 30 high school students who spend 6 to 7 hours per day at school desks. The postural data is statistically processed by IBM Watson Analytics. The statistical analysis showed that a prolonged sitting position at such a young age negatively affects the spinal column and facilitates the appearance of harmful postures such as scoliosis and lordosis.
Vranken, Marjolein J M; Mantel-Teeuwisse, Aukje K; Jünger, Saskia; Radbruch, Lukas; Lisman, John; Scholten, Willem; Payne, Sheila; Lynch, Tom; Schutjens, Marie-Hélène D B
2014-12-01
Overregulation of controlled medicines is one of the factors contributing to limited access to opioid medicines. The purpose of this study was to identify legal barriers to access to opioid medicines in 12 Eastern European countries participating in the Access to Opioid Medication in Europe project, using a quick scan method. A quick scan method to identify legal barriers was developed focusing on eight different categories of barriers. Key experts in 12 European countries were requested to send relevant legislation. Legislation was quick scanned using World Health Organization guidelines. Overly restrictive provisions and provisions that contain stigmatizing language and incorrect definitions were identified. The selected provisions were scored into two categories: 1) barrier and 2) uncertain, and reviewed by two authors. A barrier was recorded if both authors agreed the selected provision to be a barrier (Category 1). National legislation was obtained from 11 of 12 countries. All 11 countries showed legal barriers in the areas of prescribing (most frequently observed barrier). Ten countries showed barriers in the areas of dispensing and showed stigmatizing language and incorrect use of definitions in their legislation. Most barriers were identified in the legislation of Bulgaria, Greece, Lithuania, Serbia, and Slovenia. The Cypriot legislation showed the fewest total number of barriers. The selected countries have in common as main barriers prescribing and dispensing restrictions, the use of stigmatizing language, and incorrect use of definitions. The practical impact of these barriers identified using a quick scan method needs to be validated by other means. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Assembly of greek marble inscriptions by isotopic methods.
Herz, N; Wenner, D B
1978-03-10
Classical Greek inscriptions cut in marble, whose association as original stelai by archeological methods was debatable, were selected for study. Using traditional geological techniques and determinations of the per mil increments in carbon-13 and oxygen-18, it was determined that fragments could be positively assigned to three stelai, but that fragments from three other stelai had been incorrectly associated.
A Method for Imputing Response Options for Missing Data on Multiple-Choice Assessments
ERIC Educational Resources Information Center
Wolkowitz, Amanda A.; Skorupski, William P.
2013-01-01
When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…
A Hybrid Approach to Data Assimilation for Reconstructing the Evolution of Mantle Dynamics
NASA Astrophysics Data System (ADS)
Zhou, Quan; Liu, Lijun
2017-11-01
Quantifying past mantle dynamic processes represents a major challenge in understanding the temporal evolution of the solid earth. Mantle convection modeling with data assimilation is one of the most powerful tools to investigate the dynamics of plate subduction and mantle convection. Although various data assimilation methods, both forward and inverse, have been created, these methods all have limitations in their capabilities to represent the real earth. Pure forward models tend to miss important mantle structures due to the incorrect initial condition and thus may lead to incorrect mantle evolution. In contrast, pure tomography-based models cannot effectively resolve the fine slab structure and would fail to predict important subduction-zone dynamic processes. Here we propose a hybrid data assimilation approach that combines the unique power of the sequential and adjoint algorithms, which can properly capture the detailed evolution of the downgoing slab and the tomographically constrained mantle structures, respectively. We apply this new method to reconstructing mantle dynamics below the western U.S. while considering large lateral viscosity variations. By comparing this result with those from several existing data assimilation methods, we demonstrate that the hybrid modeling approach best recovers the realistic 4-D mantle dynamics.
NASA Astrophysics Data System (ADS)
Heranudin; Bakhri, S.
2018-02-01
A linear accelerator (linac) is widely used as a means of radiotherapy by focusing high-energy photons on the targeted tumor of a patient. Incorrect beam delivery can cause normal tissue surrounding the tumor to receive unnecessary radiation and become damaged. A method is required to minimize this incorrectness, which is mostly caused by movement of the patient during the radiotherapy process. In this paper, the Wireless Identification and Sensing Platform (WISP) architecture was employed to monitor in real time the movement of the patient’s body during the radiotherapy process. In general, the WISP is a wearable sensor device that can transmit measurement data wirelessly. In this design, the measurement devices consist of an accelerometer, a barometer and an ionizing radiation sensor. If the body position changes in a way that results in incorrect beam delivery, the accelerometer and the barometer will trigger a warning to the linac operator. In addition, the radiation sensor in the WISP will detect unwanted radiation that can endanger the patient. The wireless feature of this device eases implementation. Initial analyses have been performed and showed that the WISP is feasible to be applied in external beam radiotherapy.
A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.
Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa
2016-05-17
Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are true matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling based method to estimate both precision and recall following record linkage. In the sampling based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
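A minimal sketch of the sampling estimator described above, assuming the paper's setup but not reproducing its code: record-pairs are grouped by score threshold, a sample from each group is "clerically reviewed" (here a caller-supplied function standing in for human review), and the per-group match rates are scaled up to estimate true positives, false positives, and false negatives. The bin structure and function names are illustrative.

```python
import random

def estimate_linkage_quality(bins, cutoff, sample_size, review, seed=0):
    """bins: dict mapping a score threshold to the list of record-pairs
    at that threshold (including scores below the acceptance cutoff).
    review(pair) -> True if the pair is a true match (clerical review stand-in).
    Returns estimated (precision, recall)."""
    rng = random.Random(seed)
    tp = fp = fn = 0.0
    for score, pairs in bins.items():
        sample = pairs if len(pairs) <= sample_size else rng.sample(pairs, sample_size)
        match_rate = sum(review(p) for p in sample) / len(sample)
        n = len(pairs)
        if score >= cutoff:            # accepted links
            tp += match_rate * n
            fp += (1 - match_rate) * n
        else:                          # rejected pairs: true matches here are misses
            fn += match_rate * n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall
```

When `sample_size` covers every bin completely, the estimates reduce to the exact precision and recall of the linkage; smaller samples trade clerical effort for sampling error.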
A statistical approach for inferring the 3D structure of the genome.
Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe
2014-06-15
Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies to spatial distance, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distances between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms (two metric MDS methods using different stress functions, a non-metric version of MDS, and ChromSDE, a recently described advanced MDS method) on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function.
On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.
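The count model at the heart of the method can be illustrated in a toy form: contact counts are Poisson with intensity β·d^α (α < 0), so the transfer function maps spatial distance to expected count and can be inverted. This sketch only demonstrates that relationship; it is not the published implementation, and the parameter values are illustrative defaults.

```python
import numpy as np

def pairwise_distances(X):
    # Euclidean distances between all rows of an (n, 3) coordinate matrix
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def expected_counts(X, alpha=-3.0, beta=1.0):
    # Poisson intensity lambda_ij = beta * d_ij ** alpha (alpha < 0:
    # contact frequency decays with spatial distance)
    d = pairwise_distances(X)
    with np.errstate(divide="ignore"):
        lam = beta * d ** alpha
    np.fill_diagonal(lam, 0.0)
    return lam

def counts_to_distances(C, alpha=-3.0, beta=1.0):
    # invert the transfer function: d_ij = (c_ij / beta) ** (1 / alpha)
    with np.errstate(divide="ignore"):
        d = (C / beta) ** (1.0 / alpha)
    np.fill_diagonal(d, 0.0)
    return d
```

Given noise-free expected counts, inverting the transfer function recovers the pairwise distances exactly; with real Poisson-noisy counts, the paper instead maximizes the Poisson likelihood over the 3D coordinates (and, in one variant, over α and β as well).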
Chatrchyan, Serguei
2015-05-19
Table 4 was incorrectly captioned in the originally published version. The correct caption is ‘Normalised differential tt̄ production cross section as a function of the number of additional jets with pT > 30 GeV in the lepton+jets channel. Furthermore, the statistical, systematic, and total uncertainties are also shown. Finally, the main experimental and model systematic uncertainties are displayed: JES and the combination of renormalisation and factorisation scales, jet-parton matching threshold, and hadronisation (in the table “Q²/Match./Had.”)’.
ERRATUM: 'MAPPING THE GAS TURBULENCE IN THE COMA CLUSTER: PREDICTIONS FOR ASTRO-H'
NASA Technical Reports Server (NTRS)
Zuhone, J. A.; Markevitch, M.; Zhuravleva, I.
2016-01-01
The published version of this paper contained an error in Figure 5. This figure is intended to show the effect on the structure function of subtracting the bias induced by the statistical and systematic errors on the line shift. The filled circles show the bias-subtracted structure function. The positions of these points in the left panel of the original figure were calculated incorrectly. The figure is reproduced below (with the original caption) with the correct values for the bias-subtracted structure function. No other computations or figures in the original manuscript are affected.
Trends in added sugar supply and consumption in Australia: there is an Australian Paradox
2013-01-01
In 2011, Barclay and Brand-Miller reported the observation that trends in refined sugar consumption in Australia were the inverse of trends in overweight and obesity (The Australian Paradox). Rikkers et al. claim that the Australian Paradox is based on incomplete data because the sources utilised did not incorporate estimates for imported processed foods. This assertion is incorrect. Indeed, national nutrition surveys, sugar consumption data from the United Nations Food and Agricultural Organisation, the Australian Bureau of Statistics and Australian beverage industry data all incorporated data on imported products. PMID:24079329
Technetium phosphate bone scan in the diagnosis of septic arthritis in childhood
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundberg, S.B.; Savage, J.P.; Foster, B.K.
1989-09-01
The technetium phosphate bone scans of 106 children with suspected septic arthritis were reviewed to determine whether the bone scan can accurately differentiate septic from nonseptic arthropathy. Only 13% of children with proved septic arthritis had correct blind scan interpretation. The clinically adjusted interpretation did not identify septic arthritis in 30%. Septic arthritis was incorrectly identified in 32% of children with no evidence of septic arthritis. No statistically significant differences were noted between the scan findings in the septic and nonseptic groups and no scan findings correlated specifically with the presence or absence of joint sepsis.
The Impact of Incorrect Examples on Learning Fractions: A Field Experiment with 6th Grade Students
ERIC Educational Resources Information Center
Heemsoth, Tim; Heinze, Aiso
2014-01-01
Educational research indicates that error reflection, especially reflection on incorrect examples, has a positive effect on knowledge acquisition. The benefit of error reflections might be explained by the extended knowledge of incorrect strategies and concepts (negative knowledge) which fosters the learning of new content. In a field experiment…
Araújo, Maria Suely Peixoto de; Costa, Laura Olinda Bregieiro Fernandes
2009-03-01
This study focused on knowledge and use of emergency contraception among 4,210 adolescents (14-19 years) enrolled in public schools in Pernambuco State, Brazil. Information was collected using the Global School-Based Student Health Survey, previously validated. Knowledge, frequency, and form of use of emergency contraception were investigated. Independent variables were classified as socio-demographic and those related to sexual behavior. Most adolescents reported knowing and having received information about the method, but among those who had already used it, only 22.1% had done so correctly. Adjusted regression analysis showed greater likelihood of knowledge about the method among girls (OR = 5.03; 95%CI: 1.72-14.69) and the sexually initiated (OR = 1.52; 95%CI: 1.34-1.75), while rural residents were 68% less likely to know about the method. Rural residents showed 1.68 times higher odds (95%CI: 1.09-2.25) of incorrect use, while girls showed 71% lower likelihood of incorrect use. Sexual and reproductive education is necessary, especially among male and rural adolescents.
AlSabaani, Nasser A.; Behrens, Ashley; Jastanieah, Sabah; Al Malki, Salem; Al Jindan, Mohanna; Al Motowa, Saeed
2016-01-01
PURPOSE: The purpose of this study is to evaluate the causes of phakic implantable collamer lens (ICL) explantation/exchange at an eye hospital in Saudi Arabia. MATERIALS AND METHODS: A retrospective chart review was performed for patients who underwent ICL implantation from 2007 to March 2014 and data were collected on cases that underwent ICL explantation. RESULTS: Of the 787 ICL implants, 30 implants (3.8% [95% confidence interval 2.6%; 5.3%]) were explanted. The causes of explantation included incorrect lens size (22), cataract (4), high residual astigmatism (2), rhegmatogenous retinal detachment (1), and intolerable glare (1). Corrective measures mainly included an exchange with an appropriately sized lens (9), ICL explantation (11), with phacoemulsification and posterior chamber intraocular lens implantation (6), or replacement with an ICL of correct power (2). CONCLUSION: Incorrect ICL size was the most common cause of ICL explantation. More accurate sizing methods for ICL are required to reduce the explantation/exchange rate. PMID:27994391
Grasso, Michael A; Schwarcz, Sandra; Galbraith, Jennifer S; Musyoki, Helgar; Kambona, Caroline; Kellogg, Timothy A
2016-02-01
Condom use continues to be an important primary prevention tool to reduce the acquisition and transmission of HIV and other sexually transmitted infections. However, incorrect use of condoms can reduce their effectiveness. Using data from a 2012 nationally representative cross-sectional household survey conducted in Kenya, we analyzed a subpopulation of sexually active adults and estimated the percent that used condoms incorrectly during sex, and the type of condom errors. We used multivariable logistic regression to determine variables to be independently associated with incorrect condom use. Among 13,720 adolescents and adults, 8014 were sexually active in the previous 3 months (60.3%; 95% confidence interval [CI], 59.0-61.7). Among those who used a condom with a sex partner, 20% (95% CI, 17.4-22.6) experienced at least one instance of incorrect condom use in the previous 3 months. Of incorrect condom users, condom breakage or leakage was the most common error (52%; 95% CI, 44.5-59.6). Factors found to be associated with incorrect condom use were multiple sexual partnerships in the past 12 months (2 partners: adjusted odds ratio [aOR], 1.5; 95% CI, 1.0-2.0; P = 0.03; ≥3: aOR, 2.3; 95% CI, 1.5-3.5; P < 0.01) and reporting symptoms of a sexually transmitted infection (aOR, 2.8; 95% CI, 1.8-4.3; P < 0.01). Incorrect condom use is frequent among sexually active Kenyans and this may translate into substantial HIV transmission. Further understanding of the dynamics of condom use and misuse, in the broader context of other prevention strategies, will aid program planners in the delivery of appropriate interventions aimed at limiting such errors.
Multivariate Phylogenetic Comparative Methods: Evaluations, Comparisons, and Recommendations.
Adams, Dean C; Collyer, Michael L
2018-01-01
Recent years have seen increased interest in phylogenetic comparative analyses of multivariate data sets, but to date the varied proposed approaches have not been extensively examined. Here we review the mathematical properties required of any multivariate method, and specifically evaluate existing multivariate phylogenetic comparative methods in this context. Phylogenetic comparative methods based on the full multivariate likelihood are robust to levels of covariation among trait dimensions and are insensitive to the orientation of the data set, but display increasing model misspecification as the number of trait dimensions increases. This is because the expected evolutionary covariance matrix (V) used in the likelihood calculations becomes more ill-conditioned as trait dimensionality increases, and as evolutionary models become more complex. Thus, these approaches are only appropriate for data sets with few traits and many species. Methods that summarize patterns across trait dimensions treated separately (e.g., SURFACE) incorrectly assume independence among trait dimensions, resulting in nearly a 100% model misspecification rate. Methods using pairwise composite likelihood are highly sensitive to levels of trait covariation, the orientation of the data set, and the number of trait dimensions. The consequences of these debilitating deficiencies are that a user can arrive at differing statistical conclusions, and therefore biological inferences, simply from a dataspace rotation, like principal component analysis. By contrast, algebraic generalizations of the standard phylogenetic comparative toolkit that use the trace of covariance matrices are insensitive to levels of trait covariation, the number of trait dimensions, and the orientation of the data set. Further, when appropriate permutation tests are used, these approaches display acceptable Type I error and statistical power. 
We conclude that methods summarizing information across trait dimensions, as well as pairwise composite likelihood methods, should be avoided, whereas algebraic generalizations of the phylogenetic comparative toolkit provide a useful means of assessing macroevolutionary patterns in multivariate data. Finally, we discuss areas in which multivariate phylogenetic comparative methods are still in need of future development; namely highly multivariate Ornstein-Uhlenbeck models and approaches for multivariate evolutionary model comparisons. © The Author(s) 2017. Published by Oxford University Press on behalf of the Systematic Biology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
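The ill-conditioning argument above can be illustrated with a generic numerical example (not the phylogenetic covariance matrix itself, and not code from the paper): the condition number of a sample covariance matrix estimated from n species degrades sharply as the number of trait dimensions p approaches n.

```python
import numpy as np

def condition_number(p_traits, n_species, seed=0):
    # condition number of a sample covariance matrix estimated from
    # n_species observations of p_traits uncorrelated unit-variance traits
    rng = np.random.default_rng(seed)
    data = rng.normal(size=(n_species, p_traits))
    S = np.cov(data, rowvar=False)
    return float(np.linalg.cond(S))
```

With 50 species, the condition number for 2 traits is small, while for 45 traits it is orders of magnitude larger, which is one reason full-likelihood approaches are only appropriate for data sets with few traits and many species.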
ERIC Educational Resources Information Center
Thompson, Bruce
Researchers too infrequently consider the reliability of the scores they analyze, and this may lead to incorrect conclusions. Practice in this regard may be negatively influenced by telegraphic habits of speech implying that tests possess reliability and other measurement characteristics. Styles of speaking in journal articles, in textbooks, and in…
Circuits Protect Against Incorrect Power Connections
NASA Technical Reports Server (NTRS)
Delombard, Richard
1992-01-01
Simple circuits prevent application of incorrectly polarized or excessive voltages. Connected temporarily or permanently at power-connecting terminals. Devised to protect electrical and electronic equipment installed in spacecraft and subjected to variety of tests in different facilities prior to installation. Basic concept of protective circuits also applied easily to many kinds of electrical and electronic equipment that must be protected against incorrect power connections.
Gettig, Jacob P
2006-04-01
To determine the prevalence of established multiple-choice test-taking correct and incorrect answer cues in the American College of Clinical Pharmacy's Updates in Therapeutics: The Pharmacotherapy Preparatory Course, 2005 Edition, as an equal or lesser surrogate indication of the prevalence of such cues in the Pharmacotherapy board certification examination. All self-assessment and patient case question-and-answer sets were assessed individually to determine if they were subject to selected correct and incorrect answer cues commonly seen in multiple-choice question writing. If the question was considered evaluable, correct answer cues (longest answer, mid-range number, one of two similar choices, and one of two opposite choices) were tallied. In addition, incorrect answer cues (inclusionary language and grammatical mismatch) were also tallied. Each cue was counted if it did what was expected or did the opposite of what was expected. Multiple cues could be identified in each question. A total of 237 (47.7%) of 497 questions in the manual were deemed evaluable. A total of 325 correct answer cues and 35 incorrect answer cues were identified in the 237 evaluable questions. Most evaluable questions contained one to two correct and/or incorrect answer cue(s). Longest answer was the most frequently identified correct answer cue; however, it was the least likely to identify the correct answer. Inclusionary language was the most frequently identified incorrect answer cue. Incorrect answer cues were considerably more likely to identify incorrect answer choices than correct answer cues were able to identify correct answer choices. The use of established multiple-choice test-taking cues is unlikely to be of significant help when taking the Pharmacotherapy board certification examination, primarily because of the lack of questions subject to such cues and the inability of correct answer cues to accurately identify correct answers.
Incorrect answer cues, especially the use of inclusionary language, almost always will accurately identify an incorrect answer choice. Assuming that questions in the preparatory course manual were equal or lesser surrogates of those in the board certification examination, it is unlikely that intuition alone can replace adequate preparation and studying as the sole determinant of examination success.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
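The effect of a mis-identified center can be reproduced numerically. In this sketch (an illustration, not the paper's closed-form solution), a sinusoidal Siemens star is sampled along a circle around an assumed center; when that center is offset from the true one, the angular sinusoid is phase-distorted and the modulation recovered at the target frequency drops, mimicking the reduced measured SFR. All geometry values are illustrative.

```python
import numpy as np

def star_intensity(x, y, cycles=8):
    # sinusoidal Siemens star: intensity varies sinusoidally with polar angle
    theta = np.arctan2(y, x)
    return 0.5 + 0.5 * np.sin(cycles * theta)

def measured_modulation(center, radius=50.0, cycles=8, n=3600):
    # sample a circle of given radius around the *assumed* center and
    # recover the sinusoidal amplitude by projecting onto sin/cos
    phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
    cx, cy = center
    x = cx + radius * np.cos(phi)
    y = cy + radius * np.sin(phi)
    intensity = star_intensity(x, y, cycles)
    a = 2 * np.mean(intensity * np.sin(cycles * phi))
    b = 2 * np.mean(intensity * np.cos(cycles * phi))
    return np.hypot(a, b)
```

For a correctly centered measurement the recovered modulation equals the star's true 0.5 amplitude; a 5-pixel center offset at a 50-pixel sampling radius already lowers it noticeably (on the order of 15% in this configuration), biasing the measured SFR low.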
Assessing Discriminative Performance at External Validation of Clinical Prediction Models.
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W
2016-01-01
External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
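For concreteness, the two quantities compared across settings above, the c-statistic and the spread of the linear predictor (a simple case-mix benchmark), can be computed as in this sketch; this is a generic implementation of the standard definitions, not the authors' code.

```python
import numpy as np

def c_statistic(y, lp):
    # concordance: probability that a randomly chosen event has a higher
    # linear predictor than a randomly chosen non-event (ties count half)
    pos = lp[y == 1]
    neg = lp[y == 0]
    diffs = pos[:, None] - neg[None, :]
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / diffs.size

def case_mix_sd(lp):
    # spread of the linear predictor: a benchmark for case-mix heterogeneity
    return lp.std(ddof=1)
```

A lower c-statistic at external validation may reflect a narrower case-mix (smaller SD of the linear predictor) rather than incorrect regression coefficients, which is the distinction the study emphasizes.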
75 FR 32838 - Reports, Forms, and Recordkeeping Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-09
... Eichelberger, Ph.D., Office of Behavioral Safety Research (NTI-132), 1200 New Jersey Avenue, SE., Washington... Administration (NHTSA) proposes to collect observational data on correct and incorrect use of child restraint... FMVSS 225, Child Restraint Anchorage Systems), in order to provide another, easier method of attaching a...
An AIS-Based E-mail Classification Method
NASA Astrophysics Data System (ADS)
Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi
This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false positive rate.
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
The Lindstrom-Madden Method for Series Systems with Repeated Components
1984-08-01
Further, numerical evidence indicates (Harris and Soms (1983)) that for α levels of practical significance (4.4) was incorrectly…the candidate is the one for which the statistic, 1 ≤ j ≤ k, is a maximum; we might expect, by analogy, that the same holds for α levels of practical interest.
ERIC Educational Resources Information Center
Klopfenstein, Bruce C.
1989-01-01
Describes research that examined the strengths and weaknesses of technological forecasting methods by analyzing forecasting studies made for home video players. The discussion covers assessments and explications of correct and incorrect forecasting assumptions, and their implications for forecasting the adoption of home information technologies…
NASA Astrophysics Data System (ADS)
Jacobson, Erik; Simpson, Amber
2018-04-01
Replication studies play a critical role in scientific accumulation of knowledge, yet replication studies in mathematics education are rare. In this study, the authors replicated Thanheiser's (Educational Studies in Mathematics 75:241-251, 2010) study of prospective elementary teachers' conceptions of multidigit number and examined the main claim that most elementary pre-service teachers think about digits incorrectly at least some of the time. Results indicated no statistically significant difference in the distribution of conceptions between the original and replication samples and, moreover, no statistically significant differences in the distribution of sub-conceptions among prospective teachers with the most common conception. These results suggest confidence is warranted both in the generality of the main claim and in the utility of the conceptions framework for describing prospective elementary teachers' conceptions of multidigit number. The report further contributes a framework for replication of mathematics education research adapted from the field of psychology.
[Cause of death: from primary disease to direct cause of death].
Oppewal, F; Smedts, F M M; Meyboom-de Jong, B
2005-07-23
Following the death of a patient, the treating physician in the Netherlands is required to fill out two forms: Form A, the certificate of death, and Form B, which is used by Statistics Netherlands to compile data on causes of death. The latter form often poses difficulty for the physician with respect to the primary cause of death. This applies particularly to cases of sudden death, which account for one third of all deaths in the Netherlands. As a result, the statistical analyses appear to lead to an incorrect representation of the distribution of causes of death. A more thorough investigation into the primary cause of death is desirable, if necessary supported by a request for an autopsy. The primary cause of death is to be regarded as the basic disease from which the cascade of changes ultimately leading to death originated.
Second-order near-wall turbulence closures - A review
NASA Technical Reports Server (NTRS)
So, R. M. C.; Lai, Y. G.; Zhang, H. S.; Hwang, B. C.
1991-01-01
Advances in second-order near-wall turbulence closures are summarized. All closures under consideration are based on high-Reynolds-number models. Most near-wall closures proposed to date attempt to modify the high-Reynolds-number models for the dissipation function and the pressure redistribution term so that the resultant models are applicable all the way to the wall. The asymptotic behavior of the near-wall closures is examined and compared with the proper near-wall behavior of the exact Reynolds-stress equations. It is found that three second-order near-wall closures give the best correlations with simulated turbulence statistics. However, their predictions of near-wall Reynolds-stress budgets are considered to be incorrect. A proposed modification to the dissipation-rate equation remedies part of those predictions. It is concluded that further improvements are required if a complete replication of all the turbulence properties and Reynolds-stress budgets by a statistical model of turbulence is desirable.
Astrostatistical Analysis in Solar and Stellar Physics
NASA Astrophysics Data System (ADS)
Stenning, David Craig
This dissertation focuses on developing statistical models and methods to address data-analytic challenges in astrostatistics---a growing interdisciplinary field fostering collaborations between statisticians and astrophysicists. The astrostatistics projects we tackle can be divided into two main categories: modeling solar activity and Bayesian analysis of stellar evolution. These categories form Parts I and II of this dissertation, respectively. The first line of research we pursue involves classification and modeling of evolving solar features. Advances in space-based observatories are increasing both the quality and quantity of solar data, primarily in the form of high-resolution images. To analyze massive streams of solar image data, we develop a science-driven dimension reduction methodology to extract scientifically meaningful features from images. This methodology utilizes mathematical morphology to produce a concise numerical summary of the magnetic flux distribution in solar "active regions" that (i) is far easier to work with than the source images, (ii) encapsulates scientifically relevant information in a more informative manner than existing schemes (i.e., manual classification schemes), and (iii) is amenable to sophisticated statistical analyses. In a related line of research, we perform a Bayesian analysis of the solar cycle using multiple proxy variables, such as sunspot numbers. We take advantage of patterns and correlations among the proxy variables to model solar activity using data from proxies that have become available more recently, while also taking advantage of the long history of observations of sunspot numbers. This model is an extension of the Yu et al. (2012) Bayesian hierarchical model for the solar cycle that used the sunspot numbers alone. Since proxies have different temporal coverage, we devise a multiple imputation scheme to account for missing data.
We find that incorporating multiple proxies reveals important features of the solar cycle that are missed when the model is fit using only the sunspot numbers. In Part II of this dissertation we focus on two related lines of research involving Bayesian analysis of stellar evolution. We first focus on modeling multiple stellar populations in star clusters. It has long been assumed that all star clusters are comprised of single stellar populations---stars that formed at roughly the same time from a common molecular cloud. However, recent studies have produced evidence that some clusters host multiple populations, which has far-reaching scientific implications. We develop a Bayesian hierarchical model for multiple-population star clusters, extending earlier statistical models of stellar evolution (e.g., van Dyk et al. 2009, Stein et al. 2013). We also devise an adaptive Markov chain Monte Carlo algorithm to explore the complex posterior distribution. We use numerical studies to demonstrate that our method can recover parameters of multiple-population clusters, and also show how model misspecification can be diagnosed. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We also explore statistical properties of the estimators and determine that the influence of the prior distribution does not diminish with larger sample sizes, leading to non-standard asymptotics. In a final line of research, we present the first-ever attempt to estimate the carbon fraction of white dwarfs. This quantity has important implications for both astrophysics and fundamental nuclear physics, but is currently unknown. We use a numerical study to demonstrate that assuming an incorrect value for the carbon fraction leads to incorrect white-dwarf ages of star clusters. Finally, we present our attempt to estimate the carbon fraction of the white dwarfs in the well-studied star cluster 47 Tucanae.
Sjölin, Maria; Edmund, Jens Morgenthaler
2016-07-01
Dynamic treatment planning algorithms use a dosimetric leaf separation (DLS) parameter to model the multi-leaf collimator (MLC) characteristics. Here, we quantify the dosimetric impact of an incorrect DLS parameter and investigate whether common pretreatment quality assurance (QA) methods can detect this effect. Sixteen treatment plans with intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT) technique for multiple treatment sites were calculated with a correct and an incorrect setting of the DLS, corresponding to an MLC gap difference of 0.5 mm. Pretreatment verification QA was performed with a bi-planar diode array phantom and the electronic portal imaging device (EPID). Measurements were compared to the correct and incorrect planned doses using gamma evaluation with both global (G) and local (L) normalization. Correlation, specificity and sensitivity between the dose volume histogram (DVH) points for the planning target volume (PTV) and the gamma passing rates were calculated. The change in PTV and organ-at-risk DVH parameters was 0.4-4.1%. Good correlation (>0.83) between the PTVmean dose deviation and measured gamma passing rates was observed. Optimal gamma settings with 3%L/3mm (per beam and composite plan) and 3%G/2mm (composite plan) for the diode array phantom and 2%G/2mm (composite plan) for the EPID system were found. Global normalization and per-beam ROC analysis of the diode array phantom showed an area under the curve <0.6. A DLS error can worsen pretreatment QA using gamma analysis with reasonable credibility for the composite plan. A low detectability was demonstrated for a 3%G/3mm per-beam gamma setting. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
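The gamma evaluation used in this study combines a dose-difference criterion (e.g. 3%) with a distance-to-agreement criterion (e.g. 3 mm). As a hedged illustration only (clinical QA systems work on interpolated 2-D or 3-D dose grids), a minimal 1-D, global-normalization gamma index might look like:

```python
import math

def gamma_1d(ref, meas, spacing_mm, dose_pct=3.0, dta_mm=3.0):
    """1-D global-normalization gamma index: for each reference point,
    the minimum combined dose-difference / distance-to-agreement metric
    over all measured points. A point passes when gamma <= 1."""
    d_crit = dose_pct / 100.0 * max(ref)   # global dose criterion
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dist = (i - j) * spacing_mm
            g = math.sqrt((dist / dta_mm) ** 2 + ((dm - dr) / d_crit) ** 2)
            best = min(best, g)
        gammas.append(best)
    passing_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
    return gammas, passing_rate
```

Tightening `dose_pct` or `dta_mm` (e.g. from 3%/3mm to 3%/2mm, as the authors found optimal for the composite plan) lowers passing rates and increases sensitivity to modeling errors such as an incorrect DLS.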
Management of Childhood Illness at Health Facilities in Benin: Problems and Their Causes
Rowe, Alexander K.; Onikpo, Faustin; Lama, Marcel; Cokou, Francois; Deming, Michael S.
2001-01-01
Objectives. To prepare for the implementation of Integrated Management of Childhood Illness (IMCI) in Benin, we studied the management of ill children younger than 5 years at outpatient health facilities. Methods. We observed a representative sample of consultations; after each consultation, we interviewed caregivers and reexamined children. Health workers' performance was evaluated against IMCI guidelines. To identify determinants of performance, statistical modeling was performed and 6 focus groups with health workers were conducted to solicit their opinions. Results. Altogether, 584 children were enrolled and 101 health workers were observed; 130 health workers participated in focus group discussions. Many serious deficiencies were found: incomplete assessment of children's signs and symptoms, incorrect diagnosis and treatment of potentially life-threatening illnesses, inappropriate prescription of dangerous sedatives, missed opportunities to vaccinate, and failure to refer severely ill children for hospitalization. Quantitative and qualitative analyses showed various health facility–, health worker–, caregiver-, and child-related factors as possible determinants of health worker performance. Conclusions. Action is urgently needed. Our results suggest that to improve health care delivery, interventions should target both the health system and the community level. PMID:11574325
Inverse Optimization: A New Perspective on the Black-Litterman Model.
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch
2012-12-11
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.
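For intuition about the statistical framework this paper replaces: in its classical form, the Black-Litterman posterior expected return is a precision-weighted blend of the market equilibrium return and the investor's view. The sketch below is the standard single-asset simplification (symbols `pi`, `tau_sigma2`, `q`, `omega` follow conventional BL notation), not the paper's inverse-optimization estimators:

```python
def bl_posterior_mean(pi, tau_sigma2, q, omega):
    """Single-asset Black-Litterman posterior mean: a precision-weighted
    average of the equilibrium return (pi, with prior variance tau*sigma^2)
    and the investor view (q, with view variance omega)."""
    w_prior = 1.0 / tau_sigma2   # precision of the equilibrium prior
    w_view = 1.0 / omega         # precision of the investor view
    return (w_prior * pi + w_view * q) / (w_prior + w_view)
```

A confident view (small `omega`) pulls the posterior toward `q`; a diffuse view leaves it near the equilibrium `pi`. The multi-asset version replaces the scalars with covariance matrices and a pick matrix P.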
Numerical Investigations of Moisture Distribution in a Selected Anisotropic Soil Medium
NASA Astrophysics Data System (ADS)
Iwanek, M.
2018-01-01
Soil-profile moisture changes in both time and space and depends on many factors. Changes in the quantity of water in soil can be determined from in situ measurements, but numerical methods are increasingly used for this purpose. The quality of the results obtained with the relevant software packages depends on an appropriate description and parameterization of the soil medium, so accounting for soil anisotropy is of considerable importance. Although anisotropy can be taken into account in many numerical models, isotropic soil is often assumed in practice. This assumption, however, can lead to incorrect results in simulations of water changes in the soil medium. In this article, results of numerical simulations of the moisture distribution in a selected soil profile are presented. The calculations were conducted assuming isotropic and anisotropic conditions. Empirical verification of the numerical results indicated statistically significant discrepancies between the two conditions; a better fit between measured and calculated moisture values was obtained when anisotropy was included in the simulation model.
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K
2018-02-01
In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.
Hagler, Megan M.; Freeman, Mary C.; Wenger, Seth J.; Freeman, Byron J.; Rakes, Patrick L.; Shute, J.R.
2011-01-01
Rarely encountered animals may be present but undetected, potentially leading to incorrect assumptions about the persistence of a local population or the conservation priority of a particular area. The federally endangered and narrowly endemic Conasauga logperch (Percina jenkinsi) is a good example of a rarely encountered fish species of conservation concern, for which basic population statistics are lacking. We evaluated the occurrence frequency for this species using surveys conducted with a repeat-observation sampling approach during the summer of 2008. We also analyzed museum records since the late 1980s to evaluate the trends in detected status through time. The results of these analyses provided support for a declining trend in this species over a portion of its historical range, despite low estimated detection probability. We used the results to identify the expected information return for a given level of monitoring where the sampling approach incorporates incomplete detection. The method applied here may be of value where historic occurrence records are available, provided that the assumption of constant capture efficiency is reasonable.
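The abstract's point about incomplete detection can be made concrete with a standard back-of-the-envelope relation: under a constant per-survey detection probability p, the chance of missing a present species in k independent surveys is (1-p)^k. The sketch below is illustrative only, not the authors' occupancy model:

```python
import math

def surveys_needed(p_detect, confidence=0.95):
    """Number of repeat surveys needed so that, if the species is present,
    the probability of detecting it at least once reaches `confidence`,
    assuming a constant per-survey detection probability p_detect."""
    miss = 1.0 - confidence                      # tolerated miss probability
    return math.ceil(math.log(miss) / math.log(1.0 - p_detect))
```

For a rarely encountered fish with low per-survey detectability, this relation shows why many repeat visits are needed before non-detection can be read as evidence of absence.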
An Overview of Biomolecular Event Extraction from Scientific Documents
Vanegas, Jorge A.; Matos, Sérgio; González, Fabio; Oliveira, José L.
2015-01-01
This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed. PMID:26587051
Automatic initialization and quality control of large-scale cardiac MRI segmentations.
Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F
2018-01-01
Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. 
The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Shafie, Mensur; Muzeyin, Kedija; Worku, Yoseph; Martín-Aragón, Sagrario
2018-01-01
Background and aim Self-medication (SM) is one part of self-care, which is known to contribute to primary health care. If practiced appropriately, it has major benefits for consumers, such as self-reliance and decreased expense. However, inappropriate practice can have potential dangers such as incorrect self-diagnosis, dangerous drug-drug interactions, incorrect manner of administration, incorrect dosage, incorrect choice of therapy, masking of a severe disease, and/or risk of dependence and abuse. The main objective of this study was to assess the prevalence and determinants of the self-medication practice (SMP) in Addis Ababa. Methodology A community-based cross-sectional study was conducted among selected households in Addis Ababa from April 2016 to May 2016, with a recall period of two months before its conduction. Trained data collectors were employed to collect the data from the 604 sampled participants using pre-tested and validated questionnaires. Result Among the 604 participants involved in this study, 422 (69.9%) were female and 182 (30.1%) were male, with a mean age of 41.04 (± 13.45) years. The prevalence of SM in this study was 75.5%. The three most frequently reported ailments were headache 117 (25.7%), abdominal pain 59 (12.9%) and cough 54 (11.8%). The two main reasons for SM were mildness of illness 216 (47.4%) and previous knowledge about the drug 106 (23.2%). The two most frequently consumed medications were paracetamol 92 (20.2%) and traditional remedies 73 (16.0%), while drug retail outlets 319 (83.3%) were the main source of drugs. The two most frequently reported sources of drug information were health professionals 174 (45.4%) and experience from previous treatment 82 (21.4%). Moreover, there were statistically significant differences among respondents who reported practicing SM based on income and knowledge about appropriate SMP.
Conclusion and recommendation Self-medication was practiced with a range of drugs, from the conventional paracetamol and NSAIDs to antimicrobials. Given that the practice of SM is inevitable, health authorities and professionals are strongly urged to educate the public not only on the advantages and disadvantages of SM but also on its proper use. PMID:29579074
A usability evaluation of four commercial dental computer-based patient record systems
Thyvalikakath, Thankam P.; Monaco, Valerie; Thambuganipalle, Hima Bindu; Schleyer, Titus
2008-01-01
Background The usability of dental computer-based patient record (CPR) systems has not been studied, despite early evidence that poor usability is a problem for dental CPR system users at multiple levels. Methods The authors conducted formal usability tests of four dental CPR systems by using a purposive sample of four groups of five novice users. The authors measured task outcomes (correctly completed, incorrectly completed and incomplete) in each CPR system while the participants performed nine clinical documentation tasks, as well as the number of usability problems identified in each CPR system and their potential relationship to task outcomes. The authors reviewed the software application design aspects responsible for these usability problems. Results The range for correctly completed tasks was 16 to 64 percent, for incorrectly completed tasks 18 to 38 percent and for incomplete tasks 9 to 47 percent. The authors identified 286 usability problems. The main types were "three unsuccessful attempts," "negative affect," and "task incorrectly completed." They also identified six problematic interface and interaction designs that led to usability problems. Conclusion The four dental CPR systems studied have significant usability problems for novice users, resulting in a steep learning curve and potentially reduced system adoption. Clinical Implications The significant number of data entry errors raises concerns about the quality of documentation in clinical practice. PMID:19047669
CLustre: semi-automated lineament clustering for palaeo-glacial reconstruction
NASA Astrophysics Data System (ADS)
Smith, Mike; Anders, Niels; Keesstra, Saskia
2016-04-01
Palaeo-glacial reconstructions, or "inversions", using evidence from the palimpsest landscape are increasingly being undertaken with larger and larger databases. Predominant in landform evidence is the lineament (or drumlin), where the biggest datasets number in excess of 50,000 individual forms. One stage in the inversion process requires the identification of lineaments that are generically similar and then their subsequent interpretation into a coherent chronology of events. Here we present CLustre, a semi-automated algorithm that clusters lineaments using a locally adaptive, region-growing method. This is initially tested using 1,500 model runs on a synthetic dataset, before application to two case studies (where manual clustering has been undertaken by independent researchers): (1) Dubawnt Lake, Canada and (2) Victoria Island, Canada. Results using the synthetic data show that classifications are robust in most scenarios, although specific cases of cross-cutting lineaments may lead to incorrect clusters. Application to the case studies showed a very good match to existing published work, with differences related to limited numbers of unclassified lineaments and parallel cross-cutting lineaments. The value in CLustre comes from the semi-automated, objective application of a classification method that is repeatable. Once classified, summary statistics of lineament groups can be calculated and then used in the inversion.
Jones, Pete R
2018-05-16
During psychophysical testing, a loss of concentration can cause observers to answer incorrectly, even when the stimulus is clearly perceptible. Such lapses limit the accuracy and speed of many psychophysical measurements. This study evaluates an automated technique for detecting lapses based on body movement (postural instability). Thirty-five children (8-11 years of age) and 34 adults performed a typical psychophysical task (orientation discrimination) while seated on a Wii Fit Balance Board: a gaming device that measures center of pressure (CoP). Incorrect responses on suprathreshold catch trials provided the "reference standard" measure of when lapses in concentration occurred. Children exhibited significantly greater variability in CoP on lapse trials, indicating that postural instability provides a feasible, real-time index of concentration. Limitations and potential applications of this method are discussed.
The effect of S-wave arrival times on the accuracy of hypocenter estimation
Gomberg, J.S.; Shedlock, K.M.; Roecker, S.W.
1990-01-01
We have examined the theoretical basis behind some of the widely accepted "rules of thumb' for obtaining accurate hypocenter estimates that pertain to the use of S phases and illustrate, in a variety of ways, why and when these "rules' are applicable. Most methods used to determine earthquake hypocenters are based on iterative, linearized, least-squares algorithms. We examine the influence of S-phase arrival time data on such algorithms by using the program HYPOINVERSE with synthetic datasets. We conclude that a correctly timed S phase recorded within about 1.4 focal depth's distance from the epicenter can be a powerful constraint on focal depth. Furthermore, we demonstrate that even a single incorrectly timed S phase can result in depth estimates and associated measures of uncertainty that are significantly incorrect. -from Authors
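One reason a single S arrival constrains location so strongly is the classical S-minus-P relation: with uniform velocities, the (unknown) origin time cancels and the arrival-time difference alone fixes the hypocentral distance. A minimal sketch of that relation (the default crustal velocities are illustrative placeholders, not values from this study):

```python
def sp_distance_km(t_sp_s, vp_km_s=6.0, vs_km_s=3.5):
    """Hypocentral distance implied by an S-minus-P arrival-time
    difference (seconds), assuming straight ray paths and uniform
    P and S velocities. Derived from d/vs - d/vp = tS - tP."""
    return t_sp_s * vp_km_s * vs_km_s / (vp_km_s - vs_km_s)
```

The same relation also shows why a mistimed S pick is so damaging: the distance error scales directly with the timing error, which then propagates into depth through the least-squares inversion.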
[Diabetic retinopathy complications--12-year retrospective study].
Ignat, Florica; Davidescu, Livia
2002-01-01
In a 12-year retrospective study, we analyzed the incidence of diabetes mellitus cases hospitalized in the Ophthalmology Clinic in Craiova, with special attention to the frequency of diabetic retinopathy and its complications, and their relation to other general diseases, especially cardiovascular disease, which contribute to the worsening of diabetic ocular injuries. The study underlines the high incidence of newly diagnosed diabetes mellitus cases presenting at the stage of complicated diabetic retinopathy; according to our statistics, the high frequency of ocular complications is explained by insufficient and sometimes incorrect treatment, and in many cases by complete neglect on the part of patients.
Comparison of normalization methods for the analysis of metagenomic gene abundance data.
Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik
2018-04-20
In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values, and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, cumulative sum scaling (CSS) also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect or obfuscated biological interpretation.
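A minimal sketch of one of the recommended approaches, RLE ("median-of-ratios") scaling. The toy genes-by-samples count matrix below is invented, and production implementations (e.g., in DESeq2) handle zeros and edge cases more carefully:

```python
import numpy as np

# RLE ("median-of-ratios") scaling factors for a genes x samples count matrix.
def rle_size_factors(counts):
    log_c = np.log(counts)
    ref = log_c.mean(axis=1)                # per-gene log geometric mean
    usable = np.isfinite(ref)               # drops genes with any zero count
    return np.exp(np.median(log_c[usable] - ref[usable, None], axis=0))

counts = np.array([[10., 20.], [100., 200.], [30., 60.], [5., 10.]])
print(rle_size_factors(counts))  # second sample ~2x the depth of the first
```

Dividing each sample's counts by its size factor removes the depth difference before testing for differential abundance.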
On the far-IR and sub-mm spectra of spiral galaxies
NASA Technical Reports Server (NTRS)
Stark, A. A.; Davidson, J. A.; Harper, D. A.; Pernic, R.; Loewenstein, R.
1989-01-01
Photometric measurements of three Virgo cluster spirals (NGC4254, NGC4501, and NGC4654) at 160-microns (far-infrared) and 360-microns (submillimeter) wavelengths are compared with theoretical models and observations at other wavelengths. It is shown that the data at the observed wavelengths do not fit any of the interstellar dust grain models very well; four possibilities are offered to explain the discrepancies: the observed wavelength points are incorrect; the previously observed data are incorrect; both sets of data are incorrect; or the premise of the analysis is incorrect, i.e., a composite far-infrared spectrum of normal spiral galaxies is meaningless because individual galaxies vary considerably in their far-infrared properties. It is also noted that the observed data are inconsistent with models having large cold grains.
Babies and math: A meta-analysis of infants' simple arithmetic competence.
Christodoulou, Joan; Lac, Andrew; Moore, David S
2017-08-01
Wynn's (1992) seminal research reported that infants looked longer at stimuli representing "incorrect" versus "correct" solutions of basic addition and subtraction problems and concluded that infants have innate arithmetical abilities. Since then, infancy researchers have attempted to replicate this effect, yielding mixed findings. The present meta-analysis aimed to systematically compile and synthesize all of the primary replications and extensions of Wynn (1992) that have been conducted to date. The synthesis included 12 studies consisting of 26 independent samples and 550 unique infants. The summary effect, computed using a random-effects model, was statistically significant, d = +0.34, p < .001, suggesting that the phenomenon Wynn originally reported is reliable. Five different tests of publication bias yielded mixed results, suggesting that while a moderate level of publication bias is probable, the summary effect would be positive even after accounting for this issue. Out of the 10 metamoderators tested, none were found to be significant, but most of the moderator subgroups were significantly different from a null effect. Although this meta-analysis provides support for Wynn's original findings, further research is warranted to understand the underlying mechanisms responsible for infants' visual preferences for "mathematically incorrect" test stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
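The random-effects summary effect reported above is commonly computed with a DerSimonian-Laird estimator; a compact sketch, using made-up effect sizes and variances rather than the meta-analysis's actual data:

```python
import numpy as np

# DerSimonian-Laird random-effects meta-analysis of standardized mean
# differences. Effect sizes and variances below are invented for illustration.
def dersimonian_laird(d, var):
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                            # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)       # Cochran's heterogeneity Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)  # between-study variance
    w_re = 1.0 / (var + tau2)                # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    return d_re, np.sqrt(1.0 / np.sum(w_re)), tau2

d_re, se, tau2 = dersimonian_laird(d=[0.50, 0.10, 0.45, 0.30, 0.25],
                                   var=[0.04, 0.05, 0.03, 0.06, 0.04])
print(round(d_re, 2), round(se, 3))
```

A z-test on `d_re / se` then gives the significance of the summary effect.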
Non-ignorable missingness in logistic regression.
Wang, Joanna J J; Bartlett, Mark; Ryan, Louise
2017-08-30
Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show the observed likelihood is non-identifiable under a non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.
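The bias from outcome-dependent nonresponse that the abstract describes can be demonstrated with a small simulation (the parameters and response probabilities below are invented; this is not the authors' Bayesian selection model). For logistic regression with missingness depending only on the outcome, the slope remains approximately consistent while the intercept is shifted by the log ratio of response probabilities:

```python
import numpy as np

# Simulate outcome-dependent (non-ignorable) missingness in logistic
# regression and compare full-data and complete-case estimates.
def fit_logistic(X, y, n_iter=50):
    """Newton-Raphson maximum likelihood for logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])  # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))   # true intercept -0.5, slope 1.0
y = rng.binomial(1, p)

# Respondents with y = 1 reply roughly half as often as those with y = 0
observed = rng.random(n) < np.where(y == 1, 0.5, 0.9)
X = np.column_stack([np.ones(n), x])
full = fit_logistic(X, y)                    # ~ (-0.5, 1.0)
cc = fit_logistic(X[observed], y[observed])  # intercept biased by ln(0.5/0.9)
print(np.round(full, 2), np.round(cc, 2))
```

Varying the assumed response probabilities and re-fitting is, in miniature, the kind of sensitivity analysis the paper formalizes.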
Richards-Babb, Michelle; Curtis, Reagan; Georgieva, Zomitsa; Penn, John H
2015-11-10
Use of online homework as a formative assessment tool for organic chemistry coursework was examined. Student perceptions of online homework in terms of (i) its ranking relative to other course aspects, (ii) their learning of organic chemistry, and (iii) whether it improved their study habits and how students used it as a learning tool were investigated. Our students perceived the online homework as one of the more useful course aspects for learning organic chemistry content. We found a moderate and statistically significant correlation between online homework performance and final grade. Gender as a variable was ruled out since significant gender differences in overall attitude toward online homework use and course success rates were not found. Our students expressed relatively positive attitudes toward use of online homework with a majority indicating improved study habits (e.g., study in a more consistent manner). Our students used a variety of resources to remediate incorrect responses (e.g., class materials, general online materials, and help from others). However, 39% of our students admitted to guessing at times, instead of working to remediate incorrect responses. In large enrollment organic chemistry courses, online homework may act to bridge the student-instructor gap by providing students with a supportive mechanism for regulated learning of content.
Looking and touching: What extant approaches reveal about the structure of early word knowledge
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2014-01-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants’ responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. PMID:25444711
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the underestimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistically precise results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.
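The bootstrap confidence-interval machinery mentioned above can be sketched generically. This is a plain percentile bootstrap on synthetic data, not the paper's contingency-table-specific procedure:

```python
import numpy as np

# Generic percentile-bootstrap confidence interval (illustrative only).
def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

sample = np.random.default_rng(3).normal(loc=2.0, scale=1.0, size=100)
lo, hi = bootstrap_ci(sample, np.mean)
print(round(lo, 2), round(hi, 2))  # a ~95% interval around the sample mean
```

The paper's point is that resampling must respect the correct joint sampling distribution; bootstrapping under the wrong independence assumption yields the overly narrow intervals the simulations exhibit.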
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-01
... Restraint Anchorage Systems), in order to provide another, easier method of attaching a child restraint to... take to improve child passenger safety. In addition, NHTSA will publish the findings of this research... observational data on correct and incorrect use of child restraint systems in passenger vehicles, as well as...
ERIC Educational Resources Information Center
Palpacuer-Lee, Christelle; Curtis, Jessie Hutchison
2017-01-01
Now more than ever, teachers of world languages are encouraged to become intercultural mediators in their communities and classrooms. This study describes the impact of an innovative community-based teacher education program for developing participants' interculturality. Building on narrative methods of investigation, we explore the potential of…
Cirković, Ivana; Hauschild, Tomasz; Jezek, Petr; Dimitrijević, Vladimir; Vuković, Dragana; Stepanović, Srdjan
2008-08-01
This study evaluated the performance of the BD Phoenix system for the identification (ID) and antimicrobial susceptibility testing (AST) of Staphylococcus vitulinus. Of the 10 S. vitulinus isolates included in the study, 2 were obtained from the Czech Collection of Microorganisms, 5 from the environment, 2 from human clinical samples, and 1 from an animal source. The results of conventional biochemical and molecular tests served as the reference method for ID, while antimicrobial susceptibility testing performed in accordance with Clinical and Laboratory Standards Institute recommendations, together with PCR for the mecA gene, served as the reference for AST. Three isolates were incorrectly identified by the BD Phoenix system; one of these was incorrectly identified to the genus level, and two to the species level. The results of AST by the BD Phoenix system were in agreement with those by the reference method used. While the results of susceptibility testing compared favorably, the 70% accuracy of the Phoenix system for identification of this unusual staphylococcal species was not fully satisfactory.
Cadarette, Suzanne M; Dickson, Leigh; Gignac, Monique A M; Beaton, Dorcas E; Jaglal, Susan B; Hawker, Gillian A
2007-06-18
The ability to locate those sampled has important implications for response rates and thus the success of survey research. The purpose of this study was to examine predictors of locating women requiring tracing using publicly available methods (primarily Internet searches), and to determine the additional benefit of vital statistics linkages. Random samples of women aged 65-89 years residing in two regions of Ontario, Canada were selected from a list of those who completed a questionnaire between 1995 and 1997 (n = 1,500). A random sample of 507 of these women had been searched on the Internet as part of a feasibility pilot in 2001. All 1,500 women sampled were mailed a newsletter and information letter prior to recruitment by telephone in 2003 and 2004. Those with returned mail or incorrect telephone number(s) required tracing. Predictors of locating women were examined using logistic regression. Tracing was required for 372 (25%) of the women sampled, and of these, 181 (49%) were located. Predictors of locating women were: younger age, residing in less densely populated areas, having had a web-search completed in 2001, and listed name identified on the Internet prior to recruitment in 2003. Although vital statistics linkages to death records subsequently identified 41 subjects, these data were incomplete. Prospective studies may benefit from using Internet resources at recruitment to determine the listed names for telephone numbers thereby facilitating follow-up tracing and improving response rates. Although vital statistics linkages may help to identify deceased individuals, these may be best suited for post hoc response rate adjustment.
NASA Astrophysics Data System (ADS)
Halperin, D.; Hart, R. E.; Fuelberg, H. E.; Cossuth, J.
2013-12-01
Predicting tropical cyclone (TC) genesis has been a vexing problem for forecasters. While the literature describes environmental conditions which are necessary for TC genesis, predicting if and when a specific disturbance will organize and become a TC remains a challenge. As recently as 5-10 years ago, global models possessed little if any skill in forecasting TC genesis. However, due to increased resolution and more advanced model parameterizations, we have reached the point where global models can provide useful TC genesis guidance to operational forecasters. A recent study evaluated five global models' ability to predict TC genesis out to four days over the North Atlantic basin (Halperin et al. 2013). The results indicate that the models are indeed able to capture the genesis time and location correctly a fair percentage of the time. The study also uncovered model biases. For example, probability of detection and false alarm rate varies spatially within the basin. Also, as expected, the models' performance decreases with increasing lead time. In order to explain these and other biases, it is useful to analyze the model-indicated genesis events further to determine whether or not there are systematic differences between successful forecasts (hits), false alarms, and miss events. This study will examine composites of a number of physically-relevant environmental parameters (e.g., magnitude of vertical wind shear, areally averaged mid-level relative humidity) and disturbance-based parameters (e.g., 925 hPa maximum wind speed, vertical alignment of relative vorticity) among each TC genesis event classification (i.e., hit, false alarm, miss). We will use standard statistical tests (e.g., Student's t test, Mann-Whitney U test) to calculate whether or not any differences are statistically significant. We also plan to discuss how these composite results apply to a few illustrative case studies.
The results may help determine which aspects of the forecast are (in)correct and whether the incorrect aspects can be bias-corrected. This, in turn, may allow us to further enhance probabilistic forecasts of TC genesis.
The event-related potential effects of cognitive conflict in a Chinese character-generation task.
Qiu, Jiang; Zhang, Qinglin; Li, Hong; Luo, Yuejia; Yin, Qinging; Chen, Antao; Yuan, Hong
2007-06-11
High-density event-related potentials were recorded to examine the electrophysiologic correlates of the evaluation of possible answers provided during a Chinese character-generation task. We examined three conditions: the character given was what participants initially generated (Consistent answer), the character given was correct (Unexpected Correct answer), or it was incorrect (Unexpected Incorrect answer). Results showed that Unexpected Correct and Incorrect answers elicited a more negative event-related potential deflection (N320) than did Consistent answers between 300 and 400 ms. Dipole source analysis of difference waves (Unexpected Correct or Incorrect minus Consistent answers) localized the generator of the N320 in the anterior cingulate cortex. The N320 therefore likely reflects the cognitive change or conflict between old and new ways of thinking while identifying and judging characters.
78 FR 4092 - Airworthiness Directives; Cessna Aircraft Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-18
... aircraft's hydraulic power pack wiring for incorrect installation, and if needed, correct the installation... hydraulic power pack wiring for incorrect installation, and if needed, correct the installation. Since...
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm2 of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr-1) and southeast (-24.2 and -27.9 cm yr-1), with small mass gains (+1.4 to +7.7 cm yr-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
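A one-dimensional toy version of the basin least-squares inversion illustrates why predefined basins plus a known smoothing operator let leakage be undone by least squares. The grid size, smoothing width, and basin amplitudes below are invented; real GRACE processing works on spherical-harmonic fields, not a 1-D grid:

```python
import numpy as np

# 1-D toy of the basin least-squares inversion: true mass is uniform within
# each predefined "basin", observations are a smoothed ("leaky") version plus
# noise, and basin amplitudes are recovered by least squares.
n = 200
x = np.arange(n)
basins = [(0, 70), (70, 140), (140, 200)]       # predefined basin edges
true_amp = np.array([-20.0, 5.0, -30.0])        # illustrative amplitudes

def smooth(signal, width=15):
    kern = np.exp(-0.5 * (np.arange(-40, 41) / width) ** 2)
    return np.convolve(signal, kern / kern.sum(), mode="same")

# Design matrix: each column is a basin's indicator after the same smoothing
G = np.column_stack([smooth(((x >= a) & (x < b)).astype(float))
                     for a, b in basins])
truth = sum(a * ((x >= lo) & (x < hi)) for a, (lo, hi) in zip(true_amp, basins))
obs = smooth(truth) + np.random.default_rng(2).normal(0, 0.5, n)
est, *_ = np.linalg.lstsq(G, obs, rcond=None)
print(np.round(est, 1))  # should be close to the true basin amplitudes
```

If the basin set omits a region with real signal (as with the coarse Canadian basins noted above), its mass is forced into neighbouring columns of `G`, producing exactly the systematic errors described.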
One output function: a misconception of students studying digital systems - a case study
NASA Astrophysics Data System (ADS)
Trotskovsky, E.; Sabag, N.
2015-05-01
Background: Learning processes are usually characterized by students' misunderstandings and misconceptions. Engineering educators intend to help their students overcome their misconceptions and achieve correct understanding of the concept. This paper describes a misconception in digital systems held by many students who believe that combinational logic circuits should have only one output. Purpose: The current study aims to investigate the roots of the misconception about one-output function and the pedagogical methods that can help students overcome the misconception. Sample: Three hundred and eighty-one students in the Departments of Electrical and Electronics and Mechanical Engineering at an academic engineering college, who learned the same topics of a digital combinational system, participated in the research. Design and method: In the initial research stage, students were taught according to the traditional method - first to design a one-output combinational logic system, and then to implement a system with a number of output functions. In the main stage, an experimental group was taught using a new method whereby they were shown how to implement a system with several output functions, prior to learning about one-output systems. A control group was taught using the traditional method. In the replication stage (the third stage), an experimental group was taught using the new method. A mixed research methodology was used to examine the results of the new learning method. Results: Quantitative research showed that the new teaching approach resulted in a statistically significant decrease in student errors, and qualitative research revealed students' erroneous thinking patterns. Conclusions: It can be assumed that the traditional teaching method generates an incorrect mental model of the one-output function among students. The new pedagogical approach prevented the creation of an erroneous mental model and helped students develop the correct conceptual understanding.
O'Reilly, Joseph E; Donoghue, Philip C J
2018-03-01
Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.
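The majority-rule consensus (MRC) rule the authors advocate reduces to counting clades across the posterior sample and keeping those that appear in more than half the trees. A toy sketch, representing each sampled tree simply as a set of clades (tip sets) rather than parsing Newick as real pipelines do:

```python
from collections import Counter

# Toy majority-rule consensus: each sampled tree is represented as the set
# of its clades (frozensets of tip labels). Only the >50% counting rule is
# illustrated here.
def majority_rule(trees, threshold=0.5):
    counts = Counter(clade for tree in trees for clade in tree)
    return {clade for clade, c in counts.items() if c / len(trees) > threshold}

t1 = {frozenset("AB"), frozenset("ABC")}
t2 = {frozenset("AB"), frozenset("ABD")}   # conflicts with t1/t3 on deep clade
t3 = {frozenset("AB"), frozenset("ABC")}
consensus = majority_rule([t1, t2, t3])
print(sorted("".join(sorted(c)) for c in consensus))  # → ['AB', 'ABC']
```

Because a clade must recur across most of the sample to survive, weakly supported clades that happen to sit in the single best-scoring tree (the MCC failure mode above) are excluded.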
Bruno, Thiers; Abrahão, Julia
2012-01-01
This study examines the actions taken by operators aimed at preventing and combating information security incidents at a banking organization. The work utilizes the theoretical framework of ergonomics and cognitive psychology. The method is workplace ergonomic analysis. Its focus is directed towards examining the cognitive dimension of the work environment with special attention to the occurrence of correlations between variability in incident frequency and the results of sign detection actions. It categorizes 45,142 operator decisions according to the theory of signal detection (Sternberg, 2000). It analyzes the correlation between incident proportions (indirectly associated with the cognitive efforts demanded from the operator) and operator decisions. The study demonstrated the existence of a positive correlation between incident proportions and false positive decisions (false alarms). However, this correlation could not be observed in relation to decisions of the false-negative type (incorrect rejection).
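Signal detection analyses like the one above typically summarize the four decision categories (hits, misses, false alarms, correct rejections) with sensitivity (d') and criterion (c); a minimal sketch with invented counts, not the study's 45,142 decisions:

```python
from statistics import NormalDist

# Sensitivity (d') and criterion (c) from a 2x2 signal-detection table.
def sdt_indices(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf                 # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

d, c = sdt_indices(hits=80, misses=20, false_alarms=10, correct_rejections=90)
print(round(d, 2), round(c, 2))  # → 2.12 0.22
```

Correlating incident proportions with the false-alarm (and false-negative) cells of such tables is the analysis the study performs.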
Clausing, Peter; Robinson, Claire; Burtscher-Schaden, Helmut
2018-03-13
The present paper scrutinises the European authorities' assessment of the carcinogenic hazard posed by glyphosate based on Regulation (EC) 1272/2008. We use the authorities' own criteria as a benchmark to analyse their weight of evidence (WoE) approach. Therefore, our analysis goes beyond the comparison of the assessments made by the European Food Safety Authority and the International Agency for Research on Cancer published by others. We show that not classifying glyphosate as a carcinogen by the European authorities, including the European Chemicals Agency, appears to be not consistent with, and in some instances, a direct violation of the applicable guidance and guideline documents. In particular, we criticise an arbitrary attenuation by the authorities of the power of statistical analyses; their disregard of existing dose-response relationships; their unjustified claim that the doses used in the mouse carcinogenicity studies were too high and their contention that the carcinogenic effects were not reproducible by focusing on quantitative and neglecting qualitative reproducibility. Further aspects incorrectly used were historical control data, multisite responses and progression of lesions to malignancy. Contrary to the authorities' evaluations, proper application of statistical methods and WoE criteria inevitably leads to the conclusion that glyphosate is 'probably carcinogenic' (corresponding to category 1B in the European Union). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. 
Also, when we use PRKs combined with other simple kernels that include evolutionary information, accuracy improved while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented as a kernel, so completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times decreased with no compromise of accuracy. We also showed that by combining PRKs with other kernels that include evolutionary information, the accuracy can be further improved. As our approach can use any type of sequence data, genes do not need to be properly annotated, avoiding the accumulation of errors caused by incorrect previous annotations.
Optimizing multiple-choice tests as tools for learning.
Little, Jeri L; Bjork, Elizabeth Ligon
2015-01-01
Answering multiple-choice questions with competitive alternatives can enhance performance on a later test, not only on questions about the information previously tested, but also on questions about related information not previously tested-in particular, on questions about information pertaining to the previously incorrect alternatives. In the present research, we assessed a possible explanation for this pattern: When multiple-choice questions contain competitive incorrect alternatives, test-takers are led to retrieve previously studied information pertaining to all of the alternatives in order to discriminate among them and select an answer, with such processing strengthening later access to information associated with both the correct and incorrect alternatives. Supporting this hypothesis, we found enhanced performance on a later cued-recall test for previously nontested questions when their answers had previously appeared as competitive incorrect alternatives in the initial multiple-choice test, but not when they had previously appeared as noncompetitive alternatives. Importantly, however, competitive alternatives were not more likely than noncompetitive alternatives to be intruded as incorrect responses, indicating that a general increased accessibility for previously presented incorrect alternatives could not be the explanation for these results. The present findings, replicated across two experiments (one in which corrective feedback was provided during the initial multiple-choice testing, and one in which it was not), thus strongly suggest that competitive multiple-choice questions can trigger beneficial retrieval processes for both tested and related information, and the results have implications for the effective use of multiple-choice tests as tools for learning.
On the new method for the control of discrete nonlinear dynamic systems using neural networks.
Deng, Hua; Li, Han-Xiong
2006-03-01
This correspondence points out an incorrect statement in Adetona et al., 2000, and Adetona et al., 2004, about the application of the proposed control law to nonminimum phase systems. A counterexample shows the limitations of the control law, and its control capability for nonminimum phase systems is explained.
Inserting Mastered Targets during Error Correction When Teaching Skills to Children with Autism
ERIC Educational Resources Information Center
Plaisance, Lauren; Lerman, Dorothea C.; Laudont, Courtney; Wu, Wai-Ling
2016-01-01
Research has identified a variety of effective approaches for responding to errors during discrete-trial training. In one commonly used method, the therapist delivers a prompt contingent on the occurrence of an incorrect response and then re-presents the trial so that the learner has an opportunity to perform the correct response independently.…
Spotting Incorrect Rules in Signed-Number Arithmetic by the Individual Consistency Index.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
Criterion-referenced testing is an important area in the theory and practice of educational measurement. This study demonstrated that even these tests must be closely examined for construct validity. The dimensionality of a dataset will be affected by the examinee's cognitive processes as well as by the nature of the content domain. The methods of…
15 CFR 16.4 - Finding of need to establish a specification for labeling a consumer product.
Code of Federal Regulations, 2010 CFR
2010-01-01
... difficulty experienced by consumers in making informed purchase decisions because of a lack of knowledge... to consumers as a result of an incorrect decision based on an inadequate understanding of the... responding to paragraph (b)(6) of this section, that such test methods are suitable for making objective...
High School Teachers' Perceptions of the Integration of Instructional Technology in the Classroom
ERIC Educational Resources Information Center
Hertzler, Karen S.
2010-01-01
Many state technology standards, goals, and objectives affirm technology will improve student progress. Regardless of the claim, the statement that "teachers are good or bad, not because they are made of meat and bones or electronic circuits, but because they apply correctly or incorrectly teaching methods that are or are not relevant to the…
Use of genetic algorithm for the selection of EEG features
NASA Astrophysics Data System (ADS)
Asvestas, P.; Korda, A.; Kostopoulos, S.; Karanasiou, I.; Ouzounoglou, A.; Sidiropoulos, K.; Ventouras, E.; Matsopoulos, G.
2015-09-01
Genetic Algorithm (GA) is a popular optimization technique that can detect the global optimum of a multivariable function containing several local optima. GA has been widely used in the field of biomedical informatics, especially in the context of designing decision support systems that classify biomedical signals or images into classes of interest. The aim of this paper is to present a methodology, based on GA, for the selection of the optimal subset of features that can be used for the efficient classification of Event Related Potentials (ERPs), which are recorded during the observation of correct or incorrect actions. In our experiment, ERP recordings were acquired from sixteen (16) healthy volunteers who observed correct or incorrect actions of other subjects. The brain electrical activity was recorded at 47 locations on the scalp. The GA was formulated as a combinatorial optimizer for the selection of the combination of electrodes that maximizes the performance of the Fuzzy C Means (FCM) classification algorithm. In particular, during the evolution of the GA, for each candidate combination of electrodes, the well-known (Σ, Φ, Ω) features were calculated and were evaluated by means of the FCM method. The proposed methodology provided a combination of 8 electrodes, with classification accuracy 93.8%. Thus, GA can be the basis for the selection of features that discriminate ERP recordings of observations of correct or incorrect actions.
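The GA-as-combinatorial-selector structure described above can be sketched compactly. The (Σ, Φ, Ω) features and the Fuzzy C Means classifier are not reproduced here; a toy class-separability score on synthetic data stands in for the FCM-based fitness, so everything except the GA skeleton (binary electrode masks, selection, crossover, mutation) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in data: 40 recordings x 10 "electrodes"; the first
# 3 electrodes carry the class signal (correct vs. incorrect action).
labels = np.repeat([0, 1], 20)
X = rng.standard_normal((40, 10))
X[labels == 1, :3] += 2.0

def fitness(mask):
    """Toy separability score standing in for FCM classification
    accuracy: distance between class means over the selected
    electrodes, with a small penalty per electrode used."""
    if not mask.any():
        return -np.inf
    d = np.linalg.norm(X[labels == 0][:, mask].mean(0) -
                       X[labels == 1][:, mask].mean(0))
    return d - 0.05 * mask.sum()

def ga_select(n_features=10, pop_size=30, generations=40):
    pop = rng.integers(0, 2, (pop_size, n_features)).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        pop = pop[np.argsort(scores)[::-1]]
        elite = pop[: pop_size // 2]              # truncation selection
        children = elite.copy()
        cuts = rng.integers(1, n_features, len(children))
        for i, c in enumerate(cuts):              # one-point crossover
            partner = elite[rng.integers(len(elite))]
            children[i, c:] = partner[c:]
        children ^= rng.random(children.shape) < 0.05  # bit-flip mutation
        pop = np.vstack([elite, children])        # elitism: parents kept
    scores = np.array([fitness(m) for m in pop])
    return pop[np.argmax(scores)]

best_mask = ga_select()
```

Because the elite survive unmutated each generation, the best fitness never decreases, which is what lets a short run converge on a small discriminative electrode subset.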
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, Wenjun; Subotnik, Joseph E., E-mail: subotnik@sas.upenn.edu
2014-05-28
In this article, we consider the intrinsic entropy of Tully's fewest switches surface hopping (FSSH) algorithm (as estimated by the impurity of the density matrix) [J. Chem. Phys. 93, 1061 (1990)]. We show that, even for a closed system, the total impurity of a FSSH calculation increases in time (rather than stays constant). This apparent failure of the FSSH algorithm can be traced back to an incorrect, approximate treatment of the electronic coherence between wavepackets moving along different potential energy surfaces. This incorrect treatment of electronic coherence also prevents the FSSH algorithm from correctly describing wavepacket recoherences (which is a well established limitation of the FSSH method). Nevertheless, despite these limitations, the FSSH algorithm often predicts accurate observables because the electronic coherence density is modulated by a phase factor which varies rapidly in phase space and which often integrates to almost zero. Adding “decoherence” events on top of a FSSH calculation completely destroys the incorrect FSSH electronic coherence and effectively sets the Poincaré recurrence time for wavepacket recoherence to infinity; this modification usually increases FSSH accuracy (assuming there are no recoherences) while also offering long-time stability for trajectories. In practice, we show that introducing “decoherence” events does not change the total FSSH impurity significantly, but does lead to more accurate evaluations of the impurity of the electronic subsystem.
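The impurity measure used to quantify this entropy growth is simply 1 - Tr(ρ²). A minimal sketch with toy 2-state density matrices (illustrative values, not FSSH output):

```python
import numpy as np

def impurity(rho):
    """Impurity of a density matrix: 1 - Tr(rho^2).
    Zero for a pure state; positive for a mixed state."""
    return 1.0 - np.trace(rho @ rho).real

# Pure state |0>: impurity 0.
pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])

# Equal statistical mixture of |0> and |1>: maximally mixed, impurity 1/2.
mixed = np.array([[0.5, 0.0],
                  [0.0, 0.5]])

# Coherent superposition (|0> + |1>)/sqrt(2): same populations as the
# mixture, but the off-diagonal coherence keeps it pure.
coherent = np.array([[0.5, 0.5],
                     [0.5, 0.5]])
```

The third matrix shows the role of the off-diagonal coherence: destroying it, as "decoherence" events do, turns a pure superposition (impurity 0) into a mixed state (impurity 1/2).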
Knowledge of pediatricians regarding physical activity in childhood and adolescence
Gordia, Alex Pinheiro; de Quadros, Teresa Maria Bianchini; Silva, Luciana Rodrigues; dos Santos, Gilton Marques
2015-01-01
Objective: To investigate the knowledge and guidance given by pediatricians regarding physical activity in childhood and adolescence. Methods: A cross-sectional study involving a convenience sample of pediatricians (n=210) who participated in a national pediatrics congress in 2013. Sociodemographic and professional data and data regarding habitual physical activity and pediatricians’ knowledge and instructions for young people regarding physical activity were collected using a questionnaire. Absolute and relative frequencies and means and standard deviations were calculated. Results: Most pediatricians were females, had graduated from medical school more than 15 years ago, and had residency in pediatrics. More than 70% of the participants reported to include physical activity guidance in their prescriptions. On the other hand, approximately two-thirds of the pediatricians incorrectly reported that children should not work out and less than 15% answered the question about physical activity barriers correctly. With respect to the two questions about physical activity to tackle obesity, incorrect answers were marked by more than 50% of the pediatricians. Most participants incorrectly reported that 30 min should be the minimum daily time of physical activity in young people. Less than 40% of the pediatricians correctly indicated the maximum time young people should spend in front of a screen. Conclusions: In general, the pediatricians reported that they recommend physical activity to their young patients, but specific knowledge of this topic was limited. Programs providing adequate information are needed. PMID:26298654
Strategies for Efficient Computation of the Expected Value of Partial Perfect Information
Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.
2014-01-01
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
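The nested Monte Carlo estimator that the article identifies as computationally intensive can be sketched as follows, for a hypothetical two-decision net-benefit model with independent normal parameters (all numbers illustrative, not from the malaria example):

```python
import numpy as np

rng = np.random.default_rng(1)

def net_benefit(d, theta1, theta2):
    """Toy net benefit for two decisions (hypothetical model): d=0 is
    the status quo; d=1 adds uncertain benefit theta1 at uncertain
    cost theta2."""
    return np.where(d == 0, 0.0, theta1 - theta2)

def evppi_nested(n_outer=1000, n_inner=1000):
    """Nested Monte Carlo EVPPI for theta1 (parameters independent)."""
    # Baseline: expected net benefit of the best decision made now.
    t1 = rng.normal(1.0, 2.0, n_outer * n_inner)
    t2 = rng.normal(0.5, 1.0, n_outer * n_inner)
    baseline = max(net_benefit(0, t1, t2).mean(),
                   net_benefit(1, t1, t2).mean())
    # Outer loop over theta1 draws; inner expectation over theta2,
    # then the best decision is picked conditional on theta1.
    inner_best = np.empty(n_outer)
    for i in range(n_outer):
        theta1 = rng.normal(1.0, 2.0)
        theta2 = rng.normal(0.5, 1.0, n_inner)
        inner_best[i] = max(net_benefit(0, theta1, theta2).mean(),
                            net_benefit(1, theta1, theta2).mean())
    return inner_best.mean() - baseline

evppi = evppi_nested()
```

With too few inner samples, the max over noisy inner means tends to bias the estimate upward, which is the sampling bias the authors warn about; their reparameterization and spline methods avoid the inner loop entirely.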
Austin, Peter C
2014-03-30
Propensity score methods are increasingly being used to estimate causal treatment effects in observational studies. In medical and epidemiological studies, outcomes are frequently time-to-event in nature. Propensity-score methods are often applied incorrectly when estimating the effect of treatment on time-to-event outcomes. This article describes how two different propensity score methods (matching and inverse probability of treatment weighting) can be used to estimate the measures of effect that are frequently reported in randomized controlled trials: (i) marginal survival curves, which describe survival in the population if all subjects were treated or if all subjects were untreated; and (ii) marginal hazard ratios. The use of these propensity score methods allows one to replicate the measures of effect that are commonly reported in randomized controlled trials with time-to-event outcomes: both absolute and relative reductions in the probability of an event occurring can be determined. We also provide guidance on variable selection for the propensity score model, highlight methods for assessing the balance of baseline covariates between treated and untreated subjects, and describe the implementation of a sensitivity analysis to assess the effect of unmeasured confounding variables on the estimated treatment effect when outcomes are time-to-event in nature. The methods in the paper are illustrated by estimating the effect of discharge statin prescribing on the risk of death in a sample of patients hospitalized with acute myocardial infarction. In this tutorial article, we describe and illustrate all the steps necessary to conduct a comprehensive analysis of the effect of treatment on time-to-event outcomes. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
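A hedged sketch of the weighting method on simulated data with a single confounder: the propensity score is taken as known rather than fitted, and a binary event stands in for the time-to-event outcome, so the survival curves, balance diagnostics, and sensitivity analysis described in the tutorial are all omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Hypothetical cohort: one confounder x raises both the chance of
# treatment and the risk of the event; treatment lowers the risk.
x = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-x))                     # true propensity score
treated = rng.random(n) < p_treat
p_event = 1.0 / (1.0 + np.exp(-(x - 1.0 * treated)))   # event model
event = rng.random(n) < p_event

# Inverse probability of treatment weights: 1/e(x) for the treated,
# 1/(1 - e(x)) for the untreated, creating a pseudo-population in
# which treatment is independent of x.
w = np.where(treated, 1.0 / p_treat, 1.0 / (1.0 - p_treat))

def weighted_risk(mask):
    return np.average(event[mask], weights=w[mask])

naive_diff = event[treated].mean() - event[~treated].mean()
iptw_diff = weighted_risk(treated) - weighted_risk(~treated)
```

The naive risk difference is attenuated by confounding; the weighted contrast recovers the marginal (population-level) effect that a randomized comparison would estimate, which is the "absolute reduction" measure the article emphasizes.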
System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)
2015-01-01
A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and make a prediction of when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Field, Ella Suzanne; Bellum, John Curtis; Kletecka, Damon E.
When an optical coating is damaged, deposited incorrectly, or is otherwise unsuitable, the conventional method to restore the optic often entails repolishing the optic surface, which can incur a large cost and long lead time. We propose three alternative options to repolishing, including (i) burying the unsuitable coating under another optical coating, (ii) using ion milling to etch the unsuitable coating completely from the optic surface, and then recoating the optic, and (iii) using ion milling to etch through a number of unsuitable layers, leaving the rest of the coating intact, and then recoating the layers that were etched. Repairs were made on test optics with dielectric mirror coatings according to the above three options. The mirror coatings to be repaired were quarter wave stacks of HfO2 and SiO2 layers for high reflection at 1054 nm at 45° incidence in P-polarization. One of the coating layers was purposely deposited incorrectly as Hf metal instead of HfO2 to evaluate the ability of each repair method to restore the coating’s high laser-induced damage threshold (LIDT) of 64.0 J/cm2. Finally, the repaired coating with the highest resistance to laser-induced damage was achieved using repair method (ii) with an LIDT of 49.0 – 61.0 J/cm2.
Richter, Jacob T.; Sloss, Brian L.; Isermann, Daniel A.
2016-01-01
Previous research has generally ignored the potential effects of spawning habitat availability and quality on recruitment of Walleye Sander vitreus, largely because information on spawning habitat is lacking for many lakes. Furthermore, traditional transect-based methods used to describe habitat are time and labor intensive. Our objectives were to determine if side-scan sonar could be used to accurately classify Walleye spawning habitat in the nearshore littoral zone and provide lakewide estimates of spawning habitat availability similar to estimates obtained from a transect–quadrat-based method. Based on assessments completed on 16 northern Wisconsin lakes, interpretation of side-scan sonar images resulted in correct identification of substrate size-class for 93% (177 of 191) of selected locations and all incorrect classifications were within ± 1 class of the correct substrate size-class. Gravel, cobble, and rubble substrates were incorrectly identified from side-scan images in only two instances (1% misclassification), suggesting that side-scan sonar can be used to accurately identify preferred Walleye spawning substrates. Additionally, we detected no significant differences in estimates of lakewide littoral zone substrate compositions estimated using side-scan sonar and a traditional transect–quadrat-based method. Our results indicate that side-scan sonar offers a practical, accurate, and efficient technique for assessing substrate composition and quantifying potential Walleye spawning habitat in the nearshore littoral zone of north temperate lakes.
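The reported agreement figures (93% exact, all errors within ± 1 class) correspond to simple ordinal agreement statistics, sketched here with hypothetical substrate class codes (e.g. 0 = silt ... 5 = boulder; the coding is an assumption, not the authors' scheme):

```python
def classification_summary(true_classes, predicted_classes):
    """Exact and within-one-class agreement rates for ordinal
    substrate size-classes."""
    pairs = list(zip(true_classes, predicted_classes))
    exact = sum(t == p for t, p in pairs) / len(pairs)
    within_one = sum(abs(t - p) <= 1 for t, p in pairs) / len(pairs)
    return exact, within_one

# Hypothetical spot checks: one cobble (3) location read as rubble (4).
exact_rate, within_one_rate = classification_summary([2, 3, 3, 4],
                                                     [2, 3, 4, 4])
```

Reporting both rates captures the paper's key point: even misclassifications stayed within one ordinal class of the truth.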
In/Out Status Monitoring in Mobile Asset Tracking with Wireless Sensor Networks
Kim, Kwangsoo; Chung, Chin-Wan
2010-01-01
A mobile asset with a sensor node in a mobile asset tracking system moves around a monitoring area, leaves it, and then returns to the region repeatedly. The system monitors the in/out status of the mobile asset. Due to the continuous movement of the mobile asset, the system may generate an error for the in/out status of the mobile asset. When the mobile asset is inside the region, the system might determine that it is outside, or vice versa. In this paper, we propose a method to detect and correct the incorrect in/out status of the mobile asset. To solve this problem, our approach uses data about the connection state transition and the battery lifetime of the mobile node attached to the mobile asset. The connection state transition is used to classify the mobile node as normal or abnormal. The battery lifetime is used to predict a valid working period for the mobile node. We evaluate our method using real data generated by a medical asset tracking system. The experimental results show that our method, by using the estimated battery lifetime or the invalid connection state, can detect and correct most cases of incorrect in/out statuses generated by the conventional approach. PMID:22319268
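The two corrections described, invalid connection-state transitions and a predicted battery lifetime, might look roughly like the sketch below. The rule details and the data layout are assumptions for illustration, not the authors' implementation.

```python
def smooth_flaps(statuses):
    """Treat a single-sample in/out flip (in, out, in) as an invalid
    connection-state transition and restore the surrounding status."""
    out = list(statuses)
    for i in range(1, len(out) - 1):
        if out[i - 1] == out[i + 1] != out[i]:
            out[i] = out[i - 1]
    return out

def correct_status(reported_inside, connected, hours, battery_life):
    """Per-sample battery rule: once the node's predicted battery life
    is exceeded, loss of connection no longer implies the asset left
    the region, so the last trusted status is carried forward."""
    corrected = []
    last = reported_inside[0]
    for inside, conn, t in zip(reported_inside, connected, hours):
        if not conn and t > battery_life:
            corrected.append(last)   # dead node: keep prior belief
        else:
            corrected.append(inside)
            last = inside
    return smooth_flaps(corrected)

# Hypothetical trace: a one-sample "out" glitch at hour 3, then a dead
# battery (disconnected past the 48 h predicted lifetime) at hours 50-51.
reported = [True, True, False, True, True, False, False]
connected = [True, True, True, True, True, False, False]
hours = [1, 2, 3, 4, 5, 50, 51]
corrected = correct_status(reported, connected, hours, battery_life=48)
```

Both error sources in the trace are repaired: the glitch is smoothed and the post-battery disconnections no longer register as exits.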
StaRProtein, A Web Server for Prediction of the Stability of Repeat Proteins
Xu, Yongtao; Zhou, Xu; Huang, Meilan
2015-01-01
Repeat proteins have become increasingly important due to their capability to bind to almost any protein and their potential as an alternative therapy to monoclonal antibodies. In the past decade repeat proteins have been designed to mediate specific protein-protein interactions. The tetratricopeptide and ankyrin repeat proteins are two classes of helical repeat proteins that form different binding pockets to accommodate various partners. It is important to understand the factors that define folding and stability of repeat proteins in order to prioritize the most stable designed repeat proteins for further exploration of their potential binding affinities. Here we developed distance-dependent statistical potentials using two classes of alpha-helical repeat proteins, tetratricopeptide and ankyrin repeat proteins respectively, and evaluated their efficiency in predicting the stability of repeat proteins. We demonstrated that the repeat-specific statistical potentials based on these two classes of repeat proteins showed superior accuracy compared with non-specific statistical potentials in 1) discriminating correct vs. incorrect models and 2) ranking the stability of designed repeat proteins. In particular, the statistical scores correlate closely with the equilibrium unfolding free energies of repeat proteins and therefore would serve as a novel tool for quickly prioritizing the designed repeat proteins with high stability. The StaRProtein web server was developed for predicting the stability of repeat proteins. PMID:25807112
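Distance-dependent statistical potentials of this kind are typically derived via the inverse Boltzmann relation, E(d) = -ln(P_obs(d)/P_ref(d)). A generic sketch with hypothetical bin counts (not the paper's repeat-specific statistics):

```python
import numpy as np

def statistical_potential(observed_counts, reference_counts, pseudocount=1.0):
    """Distance-dependent statistical potential by the inverse Boltzmann
    relation: E(d) = -ln(P_obs(d) / P_ref(d)), one value per distance
    bin. Pseudocounts keep sparsely populated bins finite."""
    p_obs = (observed_counts + pseudocount) / (observed_counts + pseudocount).sum()
    p_ref = (reference_counts + pseudocount) / (reference_counts + pseudocount).sum()
    return -np.log(p_obs / p_ref)

def score_model(pair_distances, potential, bin_edges):
    """Score a structural model by summing the potential over its
    pairwise distances; lower totals indicate more native-like models."""
    bins = np.clip(np.digitize(pair_distances, bin_edges) - 1,
                   0, len(potential) - 1)
    return potential[bins].sum()

# Hypothetical counts in 4 distance bins (3-11 Angstroms): contacts near
# 5-7 A are over-represented in the native set relative to the reference.
observed = np.array([10.0, 80.0, 40.0, 20.0])
reference = np.array([30.0, 30.0, 45.0, 45.0])
edges = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
potential = statistical_potential(observed, reference)
```

Over-represented contact distances get negative (favorable) energies, so a model whose distances fall in those bins scores lower, which is how such potentials rank decoys and designed variants.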
The voice of conscience: neural bases of interpersonal guilt and compensation
Yu, Hongbo; Hu, Jie; Hu, Li
2014-01-01
People feel bad for inflicting harm upon others; this emotional state is termed interpersonal guilt. In this study, the participant played multiple rounds of a dot-estimation task with anonymous partners while undergoing fMRI. The partner would receive pain stimulation if the partner or the participant or both responded incorrectly; the participant was then given the option to intervene and bear a proportion of pain for the partner. The level of pain voluntarily taken and the activations in anterior middle cingulate cortex (aMCC) and bilateral anterior insula (AI) were higher when the participant was solely responsible for the stimulation (Self_Incorrect) than when both committed an error (Both_Incorrect). Moreover, the gray matter volume in the aMCC predicted the individual’s compensation behavior, measured as the difference between the level of pain taken in the Self_Incorrect and Both_Incorrect conditions. Furthermore, a mediation pathway analysis revealed that activation in a midbrain region mediated the relationship between aMCC activation and the individual’s tendency to compensate. These results demonstrate that the aMCC and the midbrain nucleus not only play an important role in experiencing interpersonal guilt, but also contribute to compensation behavior. PMID:23893848
ERP correlates of German Sign Language processing in deaf native signers.
Hänel-Faulhaber, Barbara; Skotara, Nils; Kügow, Monique; Salden, Uta; Bottari, Davide; Röder, Brigitte
2014-05-10
The present study investigated the neural correlates of sign language processing of Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) from their Deaf parents as their first language. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes. Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity. ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visual-spatial modality of sign language.
Trickey, Amber W; Crosby, Moira E; Singh, Monika; Dort, Jonathan M
2014-12-01
The application of evidence-based medicine to patient care requires unique skills of the physician. Advancing residents' abilities to accurately evaluate the quality of evidence is built on understanding of fundamental research concepts. The American Board of Surgery In-Training Examination (ABSITE) provides a relevant measure of surgical residents' knowledge of research design and statistics. We implemented a research education curriculum in an independent academic medical center general surgery residency program, and assessed the effect on ABSITE scores. The curriculum consisted of five 1-hour monthly research and statistics lectures. The lectures were presented before the 2012 and 2013 examinations. Forty residents completing ABSITE examinations from 2007 to 2013 were included in the study. Two investigators independently identified research-related item topics from examination summary reports. Correct and incorrect responses were compared precurriculum and postcurriculum. Regression models were calculated to estimate improvement in postcurriculum scores, adjusted for individuals' scores over time and postgraduate year level. Residents demonstrated significant improvement in postcurriculum examination scores for research and statistics items. Correct responses increased 27% (P < .001). Residents were 5 times more likely to achieve a perfect score on research and statistics items postcurriculum (P < .001). Residents at all levels demonstrated improved research and statistics scores after receiving the curriculum. Because the ABSITE includes a wide spectrum of research topics, sustained improvements suggest a genuine level of understanding that will promote lifelong evaluation and clinical application of the surgical literature.
Fayyazi Bordbar, Mohammad Reza; Abdollahian, Ebrahim; Samadi, Roya; Dolatabadi, Hamid
2014-11-01
This study was conducted to determine the frequency of anabolic-androgenic steroid consumption among male university students, their awareness and attitudes, and the role of sports activities. The present descriptive study was conducted on 271 volunteers in 2008. The data, collected by self-report questionnaires, were analyzed using descriptive and inferential statistics. The prevalence of consumption was 3.3%, and it was significantly higher in those with a history of bodybuilding or athletic performance. Overall awareness was low, and attitudes were overly optimistic. It appears that lack of awareness, incorrect attitudes, and a history of athletic performance increase the risk of consumption.
Vadnais, Sarah A; Kibby, Michelle Y; Jagger-Rickels, Audreyana C
2018-01-01
We identified statistical predictors of four processing speed (PS) components in a sample of 151 children with and without attention-deficit/hyperactivity disorder (ADHD). Performance on perceptual speed was predicted by visual attention/short-term memory, whereas incidental learning/psychomotor speed was predicted by verbal working memory. Rapid naming was predictive of each PS component assessed, and inhibition predicted all but one task, suggesting a shared need to identify/retrieve stimuli rapidly and inhibit incorrect responding across PS components. Hence, we found both shared and unique predictors of perceptual, cognitive, and output speed, suggesting more specific terminology should be used in future research on PS in ADHD.
A Bernoulli Formulation of the Land-Use Portfolio Model
Champion, Richard A.
2008-01-01
Decision making for natural-hazards mitigation can be sketched as knowledge available in advance (a priori), knowledge available later (a posteriori), and how consequences of the mitigation decision might be viewed once future outcomes are known. Two outcomes - mitigating for a hazard event that will occur, and not mitigating for a hazard event that will not occur - can be considered narrowly correct. Two alternative outcomes - mitigating for a hazard event that will not occur, and not mitigating for a hazard event that will occur - can be considered narrowly incorrect. The dilemma facing the decision maker is that mitigation choices must be made before the event, and often must be made with imperfect statistical techniques and imperfect data.
Significance testing as perverse probabilistic reasoning
2011-01-01
Truth claims in the medical literature rely heavily on statistical significance testing. Unfortunately, most physicians misunderstand the underlying probabilistic logic of significance tests and consequently often misinterpret their results. This near-universal misunderstanding is highlighted by means of a simple quiz which we administered to 246 physicians at two major academic hospitals, on which the proportion of incorrect responses exceeded 90%. A solid understanding of the fundamental concepts of probability theory is becoming essential to the rational interpretation of medical information. This essay provides a technically sound review of these concepts that is accessible to a medical audience. We also briefly review the debate in the cognitive sciences regarding physicians' aptitude for probabilistic inference. PMID:21356064
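The misinterpretation this abstract describes usually amounts to confusing P(data | null hypothesis) with P(hypothesis | data). A minimal sketch of the Bayesian calculation that resolves the confusion; the prior, power, and alpha values are illustrative assumptions, not figures from the study:

```python
# Hypothetical illustration of the base-rate reasoning such quizzes probe:
# the probability that a "significant" finding reflects a true effect
# depends on the prior, not only on the significance threshold (alpha).
def posterior_true_positive(prior, power, alpha):
    """P(hypothesis true | test significant) via Bayes' rule."""
    true_pos = prior * power          # truly real effects detected
    false_pos = (1 - prior) * alpha   # null effects crossing alpha
    return true_pos / (true_pos + false_pos)

# With only 10% of tested hypotheses true, 80% power, and alpha = 0.05,
# a significant result is far from certain proof:
p = posterior_true_positive(prior=0.10, power=0.80, alpha=0.05)
print(round(p, 2))  # → 0.64
```

Even with respectable power, roughly a third of "significant" results in this assumed scenario would be false positives, which is the probabilistic point a p-value alone cannot convey.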
Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff
2014-01-01
Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
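The classification task the MOOC subjects faced can be reproduced in a few lines. The sketch below is illustrative and not the authors' code: it generates a noisy linear relationship and computes a two-sided permutation p-value for the observed correlation using only the standard library (the slope, noise level, sample size, and seed are assumed values):

```python
import random

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def permutation_p_value(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the observed correlation."""
    rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    ys = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)  # break any x-y association
        if abs(pearson_r(x, ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# A noisy linear relationship (assumed slope and noise level, n = 50):
rng = random.Random(42)
x = [i / 50 for i in range(50)]
y = [v + rng.gauss(0, 0.2) for v in x]
print(permutation_p_value(x, y) < 0.05)
```

The permutation approach makes the meaning of the p-value concrete: it is the fraction of shuffled (association-free) datasets whose correlation is at least as extreme as the one observed, exactly the quantity subjects had to judge by eye.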
Genetic methods improve accuracy of gender determination in beaver
Williams, C.L.; Breck, S.W.; Baker, B.W.
2004-01-01
Gender identification of sexually monomorphic mammals can be difficult. We used analysis of zinc-finger protein (Zfx and Zfy) DNA regions to determine gender of 96 beavers (Castor canadensis) from 3 areas and used these results to verify gender determined in the field. Gender was correctly determined for 86 (89.6%) beavers. Incorrect assignments were not attributed to errors in any one age or sex class. Although methods that can be used in the field (such as morphological methods) can provide reasonably accurate gender assignments in beavers, the genetic method might be preferred in certain situations.
SU-E-T-484: In Vivo Dosimetry Tolerances in External Beam Fast Neutron Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L; Gopan, O
Purpose: Optical stimulated luminescence (OSL) dosimetry with Landauer Al2O3:C nanodots was developed at our institution as a passive in vivo dosimetry (IVD) system for patients treated with fast neutron therapy. The purpose of this study was to establish clinically relevant tolerance limits for detecting treatment errors requiring further investigation. Methods: Tolerance levels were estimated by conducting a series of IVD expected dose calculations for square field sizes ranging between 2.8 and 28.8 cm. For each field size evaluated, doses were calculated for open and internal wedged fields with angles of 30°, 45°, or 60°. Theoretical errors were computed for variations of incorrect beam configurations. Dose errors, defined as the percent difference from the expected dose calculation, were measured with groups of three nanodots placed in a 30 x 30 cm solid water phantom, at beam isocenter (150 cm SAD, 1.7 cm Dmax). The tolerances were applied to IVD patient measurements. Results: The overall accuracy of the nanodot measurements is 2–3% for open fields. Measurement errors agreed with calculated errors to within 3%. Theoretical estimates of dosimetric errors showed that IVD measurements with OSL nanodots will detect the absence of an internal wedge or a wrong wedge angle. Incorrect nanodot placement on a wedged field is more likely to be caught if the offset is in the direction of the “toe” of the wedge, where the dose difference is about 12%. Errors caused by an incorrect flattening filter size produced a 2% measurement error that is not detectable by IVD measurement alone. Conclusion: IVD with nanodots will detect treatment errors associated with the incorrect implementation of the internal wedge. The results of this study will streamline the physicists’ investigations in determining the root cause of an IVD reading that is out of normally accepted tolerances.
Remans, Tony; Keunen, Els; Bex, Geert Jan; Smeets, Karen; Vangronsveld, Jaco; Cuypers, Ann
2014-10-01
Reverse transcription-quantitative PCR (RT-qPCR) has been widely adopted to measure differences in mRNA levels; however, biological and technical variation strongly affects the accuracy of the reported differences. RT-qPCR specialists have warned that, unless researchers minimize this variability, they may report inaccurate differences and draw incorrect biological conclusions. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines describe procedures for conducting and reporting RT-qPCR experiments. The MIQE guidelines enable others to judge the reliability of reported results; however, a recent literature survey found low adherence to these guidelines. Additionally, even experiments that use appropriate procedures remain subject to individual variation that statistical methods cannot correct. For example, since ideal reference genes do not exist, the widely used method of normalizing RT-qPCR data to reference genes generates background noise that affects the accuracy of measured changes in mRNA levels. However, current RT-qPCR data reporting styles ignore this source of variation. In this commentary, we direct researchers to appropriate procedures, outline a method to present the remaining uncertainty in data accuracy, and propose an intuitive way to select reference genes to minimize uncertainty. Reporting the uncertainty in data accuracy also serves for quality assessment, enabling researchers and peer reviewers to confidently evaluate the reliability of gene expression data. © 2014 American Society of Plant Biologists. All rights reserved.
Notes on power of normality tests of error terms in regression models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. Normally distributed stochastic errors are therefore necessary for inferences not to be misleading, which explains the necessity and importance of robust tests of normality. The aim of this contribution is to discuss normality testing of error terms in regression models. We introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
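To make the point concrete, the sketch below fits a simple regression by closed-form least squares and applies the Jarque-Bera statistic, one classical normality test of the kind discussed, to the residuals. The data, seed, and the chi-square(2) 5% critical value of 5.99 are illustrative assumptions, not material from the paper:

```python
import random

def ols_residuals(x, y):
    """Residuals from a simple linear regression fit by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    return [b - (alpha + beta * a) for a, b in zip(x, y)]

def jarque_bera(resid):
    """Jarque-Bera statistic; large values signal non-normal errors."""
    n = len(resid)
    m = sum(resid) / n
    s2 = sum((r - m) ** 2 for r in resid) / n
    skew = (sum((r - m) ** 3 for r in resid) / n) / s2 ** 1.5
    kurt = (sum((r - m) ** 4 for r in resid) / n) / s2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

rng = random.Random(1)
x = [i / 100 for i in range(200)]
y_normal = [2 + 3 * a + rng.gauss(0, 1) for a in x]       # Gaussian errors
y_skewed = [2 + 3 * a + rng.expovariate(1.0) for a in x]  # skewed errors

# Compare each statistic with the chi-square(2) 5% critical value, 5.99;
# the skewed errors typically exceed it by a wide margin:
print(round(jarque_bera(ols_residuals(x, y_normal)), 2))
print(round(jarque_bera(ols_residuals(x, y_skewed)), 2))
```

A t-test on the slope of the skewed-error fit would rest on an assumption the residuals visibly violate, which is exactly the failure mode the abstract warns about.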
On the Mathematical Consequences of Binning Spike Trains.
Cessac, Bruno; Le Ny, Arnaud; Löcherbach, Eva
2017-01-01
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process that is no longer Markov but is instead a variable-length Markov chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains. In particular, we introduce the DLR framework as a natural setting to mathematically formalize anticipation, that is, to tell "how good" our nervous system is at making predictions. In a probabilistic sense, this corresponds to conditioning a process on its future, and we discuss how binning may affect our conclusions on this ability. We finally comment on the possible consequences of binning in the detection of spurious phase transitions or in the detection of incorrect evidence of criticality.
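The binning operation itself, stripped of the Gibbs-measure machinery, is simple. The sketch below (an assumed 0/1 encoding with "spike anywhere in the window" binning) shows how within-bin timing is discarded, which is the information loss behind the memory effects described above:

```python
def bin_spike_train(spikes, bin_size):
    """Collapse a 0/1 spike train into bins: 1 if any spike in the window."""
    return [1 if any(spikes[i:i + bin_size]) else 0
            for i in range(0, len(spikes), bin_size)]

# Two spikes close together and one isolated spike become indistinguishable
# single-bin events once binned:
train = [0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1]
print(bin_spike_train(train, 3))  # → [1, 0, 1, 1]
```

Note that the binned sequence cannot be inverted back to the original train; many distinct spike patterns map to the same raster, which is why the binned process can acquire long-range dependence even when the original is Markov.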
Looking and touching: what extant approaches reveal about the structure of early word knowledge.
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2015-09-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants' responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. © 2014 The Authors Developmental Science Published by John Wiley & Sons Ltd.
The 100-year flood seems to be changing. Can we really tell?
NASA Astrophysics Data System (ADS)
Ceres, R. L., Jr.; Forest, C. E.; Keller, K.
2017-12-01
Widespread flooding from Hurricane Harvey greatly exceeded the Federal Emergency Management Agency's 100-year flood levels. In the US, this flood level is often used as an important line of demarcation where areas above this level are considered safe, while areas below the line are at risk and require additional flood risk mitigation. In the wake of Harvey's damage, the US media has highlighted at least two important questions. First, has the 100-year flood level changed? Second, is the 100-year flood level a good metric for determining flood risk? To address the first question, we use an Observation System Simulation Experiment of storm surge flood levels and find that gradual changes to the 100-year storm surge level may not be reliably detected over the long lifespans expected of major flood risk mitigation strategies. Additionally, we find that common extreme value analysis models lead to biased results and additional uncertainty when incorrect assumptions are used for the underlying statistical model. These incorrect assumptions can lead to examples of negative learning. Addressing the second question, these findings further challenge the validity of using simple return levels such as the 100-year flood as a decision tool for assessing flood risk. These results indicate risk management strategies must account for such uncertainties to build resilient and robust planning tools that stakeholders desperately need.
Xiang, Ling; Zhang, Baoqiang; Wang, Baoxi; Jiang, Jun; Zhang, Fenghua; Hu, Zhujing
2016-01-01
A prime-target interference task was used to investigate the effects of cognitive aging on reactive and proactive control after eliminating frequency confounds and feature repetitions from the cognitive control measures. We used distributional analyses to explore the dynamics of the two control functions by distinguishing the strength of incorrect response capture and the efficiency of suppression control. For reactive control, within-trial conflict control and between-trial conflict adaption were analyzed. The statistical analysis showed that there were no reliable between-trial conflict adaption effects for either young or older adults. For within-trial conflict control, the results revealed that older adults showed larger interference effects on mean RT and mean accuracy. Distributional analyses showed that the decline mainly stemmed from inefficient suppression rather than from stronger incorrect responses. For proactive control, older adults showed comparable proactive conflict resolution to young adults on mean RT and mean accuracy. Distributional analyses showed that older adults were as effective as younger adults in adjusting their responses based on congruency proportion information to minimize automatic response capture and actively suppress the direct response activation. The results suggest that older adults were less proficient at suppressing interference after conflict was detected but can anticipate and prevent inference in response to congruency proportion manipulation. These results challenge earlier views that older adults have selective deficits in proactive control but intact reactive control. PMID:27847482
Acupuncture therapy related cardiac injury.
Li, Xue-feng; Wang, Xian
2013-12-01
Cardiac injury is the most serious adverse event in acupuncture therapy. Its causes include needling chest points near the heart; cardiac enlargement and pericardial effusion, which enlarge the heart's projected area on the body surface and reduce the safe depth of needling; and incorrect needling technique at these points. Therefore, acupuncture practitioners must be familiar with the points over the heart's projected area on the chest and with correct needling methods in order to reduce the risk of acupuncture therapy related cardiac injury.
Selecting the "Best" Factor Structure and Moving Measurement Validation Forward: An Illustration.
Schmitt, Thomas A; Sass, Daniel A; Chappelle, Wayne; Thompson, William
2018-04-09
Despite the broad literature base on factor analysis best practices, research seeking to evaluate a measure's psychometric properties frequently fails to consider or follow these recommendations. This leads to incorrect factor structures, numerous and often overly complex competing factor models and, perhaps most harmful, biased model results. Our goal is to demonstrate a practical and actionable process for factor analysis through (a) an overview of six statistical and psychometric issues and approaches to be aware of, investigate, and report when engaging in factor structure validation, along with a flowchart for recommended procedures to understand latent factor structures; (b) demonstrating these issues to provide a summary of the updated Posttraumatic Stress Disorder Checklist (PCL-5) factor models and a rationale for validation; and (c) conducting a comprehensive statistical and psychometric validation of the PCL-5 factor structure to demonstrate all the issues we described earlier. Considering previous research, the PCL-5 was evaluated using a sample of 1,403 U.S. Air Force remotely piloted aircraft operators with high levels of battlefield exposure. Previously proposed PCL-5 factor structures were not supported by the data, but instead a bifactor model is arguably more statistically appropriate.
ERIC Educational Resources Information Center
Clarke, Jason; Prescott, Katherine; Milne, Rebecca
2013-01-01
Background: The cognitive interview (CI) has been shown to increase correct memory recall of a diverse range of participant types, without an increase in the number of incorrect or confabulated details. However, it has rarely been examined for use with adults with intellectual disability. Measures and Method: This study compared the memory recall…
Erratum: ``Infrared Counterparts to Chandra X-Ray Sources in the Antennae'' (ApJ, 658, 319 [2007])
NASA Astrophysics Data System (ADS)
Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.
2007-10-01
In equation (2), we incorrectly labeled one of the variables. Equations (1) and (2) should read: r1 = ax1 + by1 + c (1) and d1 = dx1 + ey1 + f (2). These equations were used only to explain our image frame-tie method, and this change does not affect our results.
ERIC Educational Resources Information Center
Tsaousis, Ioannis; Sideridis, Georgios; Al-Saawi, Fahad
2018-01-01
The aim of the present study was to examine Differential Distractor Functioning (DDF) as a means of improving the quality of a measure through understanding biased responses across groups. A DDF analysis could shed light on the potential sources of construct-irrelevant variance by examining whether the differential selection of incorrect choices…
Andrew T. Hudak; Nicholas L. Crookston; Jeffrey S. Evans; David E. Hall; Michael J. Falkowski
2009-01-01
The authors regret that an error was discovered in the code within the R software package, yaImpute (Crookston & Finley, 2008), which led to incorrect results reported in the above article. The Most Similar Neighbor (MSN) method computes the distance between reference observations and target observations in a projected space defined using canonical correlation...
Impact of pedagogical method on Brazilian dental students' waste management practice.
Victorelli, Gabriela; Flório, Flávia Martão; Ramacciato, Juliana Cama; Motta, Rogério Heládio Lopes; de Souza Fonseca Silva, Almenara
2014-11-01
The purpose of this study was to conduct a qualitative analysis of waste management practices among a group of Brazilian dental students (n=64) before and after implementing two different pedagogical methods: 1) the students attended a two-hour lecture based on World Health Organization standards; and 2) the students applied the lessons learned in an organized group setting aimed toward raising their awareness about socioenvironmental issues related to waste. All eligible students participated, and the students' learning was evaluated through their answers to a series of essay questions, which were quantitatively measured. Afterwards, the impact of the pedagogical approaches was compared by means of qualitative categorization of wastes generated in clinical activities. Waste categorization was performed for a period of eight consecutive days, both before and thirty days after the pedagogical strategies. In the written evaluation, 80 to 90 percent of the students' answers were correct. The qualitative assessment revealed a high frequency of incorrect waste disposal with a significant increase of incorrect disposal inside general and infectious waste containers (p<0.05). Although the students' theoretical learning improved, it was not enough to change behaviors established by cultural values or to encourage the students to adequately segregate and package waste material.
Rios Piedra, Edgar A; Taira, Ricky K; El-Saden, Suzie; Ellingson, Benjamin M; Bui, Alex A T; Hsu, William
2016-02-01
Brain tumor analysis is moving towards volumetric assessment of magnetic resonance imaging (MRI), providing a more precise description of disease progression to better inform clinical decision-making and treatment planning. While a multitude of segmentation approaches exist, inherent variability in the results of these algorithms may incorrectly indicate changes in tumor volume. In this work, we present a systematic approach to characterize variability in tumor boundaries that utilizes equivalence tests as a means to determine whether a tumor volume has significantly changed over time. To demonstrate these concepts, 32 MRI studies from 8 patients were segmented using four different approaches (statistical classifier, region-based, edge-based, knowledge-based) to generate different regions of interest representing tumor extent. We showed that across all studies, the average Dice coefficient for the superset of the different methods was 0.754 (95% confidence interval 0.701-0.808) when compared to a reference standard. We illustrate how variability obtained by different segmentations can be used to identify significant changes in tumor volume between sequential time points. Our study demonstrates that variability is an inherent part of interpreting tumor segmentation results and should be considered as part of the interpretation process.
Symbolic Processing Combined with Model-Based Reasoning
NASA Technical Reports Server (NTRS)
James, Mark
2009-01-01
A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.
NASA Astrophysics Data System (ADS)
Rahim, Kartini Abdul; Kahar, Rosmila Abdul; Khalid, Halimi Mohd.; Salleh, Rohayu Mohd; Hashim, Rathiah
2015-05-01
Recognition of Arabic handwriting and its variants such as Farsi (Persian) and Urdu has received considerable attention in recent years. In contrast to Arabic handwriting, Jawi, a second script for written Malay, has scarcely been studied, with only a few references available. Within the recent transformation of Malaysian education, Special Education is one of the priorities of the Malaysia Blueprint. One of the special needs addressed in Malaysian education is dyslexia; a dyslexic student is considered a student with a learning disability. Concluding that a student is truly dyslexic might be incorrect when the student has been assessed only through the Roman alphabet, without considering assessment via Jawi handwriting. A study was conducted on dyslexic students attending a special class for dyslexia in the Malay language to determine whether they are also dyslexic in Jawi handwriting. The focus of the study was to test copying skills in relation to word reading and writing in the Malay language, with and without dyslexia, in both scripts. A total of 10 dyslexic children and 10 typically developing children were recruited. Statistical analysis suggested that dyslexic students have less difficulty performing Jawi handwriting in the Malay language, a finding to be pursued in future study.
Inverse Optimization: A New Perspective on the Black-Litterman Model
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.
2014-01-01
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
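The weakly informative Bayesian approach described above can be sketched as MAP estimation: a Gaussian prior on each coefficient turns the maximum likelihood fit into a ridge-style penalized fit, which is one source of the parameter stability reported. The code below is a toy illustration with fabricated audit data, not the authors' model (their analysis also included coder and day-of-coding covariates, and a full Bayesian treatment would sample the posterior rather than maximize it):

```python
import math

def map_logistic_fit(X, y, prior_sd=2.5, lr=0.1, steps=3000):
    """MAP coefficients for logistic regression under independent
    Gaussian(0, prior_sd) priors on each coefficient."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        # gradient of the negative log posterior: prior term ...
        grad = [wj / prior_sd ** 2 for wj in w]
        # ... plus the likelihood term (p - y) * x for each episode
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

# Hypothetical audit data: feature vector [intercept, original-DRG flag],
# label 1 = the episode's DRG was revised on audit.
X = [[1, 0], [1, 0], [1, 1], [1, 1], [1, 1], [1, 0]]
y = [0, 0, 1, 1, 0, 1]
w = map_logistic_fit(X, y)
p_flagged = 1 / (1 + math.exp(-(w[0] + w[1])))  # revision prob., flagged DRG
print(round(p_flagged, 2))
```

The prior pulls extreme coefficient estimates toward zero, so predicted revision probabilities are shrunk slightly toward 0.5 relative to maximum likelihood, which is the stabilising effect the study reports.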
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multi-locus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru
2017-01-16
A precise and accurate method for enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). Probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
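For a single dilution, the MPN estimate has a closed form: if s of n inoculated tubes are negative at sample volume v, MPN = -ln(s/n)/v. This is a textbook special case, far simpler than the multi-dilution MPN schemes and the Bayesian comparison used in the study:

```python
import math

# Single-dilution most-probable-number estimate (textbook special case):
# n_negative of n_tubes show no growth at volume_g grams of sample per tube.
def mpn_single(n_tubes, n_negative, volume_g):
    return -math.log(n_negative / n_tubes) / volume_g

# Example: 3 of 10 tubes negative at 1 g per tube
estimate = mpn_single(10, 3, 1.0)  # about 1.2 organisms per gram
```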
Manca, Andrea; Hawkins, Neil; Sculpher, Mark J
2005-05-01
In trial-based cost-effectiveness analysis baseline mean utility values are invariably imbalanced between treatment arms. A patient's baseline utility is likely to be highly correlated with their quality-adjusted life-years (QALYs) over the follow-up period, not least because it typically contributes to the QALY calculation. Therefore, imbalance in baseline utility needs to be accounted for in the estimation of mean differential QALYs, and failure to control for this imbalance can result in a misleading incremental cost-effectiveness ratio. This paper discusses the approaches that have been used in the cost-effectiveness literature to estimate absolute and differential mean QALYs alongside randomised trials, and illustrates the implications of baseline mean utility imbalance for QALY calculation. Using data from a recently conducted trial-based cost-effectiveness study and a micro-simulation exercise, the relative performance of alternative estimators is compared, showing that widely used methods to calculate differential QALYs provide incorrect results in the presence of baseline mean utility imbalance regardless of whether these differences are formally statistically significant. It is demonstrated that multiple regression methods can be usefully applied to generate appropriate estimates of differential mean QALYs and an associated measure of sampling variability, while controlling for differences in baseline mean utility between treatment arms in the trial. Copyright 2004 John Wiley & Sons, Ltd
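The regression adjustment the paper recommends can be sketched on simulated data: a naive difference in mean QALYs absorbs the baseline utility imbalance, while regressing QALYs on treatment and baseline utility recovers the treatment effect. All coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
treat = rng.integers(0, 2, n)
base_u = rng.normal(0.7, 0.1, n) + 0.02 * treat  # baseline utility, imbalanced by arm
qaly = 1.5 * base_u + 0.10 * treat + rng.normal(0, 0.05, n)

# Naive difference in means is biased upward by the baseline imbalance
naive = qaly[treat == 1].mean() - qaly[treat == 0].mean()

# OLS on treatment plus baseline utility isolates the true 0.10 effect
X = np.column_stack([np.ones(n), treat, base_u])
beta = np.linalg.lstsq(X, qaly, rcond=None)[0]
adjusted = beta[1]
```

Here the naive estimator converges to 0.10 + 1.5 × 0.02 = 0.13, overstating the treatment effect regardless of whether the baseline difference is statistically significant, which mirrors the paper's argument.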
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... contained an incorrect hull number. DATES: This correcting amendment is effective March 8, 2010, and is... an amendment to Sec. 706.2 Table Five, an incorrect hull number for the USS PHILIPPINE SEA was...
NASA Astrophysics Data System (ADS)
Landry, Brian R.; Subotnik, Joseph E.
2011-11-01
We evaluate the accuracy of Tully's surface hopping algorithm for the spin-boson model for the case of a small diabatic coupling parameter (V). We calculate the transition rates between diabatic surfaces, and we compare our results to the expected Marcus rates. We show that standard surface hopping yields an incorrect scaling with diabatic coupling (linear in V), which we demonstrate is due to an incorrect treatment of decoherence. By modifying standard surface hopping to include decoherence events, we recover the correct scaling (~V²).
After 400 years, Some of Us Still Get it Wrong: Science Errors on TV (A Personal Experience)
NASA Astrophysics Data System (ADS)
Comins, Neil
2009-10-01
Students harbor misconceptions (deep-seated incorrect beliefs) about our astronomical environment. While some of these beliefs have their origins in faulty reasoning, many others come from external sources, including from teachers and even professional scientists believing and sharing misconceptions, and from media sources. I will relate a horror story of how scientists on a TV show about the Moon in which I appeared presented incorrect information and how the producers also manipulated the science presented by the narrator to provide incorrect information that viewers are likely to incorporate into their own belief systems.
NASA Astrophysics Data System (ADS)
Li, Zifeng
2016-12-01
This paper analyzes the mechanical and mathematical models in "Ritto et al. (2013) [1]". The results are that: (1) the mechanical model is obviously incorrect; (2) the mathematical model is not complete; (3) the differential equation is obviously incorrect; (4) the finite element equation is obviously not discretized from the corresponding mathematical model above, and is obviously incorrect. A mathematical model of dynamics should include the differential equations, the boundary conditions and the initial conditions.
Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane
2017-06-21
Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These different approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of results accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collision by TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles in a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a big loss in accuracy with a little increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed that rivals that of MP-PIC while maintaining a much better accuracy.
Sharpen Your Skills: Mathematics and Science Braille.
ERIC Educational Resources Information Center
Eulert, Von E.; Cohn, Doris
1984-01-01
Three articles about mathematics and science braille are provided for braille transcribers and teachers of the visually handicapped. The first article discusses common problems such as setting braille writers incorrectly, duplicating transcribed materials unnecessarily, and incorrectly transcribing from typescript. The second article provides a…
Ha Dinh, Thi Thuy; Bonner, Ann; Clark, Robyn; Ramsbotham, Joanne; Hines, Sonia
2016-01-01
Chronic diseases are increasing worldwide and have become a significant burden to those affected by those diseases. Disease-specific education programs have demonstrated improved outcomes, although people do forget information quickly or memorize it incorrectly. The teach-back method was introduced in an attempt to reinforce education to patients. To date, the evidence regarding the effectiveness of health education employing the teach-back method in improved care has not yet been reviewed systematically. This systematic review examined the evidence on using the teach-back method in health education programs for improving adherence and self-management of people with chronic disease. Adults aged 18 years and over with one or more than one chronic disease.All types of interventions which included the teach-back method in an education program for people with chronic diseases. The comparator was chronic disease education programs that did not involve the teach-back method.Randomized and non-randomized controlled trials, cohort studies, before-after studies and case-control studies.The outcomes of interest were adherence, self-management, disease-specific knowledge, readmission, knowledge retention, self-efficacy and quality of life. Searches were conducted in CINAHL, MEDLINE, EMBASE, Cochrane CENTRAL, Web of Science, ProQuest Nursing and Allied Health Source, and Google Scholar databases. Search terms were combined by AND or OR in search strings. Reference lists of included articles were also searched for further potential references. Two reviewers conducted quality appraisal of papers using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument. Data were extracted using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument data extraction instruments. There was significant heterogeneity in selected studies, hence a meta-analysis was not possible and the results were presented in narrative form. 
Of the 21 articles retrieved in full, 12 on the use of the teach-back method met the inclusion criteria and were selected for analysis. Four studies confirmed improved disease-specific knowledge in intervention participants. One study showed a statistically significant improvement in adherence to medication and diet among patients with type 2 diabetes in the intervention group compared to the control group (p < 0.001). Two studies found statistically significant improvements in self-efficacy (p = 0.0026 and p < 0.001) in the intervention groups. One study examined quality of life in heart failure patients but found no improvement from the intervention (p = 0.59). Five studies found a reduction in readmission rates and hospitalization but these were not always statistically significant. Two studies showed improvement in daily weighing among heart failure participants, and in adherence to diet, exercise and foot care among those with type 2 diabetes. Overall, the teach-back method showed positive effects in a wide range of health care outcomes although these were not always statistically significant. Studies in this systematic review revealed improved outcomes in disease-specific knowledge, adherence, self-efficacy and inhaler technique. A positive but inconsistent trend was also seen in improved self-care and reduction of hospital readmission rates. There was limited evidence on improvement in quality of life or disease-related knowledge retention. Evidence from the systematic review supports the use of the teach-back method in educating people with chronic disease to maximize their disease understanding and promote knowledge, adherence, self-efficacy and self-care skills. Future studies are required to strengthen the evidence on effects of the teach-back method. Larger randomized controlled trials will be needed to determine the effectiveness of the teach-back method in quality of life, reduction of readmission, and hospitalizations.
The Essential Genome of Escherichia coli K-12.
Goodall, Emily C A; Robinson, Ashley; Johnston, Iain G; Jabbari, Sara; Turner, Keith A; Cunningham, Adam F; Lund, Peter A; Cole, Jeffrey A; Henderson, Ian R
2018-02-20
Transposon-directed insertion site sequencing (TraDIS) is a high-throughput method coupling transposon mutagenesis with short-fragment DNA sequencing. It is commonly used to identify essential genes. Single gene deletion libraries are considered the gold standard for identifying essential genes. Currently, the TraDIS method has not been benchmarked against such libraries, and therefore, it remains unclear whether the two methodologies are comparable. To address this, a high-density transposon library was constructed in Escherichia coli K-12. Essential genes predicted from sequencing of this library were compared to existing essential gene databases. To decrease false-positive identification of essential genes, statistical data analysis included corrections for both gene length and genome length. Through this analysis, new essential genes and genes previously incorrectly designated essential were identified. We show that manual analysis of TraDIS data reveals novel features that would not have been detected by statistical analysis alone. Examples include short essential regions within genes, orientation-dependent effects, and fine-resolution identification of genome and protein features. Recognition of these insertion profiles in transposon mutagenesis data sets will assist genome annotation of less well characterized genomes and provides new insights into bacterial physiology and biochemistry. IMPORTANCE: Incentives to define lists of genes that are essential for bacterial survival include the identification of potential targets for antibacterial drug development, genes required for rapid growth for exploitation in biotechnology, and discovery of new biochemical pathways. To identify essential genes in Escherichia coli , we constructed a transposon mutant library of unprecedented density. Initial automated analysis of the resulting data revealed many discrepancies compared to the literature. 
We now report more extensive statistical analysis supported by both literature searches and detailed inspection of high-density TraDIS sequencing data for each putative essential gene for the E. coli model laboratory organism. This paper is important because it provides a better understanding of the essential genes of E. coli , reveals the limitations of relying on automated analysis alone, and provides a new standard for the analysis of TraDIS data. Copyright © 2018 Goodall et al.
NASA Astrophysics Data System (ADS)
Meier, Walter Neil
This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and demonstrates the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. 
This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
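In its scalar form, the optimal-interpolation update underlying this assimilation weights the model estimate and the observation by their error variances, and the analysis error variance is always smaller than either input's. A sketch with illustrative numbers (not values from the thesis):

```python
# Scalar optimal-interpolation (OI) analysis step.
def oi_update(x_model, x_obs, var_model, var_obs):
    gain = var_model / (var_model + var_obs)       # Kalman-style weight
    analysis = x_model + gain * (x_obs - x_model)  # pull model toward observation
    var_analysis = (1.0 - gain) * var_model        # reduced analysis error
    return analysis, var_analysis

# Model drift estimate 5 cm/s, SSM/I-derived estimate 8 cm/s:
xa, va = oi_update(5.0, 8.0, var_model=4.0, var_obs=2.0)
```

Because the more reliable observation (variance 2 vs. 4) gets the larger weight, the analysis lands at 7 cm/s with variance about 1.33, below both input variances, which is why OI outperforms simple blending of the two motion fields.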
A Comparison of Phasing Algorithms for Trios and Unrelated Individuals
Marchini, Jonathan; Cutler, David; Patterson, Nick; Stephens, Matthew; Eskin, Eleazar; Halperin, Eran; Lin, Shin; Qin, Zhaohui S.; Munro, Heather M.; Abecasis, Gonçalo R.; Donnelly, Peter
2006-01-01
Knowledge of haplotype phase is valuable for many analysis methods in the study of disease, population, and evolutionary genetics. Considerable research effort has been devoted to the development of statistical and computational methods that infer haplotype phase from genotype data. Although a substantial number of such methods have been developed, they have focused principally on inference from unrelated individuals, and comparisons between methods have been rather limited. Here, we describe the extension of five leading algorithms for phase inference for handling father-mother-child trios. We performed a comprehensive assessment of the methods applied to both trios and to unrelated individuals, with a focus on genomic-scale problems, using both simulated data and data from the HapMap project. The most accurate algorithm was PHASE (v2.1). For this method, the percentages of genotypes whose phase was incorrectly inferred were 0.12%, 0.05%, and 0.16% for trios from simulated data, HapMap Centre d'Etude du Polymorphisme Humain (CEPH) trios, and HapMap Yoruban trios, respectively, and 5.2% and 5.9% for unrelated individuals in simulated data and the HapMap CEPH data, respectively. The other methods considered in this work had comparable but slightly worse error rates. The error rates for trios are similar to the levels of genotyping error and missing data expected. We thus conclude that all the methods considered will provide highly accurate estimates of haplotypes when applied to trio data sets. Running times differ substantially between methods. Although it is one of the slowest methods, PHASE (v2.1) was used to infer haplotypes for the 1 million–SNP HapMap data set. Finally, we evaluated methods of estimating the value of r2 between a pair of SNPs and concluded that all methods estimated r2 well when the estimated value was ⩾0.8. PMID:16465620
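The r² statistic evaluated in the final part of the study is computed from phased haplotype frequencies; a minimal sketch with toy counts (not HapMap data):

```python
# r^2 between two biallelic SNPs from phased haplotype counts
# (n_AB = number of haplotypes carrying allele A at SNP 1 and B at SNP 2).
def r_squared(n_AB, n_Ab, n_aB, n_ab):
    n = n_AB + n_Ab + n_aB + n_ab
    p_A = (n_AB + n_Ab) / n                 # frequency of allele A
    p_B = (n_AB + n_aB) / n                 # frequency of allele B
    D = n_AB / n - p_A * p_B                # linkage disequilibrium coefficient
    return D * D / (p_A * (1 - p_A) * p_B * (1 - p_B))
```

Perfectly coupled alleles give `r_squared(50, 0, 0, 50) == 1.0`, and the value falls toward 0 as the haplotype counts approach independence; estimates from inferred haplotypes are most reliable in the high-r² regime the study highlights (⩾0.8).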
NASA Astrophysics Data System (ADS)
Chen, Po-Hao; Botzolakis, Emmanuel; Mohan, Suyash; Bryan, R. N.; Cook, Tessa
2016-03-01
In radiology, diagnostic errors occur either through the failure of detection or incorrect interpretation. Errors are estimated to occur in 30-35% of all exams and contribute to 40-54% of medical malpractice litigations. In this work, we focus on reducing incorrect interpretation of known imaging features. Existing literature categorizes the cognitive biases that lead a radiologist to an incorrect diagnosis despite correct recognition of the abnormal imaging features: anchoring bias, framing effect, availability bias, and premature closure. Computational methods make a unique contribution, as they do not exhibit the same cognitive biases as a human. Bayesian networks formalize the diagnostic process. They modify pre-test diagnostic probabilities using clinical and imaging features, arriving at a post-test probability for each possible diagnosis. To translate Bayesian networks to clinical practice, we implemented an entirely web-based open-source software tool. In this tool, the radiologist first selects a network of choice (e.g. basal ganglia). Then, large, clearly labeled buttons displaying salient imaging features are displayed on the screen, serving both as a checklist and as input. As the radiologist inputs the value of an extracted imaging feature, the conditional probabilities of each possible diagnosis are updated. The software presents its level of diagnostic discrimination using a Pareto distribution chart, updated with each additional imaging feature. Active collaboration with the clinical radiologist is a feasible approach to software design and leads to design decisions closely coupling the complex mathematics of conditional probability in Bayesian networks with practice.
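The core update in such a tool is Bayes' rule applied per diagnosis as each imaging feature is entered. A sketch with an invented three-diagnosis example (the priors and likelihoods are not from the paper's networks):

```python
# One Bayesian update step: multiply each diagnosis's current probability
# by the likelihood of the observed feature, then renormalize.
def update(posteriors, likelihoods):
    """posteriors: {dx: P(dx)}; likelihoods: {dx: P(feature | dx)}."""
    unnorm = {dx: p * likelihoods[dx] for dx, p in posteriors.items()}
    z = sum(unnorm.values())
    return {dx: v / z for dx, v in unnorm.items()}

pre_test = {"toxic": 0.2, "ischemic": 0.5, "neoplastic": 0.3}
# Hypothetical feature "restricted diffusion" observed:
post = update(pre_test, {"toxic": 0.7, "ischemic": 0.8, "neoplastic": 0.1})
```

Repeating the step for each selected feature yields the post-test probabilities the tool displays; because the arithmetic is fixed, the ranking cannot be skewed by anchoring or availability bias.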
Proverbio, Alice Mado; Crotti, Nicola; Manfredi, Mirella; Adorni, Roberta; Zani, Alberto
2012-01-01
While the existence of a mirror neuron system (MNS) representing and mirroring simple purposeful actions (such as reaching) is known, neural mechanisms underlying the representation of complex actions (such as ballet, fencing, etc.) that are learned by imitation and exercise are not well understood. In this study, correct and incorrect basketball actions were visually presented to professional basketball players and naïve viewers while their EEG was recorded. The participants had to respond to rare targets (unanimated scenes). No category or group differences were found at perceptual level, ruling out the possibility that correct actions might be more visually familiar. Large, anterior N400 responses of event-related brain potentials to incorrectly performed basketball actions were recorded in skilled brains only. The swLORETA inverse solution for incorrect–correct contrast showed that the automatic detection of action ineffectiveness/incorrectness involved the fronto/parietal MNS, the cerebellum, the extra-striate body area, and the superior temporal sulcus. PMID:23181191
Automatic processing of pragmatic information in the human brain: a mismatch negativity study.
Zhao, Ming; Liu, Tao; Chen, Feiyan
2018-05-23
Language comprehension involves pragmatic information processing, which allows world knowledge to influence the interpretation of a sentence. This study explored whether pragmatic information can be automatically processed during spoken sentence comprehension. The experiment adopted the mismatch negativity (MMN) paradigm to capture the neurophysiological indicators of automatic processing of spoken sentences. Pragmatically incorrect ('Foxes have wings') and correct ('Butterflies have wings') sentences were used as the experimental stimuli. In condition 1, the pragmatically correct sentence was the deviant and the pragmatically incorrect sentence was the standard stimulus, whereas the opposite case was presented in condition 2. The experimental results showed that MMN effects were induced within 60-120 and 220-260 ms when the pragmatically incorrect sentence was the deviant stimulus, compared with the condition in which the pragmatically correct sentence was the deviant. The results indicated that the human brain can monitor for incorrect pragmatic information in the inattentive state and can automatically process pragmatic information at the beginning of spoken sentence comprehension.
Odegard, Timothy N; Koen, Joshua D
2007-11-01
Both positive and negative testing effects have been demonstrated with a variety of materials and paradigms (Roediger & Karpicke, 2006b). The present series of experiments replicate and extend the research of Roediger and Marsh (2005) with the addition of a "none-of-the-above" response option. Participants (n=32 in both experiments) read a set of passages, took an initial multiple-choice test, completed a filler task, and then completed a final cued-recall test (Experiment 1) or multiple-choice test (Experiment 2). Questions were manipulated on the initial multiple-choice test by adding a "none-of-the-above" response alternative (choice "E") that was incorrect ("E" Incorrect) or correct ("E" Correct). The results from both experiments demonstrated that the positive testing effect was negated when the "none-of-the-above" alternative was the correct response on the initial multiple-choice test, but was still present when the "none-of-the-above" alternative was an incorrect response.
Pitchford, Melanie; Ball, Linden J.; Hunt, Thomas E.; Steel, Richard
2017-01-01
We report a study examining the role of ‘cognitive miserliness’ as a determinant of poor performance on the standard three-item Cognitive Reflection Test (CRT). The cognitive miserliness hypothesis proposes that people often respond incorrectly on CRT items because of an unwillingness to go beyond default, heuristic processing and invest time and effort in analytic, reflective processing. Our analysis (N = 391) focused on people’s response times to CRT items to determine whether predicted associations are evident between miserly thinking and the generation of incorrect, intuitive answers. Evidence indicated only a weak correlation between CRT response times and accuracy. Item-level analyses also failed to demonstrate predicted response-time differences between correct analytic and incorrect intuitive answers for two of the three CRT items. We question whether participants who give incorrect intuitive answers on the CRT can legitimately be termed cognitive misers and whether the three CRT items measure the same general construct. PMID:29099840
Yigzaw, Kassaye Yitbarek; Michalas, Antonis; Bellika, Johan Gustav
2017-01-03
Techniques have been developed to compute statistics on distributed datasets without revealing private information except the statistical results. However, duplicate records in a distributed dataset may lead to incorrect statistical results. Therefore, to increase the accuracy of the statistical analysis of a distributed dataset, secure deduplication is an important preprocessing step. We designed a secure protocol for the deduplication of horizontally partitioned datasets with deterministic record linkage algorithms. We provided a formal security analysis of the protocol in the presence of semi-honest adversaries. The protocol was implemented and deployed across three microbiology laboratories located in Norway, and we ran experiments on the datasets in which the number of records for each laboratory varied. Experiments were also performed on simulated microbiology datasets and data custodians connected through a local area network. The security analysis demonstrated that the protocol protects the privacy of individuals and data custodians under a semi-honest adversarial model. More precisely, the protocol remains secure with the collusion of up to N - 2 corrupt data custodians. The total runtime for the protocol scales linearly with the addition of data custodians and records. One million simulated records distributed across 20 data custodians were deduplicated within 45 s. The experimental results showed that the protocol is more efficient and scalable than previous protocols for the same problem. The proposed deduplication protocol is efficient and scalable for practical uses while protecting the privacy of patients and data custodians.
An empirical comparison of several recent epistatic interaction detection methods.
Wang, Yue; Liu, Guimei; Feng, Mengling; Wong, Limsoon
2011-11-01
Many new methods have recently been proposed for detecting epistatic interactions in GWAS data. There is, however, no in-depth independent comparison of these methods yet. Five recent methods, TEAM, BOOST, SNPHarvester, SNPRuler and Screen and Clean (SC), are evaluated here in terms of power, type-1 error rate, scalability and completeness. In terms of power, TEAM performs best on data with main effect and BOOST performs best on data without main effect. In terms of type-1 error rate, TEAM and BOOST have higher type-1 error rates than SNPRuler and SNPHarvester. SC does not control type-1 error rate well. In terms of scalability, we tested the five methods using a dataset with 100 000 SNPs on a 64 bit Ubuntu system, with Intel (R) Xeon(R) CPU 2.66 GHz, 16 GB memory. TEAM takes ~36 days to finish and SNPRuler reports heap allocation problems. BOOST scales up to 100 000 SNPs and the cost is much lower than that of TEAM. SC and SNPHarvester are the most scalable. In terms of completeness, we study how frequently the pruning techniques employed by these methods incorrectly prune away the most significant epistatic interactions. We find that, on average, 20% of datasets without main effect and 60% of datasets with main effect are pruned incorrectly by BOOST, SNPRuler and SNPHarvester. The software for the five methods tested is available from the URLs below. TEAM: http://csbio.unc.edu/epistasis/download.php BOOST: http://ihome.ust.hk/~eeyang/papers.html. SNPHarvester: http://bioinformatics.ust.hk/SNPHarvester.html. SNPRuler: http://bioinformatics.ust.hk/SNPRuler.zip. Screen and Clean: http://wpicr.wpic.pitt.edu/WPICCompGen/. wangyue@nus.edu.sg.
van Schaick, Willem; van Dooren, Bart T H; Mulder, Paul G H; Völker-Dieben, Hennie J M
2005-07-01
To report on the calibration of the Topcon SP-2000P specular microscope and the Endothelial Cell Analysis Module of the IMAGEnet 2000 software, and to establish the validity of the different endothelial cell density (ECD) assessment methods available in these instruments. Using an external microgrid, we calibrated the magnification of the SP-2000P and the IMAGEnet software. In both eyes of 36 volunteers, we validated 4 ECD assessment methods by comparing these methods to the gold standard manual ECD, manual counting of cells on a video print. These methods were: the estimated ECD, estimation of ECD with a reference grid on the camera screen; the SP-2000P ECD, pointing out whole contiguous cells on the camera screen; the uncorrected IMAGEnet ECD, using automatically drawn cell borders, and the corrected IMAGEnet ECD, with manual correction of incorrectly drawn cell borders in the automated analysis. Validity of each method was evaluated by calculating both the mean difference with the manual ECD and the limits of agreement as described by Bland and Altman. Preset factory values of magnification were incorrect, resulting in errors in ECD of up to 9%. All assessments except 1 of the estimated ECDs differed significantly from manual ECDs, with most differences being similar (< or =6.5%), except for uncorrected IMAGEnet ECD (30.2%). Corrected IMAGEnet ECD showed the narrowest limits of agreement (-4.9 to +19.3%). We advise checking the calibration of magnification in any specular microscope or endothelial analysis software as it may be erroneous. Corrected IMAGEnet ECD is the most valid of the investigated methods in the Topcon SP-2000P/IMAGEnet 2000 combination.
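The Bland-Altman agreement computation used above to validate each ECD assessment method can be sketched as follows; the paired measurements are hypothetical, not the study's data.

```python
import statistics

def bland_altman(method, gold):
    """Mean difference (bias) and 95% limits of agreement (Bland & Altman)."""
    diffs = [m - g for m, g in zip(method, gold)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical ECD readings (cells/mm^2): a candidate method vs manual counts.
manual    = [2500, 2600, 2400, 2700, 2550]
candidate = [2550, 2580, 2460, 2720, 2600]
bias, (low, high) = bland_altman(candidate, manual)
print(round(bias, 1), round(low, 1), round(high, 1))
```

Narrower limits of agreement mean better agreement with the gold standard, which is the criterion by which the corrected IMAGEnet ECD was judged the most valid method.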
Dunn, Timothy C; Hayter, Gary A; Doniger, Ken J; Wolpert, Howard A
2014-07-01
The objective was to develop an analysis methodology for generating diabetes therapy decision guidance using continuous glucose (CG) data. The novel Likelihood of Low Glucose (LLG) methodology, which exploits the relationship between glucose median, glucose variability, and hypoglycemia risk, is mathematically based and can be implemented in computer software. Using JDRF Continuous Glucose Monitoring Clinical Trial data, CG values for all participants were divided into 4-week periods starting at the first available sensor reading. The safety and sensitivity performance regarding hypoglycemia guidance "stoplights" were compared between the LLG method and one based on 10th percentile (P10) values. Examining 13 932 hypoglycemia guidance outputs, the safety performance of the LLG method ranged from 0.5% to 5.4% incorrect "green" indicators, compared with 0.9% to 6.0% for a P10 value of 110 mg/dL. Guidance with lower P10 values yielded higher rates of incorrect indicators, such as 11.7% to 38% at 80 mg/dL. When evaluated only for periods of higher glucose (median above 155 mg/dL), the safety performance of the LLG method was superior to the P10 method. Sensitivity performance of correct "red" indicators of the LLG method had an in-sample rate of 88.3% and an out-of-sample rate of 59.6%, comparable with the P10 method up to about 80 mg/dL. To aid in therapeutic decision making, we developed an algorithm-supported report that graphically highlights low glucose risk and increased variability. When tested with clinical data, the proposed method demonstrated equivalent or superior safety and sensitivity performance. © 2014 Diabetes Technology Society.
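The P10-style comparator described above can be sketched minimally: compute a window's 10th percentile and map it to a stoplight. The cutoffs and the nearest-rank percentile convention are illustrative assumptions, not the published LLG algorithm.

```python
def percentile(values, p):
    """Nearest-rank percentile (one simple convention among several)."""
    ordered = sorted(values)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

def stoplight(glucose_mg_dl, green_cutoff=110, red_cutoff=80):
    """Hypothetical guidance: green if P10 is comfortably high, red if low."""
    p10 = percentile(glucose_mg_dl, 10)
    if p10 >= green_cutoff:
        return "green"
    if p10 <= red_cutoff:
        return "red"
    return "yellow"

# A hypothetical 4-week window of sensor readings (mg/dL).
window = [70, 95, 100, 120, 130, 140, 150, 160, 170, 180]
print(stoplight(window))  # -> red (10th percentile is 70 mg/dL)
```

The LLG method replaces the single percentile threshold with a model linking median and variability to hypoglycemia likelihood, which is what gives it better safety performance at high medians.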
Contraceptive failure in the United States
Trussell, James
2013-01-01
This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680
Rate of Unverifiable Publications Among Ophthalmology Residency Applicants Invited to Interview.
Tamez, Heather M; Tauscher, Robert; Brown, Eric N; Wayman, Laura; Mawn, Louise A
2018-04-19
Unverifiable publications in applications for ophthalmology residencies could be a serious concern if they represent publication dishonesty. To determine the rate of unverifiable publications among applicants offered an interview. Retrospective review of 322 ophthalmology residency applications for entering classes 2012 to 2017 at Vanderbilt University School of Medicine, Nashville, Tennessee. Full-length publications reported in the applications were searched in PubMed, Google, Google Scholar, and directly on the journal's website. Publications were deemed unverifiable if there was no record of the publication by any of these means or if substantial discrepancies existed, such as incorrect authorship, incorrect journal, or a meaningful discrepancy in title or length (full-length article vs abstract). Inability to locate publication with search, incorrect author position, applicant not listed as an author, article being an abstract and not a published paper, substantial title discrepancy suggesting an alternative project, and incorrect journal. Of the 322 applicants offered interviews during the 6-year study period, 22 (6.8%) had 24 unverifiable publications. Two hundred thirty-nine of these applicants (74.2%) reported at least 1 qualifying publication; of this group, 22 (9.2%) had an unverifiable publication. The applications with unverifiable publications were evenly distributed across the years of the study (range, 2-6 per cycle; Pearson χ2(5) = 3.65; P = .60). Two applicants had 2 unverifiable publications each. Two of the 22 applicants (9.1%) with unverifiable publications were graduates of medical schools outside the United States. Among the unverifiable publications, the most common reason was inability to locate the publication (13 [54%]).
Additional issues included abstract rather than full-length publication (5 [20.8%]), incorrect author position (4 [16.7%]), applicant not listed as an author on the publication (1 [4.2%]), and substantial title discrepancy (1 [4.2%]). One listed publication had an incorrect author position and incorrect journal (1 [4.2%]). Unverifiable publications among ophthalmology residency applicants are a persistent problem. Possible strategies to modify the review process include asking applicants to provide copies of their full-length works or the relevant PMCID (PubMed Central reference number) or DOI (digital object identifier) for their publications.
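The evenness-across-years check reported above is a Pearson goodness-of-fit test against a uniform expectation; a sketch with hypothetical per-cycle counts (the article reports only the range 2-6, so the exact counts below are made up):

```python
def chi_square_uniform(counts):
    """Pearson chi-square statistic against a uniform expectation."""
    expected = sum(counts) / len(counts)
    return sum((obs - expected) ** 2 / expected for obs in counts)

# Hypothetical counts of applicants with unverifiable publications per cycle.
per_cycle = [2, 4, 3, 6, 4, 3]        # 6 cycles, within the reported 2-6 range
stat = chi_square_uniform(per_cycle)  # compared against chi-square with 5 df
print(round(stat, 2))
```

A small statistic relative to the 5-degree-of-freedom reference distribution (as in the reported χ2(5) = 3.65, P = .60) indicates no evidence that the counts varied across cycles.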
ERIC Educational Resources Information Center
Bani-Salameh, Hisham N.
2017-01-01
We started this work with the goal of detecting misconceptions held by our students about force and motion. A total of 341 students participated in this study by taking the force concept inventory (FCI) test both before and after receiving instructions about force or motion. The data from this study were analysed using different statistical…
Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol
2015-09-02
A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.
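The arithmetic at issue, expected false positives under many tests, can be checked directly; the numbers follow the abstract, and the independence-without-correction assumption is exactly the simplification Gaus is criticized for. The union bound also shows why the striking liver-tumor p-value cannot plausibly be a multiple-comparisons artifact.

```python
n_tests, alpha = 4800, 0.05
expected_fp = n_tests * alpha
print(expected_fp)  # -> 240.0 significant results expected by chance alone

# Even with 4800 naive tests, the chance that ANY of them reaches
# p < 1e-13 by luck is bounded above by n * p (Bonferroni / union bound):
bound = n_tests * 1e-13
print(bound)  # on the order of 5e-10: effectively impossible as a false positive
```

The NTP's actual decision process further reduces false positives by requiring corroborating evidence (dose response, historical controls), which is why the naive 240-by-chance argument overstates the problem.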
Kissling, Grace E.; Haseman, Joseph K.; Zeiger, Errol
2014-01-01
A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP’s statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800 × 0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP’s decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus’ conclusion that such obvious responses merely “generate a hypothesis” rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. PMID:25261588
Comment on 'Semitransparency effects in the moving mirror model for Hawking radiation'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizalde, Emilio; Haro, Jaume
2010-06-15
Particle production by a semitransparent mirror accelerating on trajectories which simulate the Hawking effect was recently discussed in [3]. This author points out that some results in [1] are incorrect. We show here that, contrary to statements therein, the main results and conclusions of the last paper remain valid; only Eq. (41) there and some particular implications are not. The misunderstanding actually comes from comparing two very different parameter regions, and from the fact that, in our work, the word statistics was used in an unusual way related to the sign of the β-Bogoliubov coefficient, and not with its ordinary meaning, connected with the number of particles emitted per mode.
Quantification of Stereochemical Communication in Metal-Organic Assemblies.
Castilla, Ana M; Miller, Mark A; Nitschke, Jonathan R; Smulders, Maarten M J
2016-08-26
The derivation and application of a statistical mechanical model to quantify stereochemical communication in metal-organic assemblies is reported. The factors affecting the stereochemical communication within and between the metal stereocenters of the assemblies were experimentally studied by optical spectroscopy and analyzed in terms of a free energy penalty per "incorrect" amine enantiomer incorporated, and a free energy of coupling between stereocenters. These intra- and inter-vertex coupling constants are used to track the degree of stereochemical communication across a range of metal-organic assemblies (employing different ligands, peripheral amines, and metals); temperature-dependent equilibria between diastereomeric cages are also quantified. The model thus provides a unified understanding of the factors that shape the chirotopic void spaces enclosed by metal-organic container molecules.
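The free-energy-penalty picture above can be illustrated with a Boltzmann-weighted population count; this is a generic independent-site simplification with a hypothetical penalty value, not the paper's fitted model, which also includes inter-vertex coupling.

```python
import math

R_GAS = 8.314   # gas constant, J / (mol K)
T = 298.0       # temperature, K

def site_populations(n_sites, penalty_j_mol):
    """Boltzmann populations for k 'incorrect' enantiomers over n independent
    metal stereocenters, each incurring a fixed free energy penalty."""
    beta = 1.0 / (R_GAS * T)
    weights = [math.comb(n_sites, k) * math.exp(-k * penalty_j_mol * beta)
               for k in range(n_sites + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical 4 kJ/mol penalty per incorrect amine on a 4-vertex cage.
pops = site_populations(4, 4000.0)
print(round(pops[0], 3))  # fraction of cages with all amines 'correct'
```

A coupling term between stereocenters would skew these populations toward the homochiral extremes, which is the stereochemical communication the model quantifies.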
Noise reduction of a composite cylinder subjected to random acoustic excitation
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Beyer, T.
1989-01-01
Interior and exterior noise measurements were conducted on a stiffened composite floor-equipped cylinder, with and without an interior trim installed. Noise reduction was obtained for the case of random acoustic excitation in a diffuse field; the frequency range of interest was 100-800-Hz one-third octave bands. The measured data were compared with noise reduction predictions from the Propeller Aircraft Interior Noise (PAIN) program and from a statistical energy analysis. Structural model parameters were not predicted well by the PAIN program for the given input parameters; this resulted in incorrect noise reduction predictions for the lower one-third octave bands where the power flow into the interior of the cylinder was predicted on a mode-per-mode basis.
State recovery and lockstep execution restart in a system with multiprocessor pairing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gara, Alan; Gschwind, Michael K; Salapura, Valentina
System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each set of paired processor cores providing one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and reinitialize lockstep execution in order to recover from an incorrect execution when one has been detected by the selective pairing facility.
Author Correction: Global patterns in mangrove soil carbon stocks and losses
NASA Astrophysics Data System (ADS)
Atwood, Trisha B.; Connolly, Rod M.; Almahasheer, Hanan; Carnell, Paul E.; Duarte, Carlos M.; Lewis, Carolyn J. Ewers; Irigoien, Xabier; Kelleway, Jeffrey J.; Lavery, Paul S.; Macreadie, Peter I.; Serrano, Oscar; Sanders, Christian J.; Santos, Isaac; Steven, Andrew D. L.; Lovelock, Catherine E.
2018-03-01
In the version of this Article originally published, the potential carbon loss from soils as a result of mangrove deforestation was incorrectly given as `2.0-75 Tg C yr-1'; this should have read `2-8 Tg C yr-1'. The corresponding emissions were incorrectly given as ` 7.3-275 Tg of CO2e'; this should have read ` 7-29 Tg of CO2e'. The corresponding percentage equivalent of these emissions compared with those from global terrestrial deforestation was incorrectly given as `0.2-6%'; this should have read `0.6-2.4%'. These errors have now been corrected in all versions of the Article.
MGAS: a powerful tool for multivariate gene-based genome-wide association analysis.
Van der Sluis, Sophie; Dolan, Conor V; Li, Jiang; Song, Youqiang; Sham, Pak; Posthuma, Danielle; Li, Miao-Xin
2015-04-01
Standard genome-wide association studies, testing the association between one phenotype and a large number of single nucleotide polymorphisms (SNPs), are limited in two ways: (i) traits are often multivariate, and analysis of composite scores entails loss in statistical power and (ii) gene-based analyses may be preferred, e.g. to decrease the multiple testing problem. Here we present a new method, multivariate gene-based association test by extended Simes procedure (MGAS), that allows gene-based testing of multivariate phenotypes in unrelated individuals. Through extensive simulation, we show that under most trait-generating genotype-phenotype models MGAS has superior statistical power to detect associated genes compared with gene-based analyses of univariate phenotypic composite scores (i.e. GATES, multiple regression), and multivariate analysis of variance (MANOVA). Re-analysis of metabolic data revealed 32 False Discovery Rate controlled genome-wide significant genes, and 12 regions harboring multiple genes; of these 44 regions, 30 were not reported in the original analysis. MGAS allows researchers to conduct their multivariate gene-based analyses efficiently, and without the loss of power that is often associated with an incorrectly specified genotype-phenotype model. MGAS is freely available in KGG v3.0 (http://statgenpro.psychiatry.hku.hk/limx/kgg/download.php). Access to the metabolic dataset can be requested at dbGaP (https://dbgap.ncbi.nlm.nih.gov/). The R-simulation code is available from http://ctglab.nl/people/sophie_van_der_sluis. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
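The extended Simes idea underlying MGAS builds on the classical Simes combination of p-values; a sketch of the basic, unextended procedure (GATES and MGAS adapt it with effective numbers of independent tests, which is not shown here):

```python
def simes(p_values):
    """Classical Simes combined p-value: min over i of n * p_(i) / i,
    where p_(i) is the i-th smallest p-value among n tests."""
    ordered = sorted(p_values)
    n = len(ordered)
    return min(n * p / (i + 1) for i, p in enumerate(ordered))

# Three SNP-level p-values for one gene (illustrative numbers):
# min(3*0.01/1, 3*0.04/2, 3*0.50/3) = min(0.03, 0.06, 0.50) = 0.03
print(simes([0.04, 0.01, 0.50]))
```

Unlike Bonferroni (which would give 3 × 0.01 = 0.03 here too but is more conservative in general), Simes remains valid under certain positive dependence among tests, which is why it suits correlated SNPs within a gene.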
NASA Astrophysics Data System (ADS)
Fiedler, Sabine; Illich, Bernhard; Berger, Jochen; Graw, Matthias
2009-07-01
Ground-penetrating radar (GPR) is a geophysical method that is commonly used in archaeological and forensic investigations, including the determination of the exact location of graves. Whilst the method is rapid and does not involve disturbance of the graves, the interpretation of GPR profiles is nevertheless difficult and often leads to incorrect results. Incorrect identifications could hinder criminal investigations and complicate burials in cemeteries that have no information on the location of previously existing graves. In order to increase the number of unmarked graves that are identified, the GPR results need to be verified by comparing them with the soil and vegetation properties of the sites examined. We used a modern cemetery to assess the results obtained with GPR which we then compared with previously obtained tachymetric data and with an excavation of the graves where doubt existed. Certain soil conditions tended to make the application of GPR difficult on occasions, but a rough estimation of the location of the graves was always possible. The two different methods, GPR survey and tachymetry, both proved suitable for correctly determining the exact location of the majority of graves. The present study thus shows that GPR is a reliable method for determining the exact location of unmarked graves in modern cemeteries. However, the method did not allow statements to be made on the stage of decay of the bodies. Such information would assist in deciding what should be done with graves where ineffective degradation creates a problem for reusing graves following the standard resting time of 25 years.
Luskin, Matthew Scott; Albert, Wido Rizki; Tobler, Mathias W
2018-02-12
In the original version of the Article, reference 18 was incorrectly numbered as reference 30, and references 19 to 30 were incorrectly numbered as 18 to 29. These errors have now been corrected in the PDF and HTML versions of the manuscript.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., reports, and records: Falsification, reproduction, or alteration; incorrect statements. 67.403 Section 67..., logbooks, reports, and records: Falsification, reproduction, or alteration; incorrect statements. (a) No... reproduction, for fraudulent purposes, of any medical certificate under this part; or (4) An alteration of any...
Code of Federal Regulations, 2011 CFR
2011-01-01
..., reports, and records: Falsification, reproduction, or alteration; incorrect statements. 67.403 Section 67..., logbooks, reports, and records: Falsification, reproduction, or alteration; incorrect statements. (a) No... reproduction, for fraudulent purposes, of any medical certificate under this part; or (4) An alteration of any...
Evidence of codon usage in the nearest neighbor spacing distribution of bases in bacterial genomes
NASA Astrophysics Data System (ADS)
Higareda, M. F.; Geiger, O.; Mendoza, L.; Méndez-Sánchez, R. A.
2012-02-01
Statistical analysis of whole genomic sequences usually assumes a homogeneous nucleotide density throughout the genome, an assumption that has been proved incorrect for several organisms since the nucleotide density is only locally homogeneous. To avoid giving a single numerical value to this variable property, we propose the use of spectral statistics, which characterizes the density of nucleotides as a function of its position in the genome. We show that the cumulative density of bases in bacterial genomes can be separated into an average (or secular) plus a fluctuating part. Bacterial genomes can be divided into two groups according to the qualitative description of their secular part: linear and piecewise linear. These two groups of genomes show different properties when their nucleotide spacing distribution is studied. In order to analyze genomes having a variable nucleotide density statistically, the use of unfolding is necessary, i.e., a separation between the secular part and the fluctuations. The unfolding allows an adequate comparison with the statistical properties of other genomes. With this methodology, four genomes were analyzed: Burkholderia, Bacillus, Clostridium and Corynebacterium. Interestingly, the nearest neighbor spacing distributions or detrended distance distributions are very similar for species within the same genus but they are very different for species from different genera. This difference can be attributed to the difference in the codon usage.
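The unfolding step described above, separating a secular trend from fluctuations so spacings can be compared across genomes, can be sketched on a toy sequence; this is a generic spectral-statistics unfolding for the homogeneous (linear secular part) case, not the authors' exact pipeline.

```python
import random

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(10_000))

# Positions of one base play the role of "levels" in spectral statistics.
positions = [i for i, b in enumerate(genome) if b == "G"]
spacings = [b - a for a, b in zip(positions, positions[1:])]

# For a linear (homogeneous) secular part, unfolding reduces to dividing by
# the mean spacing; a locally varying density would need a local fit instead.
mean_spacing = sum(spacings) / len(spacings)
unfolded = [s / mean_spacing for s in spacings]

print(round(sum(unfolded) / len(unfolded), 6))  # mean is 1 by construction
```

After unfolding, the nearest-neighbor spacing distribution is on a common scale, so distributions from genomes with different base densities can be compared directly.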
Rabosky, Daniel L.; Mitchell, Jonathan S.; Chang, Jonathan
2017-01-01
Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM’s likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA’s numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that “unobserved rate shifts” appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA’s data set for which it is theoretically possible to infer rate shifts with confidence.
Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth–death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for phylogenetic trees, and we expect BAMM and other modeling frameworks to improve in response to mathematical and computational innovations. However, we remain optimistic that the imperfect tools currently available to comparative biologists have provided and will continue to provide important insights into the diversification of life on Earth. PMID:28334223
Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan
2017-07-01
Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software.
We recognize an acute need to address the theoretical foundations of rate-shift models for phylogenetic trees, and we expect BAMM and other modeling frameworks to improve in response to mathematical and computational innovations. However, we remain optimistic that the imperfect tools currently available to comparative biologists have provided and will continue to provide important insights into the diversification of life on Earth. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Li, Xing-wang; Wang, Di; Greenhalgh, Stewart
2017-12-01
Earthquake hypocenter determination and traveltime tomography with local earthquake data are normally conducted using a Cartesian coordinate system and assuming a flat Earth model, but for regional and teleseismic data Earth curvature is incorporated and a spherical coordinate system employed. However, when the study region is at the local to near-regional scale (1°-4°), it is unclear what coordinate system to use and what kind of incorrect anomalies or location errors might arise when using the Cartesian coordinate frame. In this paper we investigate, in a quantitative sense through two near-regional crustal models and five different inversion methods, the hypocenter errors, reflector perturbations and incorrect velocity anomalies that can arise due to the selection of the wrong coordinate system and inversion method. The simulated inversion results show that the computed traveltime errors are larger than 0.1 s when the epicentral distance exceeds 150 km, and increase linearly with increasing epicentral distance. Such predicted traveltime errors result in different patterns of anomalous velocity structures and a perturbed Moho interface in traveltime tomography, and in deviated source positions in earthquake location. The maximum magnitude of a velocity image artifact is larger than 1.0% for an epicentral distance of less than 500 km and is up to 0.9% for epicentral distances of less than 300 km. The earthquake source location error is more than 2.0 km for epicentral distances less than 500 km and is up to 1.5 km for epicentral distances less than 300 km. The Moho depth can be in error by up to 1.0 km for epicentral distances of less than 500 km but is less than 0.5 km at distances below 300 km. We suggest that spherical coordinate geometry (or a time correction) be used whenever there are ray paths at epicentral distances in excess of 150 km.
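The geometric flavor of the flat-Earth error can be illustrated by comparing a great-circle surface path with its straight chord as epicentral distance grows. This toy calculation, with a hypothetical uniform velocity, captures only the first-order geometric term; the larger 0.1 s-scale errors reported above arise from how curvature distorts the full velocity model along curved ray paths.

```python
import math

R = 6371.0   # Earth radius, km
v = 6.0      # hypothetical uniform crustal velocity, km/s

def path_difference_s(surface_km):
    """Traveltime difference between the curved surface path and its chord."""
    theta = surface_km / R                 # epicentral angle, radians
    chord = 2 * R * math.sin(theta / 2)    # straight-line (flat-frame) distance
    return (surface_km - chord) / v        # arc minus chord, in seconds

for d in (150, 300, 500):
    print(d, round(path_difference_s(d), 4))
```

The discrepancy grows roughly as the cube of the distance (arc minus chord is approximately R·θ³/24), which is consistent with errors becoming non-negligible only beyond the ~150 km threshold the authors identify.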
48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...
48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Applications, logbooks, reports, and records: Fraud, falsification, or incorrect statements. 60.33 Section 60.33 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 2 2013-01-01 2013-01-01 false Applications, logbooks, reports, and records: Fraud, falsification, or incorrect statements. 60.33 Section 60.33 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 2 2012-01-01 2012-01-01 false Applications, logbooks, reports, and records: Fraud, falsification, or incorrect statements. 60.33 Section 60.33 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Applications, logbooks, reports, and records: Fraud, falsification, or incorrect statements. 60.33 Section 60.33 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 2 2014-01-01 2014-01-01 false Applications, logbooks, reports, and records: Fraud, falsification, or incorrect statements. 60.33 Section 60.33 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE...
ERIC Educational Resources Information Center
Ryder, Nuala; Leinonen, Eeva
2014-01-01
This study focused on young children's incorrect answers to pragmatically demanding questions. Children with specific language impairment (SLI), including a subgroup with pragmatic language difficulties (PLD), and typically developing children answered questions targeting implicatures, based on a storybook and short verbal scenarios.…
Code of Federal Regulations, 2010 CFR
2010-04-01
...- ) Overpayments, Underpayments, Waiver of Adjustment or Recovery of Overpayments, and Liability of a Certifying... a provider of services or other person, or an incorrect payment made under section 1814(e) of the... known to be incorrect; or (b) Failure to furnish information which he knew or should have known to be...
42 CFR 405.352 - Adjustment of title XVIII incorrect payments.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Adjustment of title XVIII incorrect payments. 405.352 Section 405.352 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM FEDERAL HEALTH INSURANCE FOR THE AGED AND DISABLED Suspension of Payment...
42 CFR 405.352 - Adjustment of title XVIII incorrect payments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Adjustment of title XVIII incorrect payments. 405.352 Section 405.352 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM FEDERAL HEALTH INSURANCE FOR THE AGED AND DISABLED Suspension of Payment...
78 FR 68360 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-14
... Airworthiness Directives; Rolls-Royce plc Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... turbofan engines. The AD number is incorrect in the Regulatory text. This document corrects that error. In... turbofan engines. As published, the AD number 2013-19-17 under Sec. 39.13 [Amended], is incorrect. No other...
40 CFR 310.24 - What happens if I provide incorrect or false information?
Code of Federal Regulations, 2010 CFR
2010-07-01
... false information? 310.24 Section 310.24 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 310.24 What happens if I provide incorrect or false information? (a) You must not knowingly or recklessly make any statement or provide any information in your reimbursement application that is false...
Evaluating Preference for Graphic Feedback on Correct versus Incorrect Performance
ERIC Educational Resources Information Center
Sigurdsson, Sigurdur O.; Ring, Brandon M.
2013-01-01
The current study evaluated preferences of undergraduate students for graphic feedback on percentage of incorrect performance versus feedback on percentage of correct performance. A total of 108 participants were enrolled in the study and received graphic feedback on performance on 12 online quizzes. One half of participants received graphic…
77 FR 40828 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-11
... certain main landing gear (MLG) upper torque link bolts is reduced significantly due to incorrect fabrication. This proposed AD would require replacing certain MLG upper torque link bolts with a new or... safe life limit on certain MLG upper torque link bolts is reduced significantly due to incorrect...
Computing correct truncated excited state wavefunctions
NASA Astrophysics Data System (ADS)
Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.
2016-12-01
We demonstrate that, if a wave function's truncated expansion is small, then the standard excited-state computational method of optimizing one "root" of a secular equation may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower-lying approximants) leads to correct, reliable small truncated wave functions. The demonstration is done for He excited states, using truncated series expansions in Hylleraas coordinates as well as standard configuration-interaction truncated expansions.
Net improvement of correct answers to therapy questions after PubMed searches: pre/post comparison.
McKibbon, Kathleen Ann; Lokker, Cynthia; Keepanasseril, Arun; Wilczynski, Nancy L; Haynes, R Brian
2013-11-08
Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers.
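The arithmetic behind the headline numbers is a simple pre/post decomposition, sketched here (the rates are the abstract's own point estimates; the small gap versus the reported 11.4% reflects rounding and denominator differences in the original analysis):

```python
# Net change = (previously incorrect -> correct) - (previously correct -> incorrect),
# using the reported rates for the PubMed main-screen arm.
newly_correct   = 0.205   # previously incorrect answers that became correct
newly_incorrect = 0.095   # previously correct answers that became incorrect
net_change = newly_correct - newly_incorrect
print(f"net absolute change: {net_change:+.1%}")   # prints +11.0%
```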
Bjerke, Benjamin T; Cheung, Zoe B; Shifflett, Grant D; Iyer, Sravisht; Derman, Peter B; Cunningham, Matthew E
2015-10-01
Shoulder balance in adolescent idiopathic scoliosis (AIS) patients is associated with patient satisfaction and self-image. However, few validated systems exist for selecting the upper instrumented vertebra (UIV) to achieve post-surgical shoulder balance. The purpose of this study was to examine existing UIV selection criteria and correlate them with post-surgical shoulder balance in AIS patients. Patients who underwent spinal fusion at age 10-18 years for AIS over a 6-year period were reviewed. All patients with a minimum of 1 year of radiographic follow-up were included. Imbalance was defined as radiographic shoulder height |RSH| ≥ 15 mm at latest follow-up. Three UIV selection methods were considered: Lenke, Ilharreborde, and Trobisch. A recommended UIV was determined from pre-surgical radiographs using each method. The recommended UIV for each method was compared to the actual UIV instrumented; concordance between these levels was defined as "Correct" UIV selection, and discordance as "Incorrect" selection. One hundred seventy-one patients were included, with 2.3 ± 1.1 years of follow-up. For all methods, "Correct" UIV selection resulted in more shoulder imbalance than "Incorrect" UIV selection. Overall shoulder imbalance incidence improved from 31.0% (53/171) to 15.2% (26/171). The incidence of new shoulder imbalance in patients with previously level shoulders was 8.8%. We could not identify a set of UIV selection criteria that accurately predicted post-surgical shoulder balance. Further validated measures are needed in this area. The complexity of proximal thoracic curve correction is underscored in a case example in which shoulder imbalance occurred despite "Correct" UIV selection by all methods.
No Evidence for Activity Correlations in the Radial Velocities of Kapteyn’s Star
NASA Astrophysics Data System (ADS)
Anglada-Escudé, G.; Tuomi, M.; Arriagada, P.; Zechmeister, M.; Jenkins, J. S.; Ofir, A.; Dreizler, S.; Gerlach, E.; Marvin, C. J.; Reiners, A.; Jeffers, S. V.; Butler, R. Paul; Vogt, S. S.; Amado, P. J.; Rodríguez-López, C.; Berdiñas, Z. M.; Morin, J.; Crane, J. D.; Shectman, S. A.; Díaz, M. R.; Sarmiento, L. F.; Jones, H. R. A.
2016-10-01
Stellar activity may induce Doppler variability at the level of a few m s-1, which can then be confused with the Doppler signal of an exoplanet orbiting the star. To first order, linear correlations between radial velocity measurements and activity indices have been proposed to account for any such variability. The likely presence of two super-Earths orbiting Kapteyn's star was reported in Anglada-Escudé et al., but this claim was recently challenged by Robertson et al., who argued for evidence of a rotation period (143 days) at three times the orbital period of one of the proposed planets (Kapteyn's b, P = 48.6 days) and the existence of strong linear correlations between its Doppler signal and activity data. By re-analyzing the data using global statistics and model comparison, we show that such a claim is incorrect given that (1) the choice of a rotation period at 143 days is unjustified, and (2) the presence of linear correlations is not supported by the data. We conclude that the radial velocity signals of Kapteyn's star remain more simply explained by the presence of two super-Earth candidates orbiting it. We note that analysis of time series of activity indices must be executed with the same care as that of Doppler time series. We also advocate for the use of global optimization procedures and objective arguments, instead of claims based on residual analyses, which are prone to biases and incorrect interpretations.
Coupled-oscillator theory of dispersion and Casimir-Polder interactions.
Berman, P R; Ford, G W; Milonni, P W
2014-10-28
We address the question of the applicability of the argument theorem (of complex variable theory) to the calculation of two distinct energies: (i) the first-order dispersion interaction energy of two separated oscillators, when one of the oscillators is excited initially, and (ii) the Casimir-Polder interaction of a ground-state quantum oscillator near a perfectly conducting plane. We show that the argument theorem can be used to obtain the generally accepted equation for the first-order dispersion interaction energy, which is oscillatory and varies inversely with the separation r of the oscillators for separations much greater than an optical wavelength. However, for such separations, the interaction energy cannot be transformed into an integral over the positive imaginary axis. If the argument theorem is used incorrectly to relate the interaction energy to an integral over the positive imaginary axis, the interaction energy is non-oscillatory and varies as r^-4, a result found by several authors. Rather remarkably, this incorrect expression for the dispersion energy actually corresponds to the nonperturbative Casimir-Polder energy for a ground-state quantum oscillator near a perfectly conducting wall, as we show using the so-called "remarkable formula" for the free energy of an oscillator coupled to a heat bath [G. W. Ford, J. T. Lewis, and R. F. O'Connell, Phys. Rev. Lett. 55, 2273 (1985)]. A derivation of that formula from basic results of statistical mechanics and the independent oscillator model of a heat bath is presented.
The USDA quality grades may mislead consumers.
DeVuyst, E A; Lusk, J L; DeVuyst, M A
2014-07-01
This study was designed to explore consumers' perceptions about and knowledge of USDA beef quality grades. Data were collected from over 1,000 consumers in online surveys in November and December 2013, and estimates were weighted to force the sample to mirror the U.S. population in terms of age, gender, education, and region of residence. When asked to rank Prime, Choice, and Select grades in terms of leanness, only 14.4% provided the correct ranking with 57.1% of respondents incorrectly indicating steaks grading Prime were the leanest. Despite perceptions that the Prime name indicated the leanest product, in a subsequent question, 55.6% of respondents thought Prime grade to be the juiciest of the 3 grades. In addition to inquiring about perceptions of the grade names, respondents also indicated perceptions of pictures of steaks. Only 14.5% of respondents correctly matched the steak pictures with their corresponding USDA quality grade name, an outcome that is statistically worse than would have occurred through pure random matching (P = 0.03). When asked to match pictures of steaks with expected prices, 54.8% of respondents incorrectly matched the picture of the Prime steak with the lowest price level. More highly educated consumers with greater preferences for steak consumption were more likely to provide correct answers. Results reveal substantial confusion over quality grading nomenclature and suggest the need for more education or for a transition toward more descriptive terminology at the retail level.
ERIC Educational Resources Information Center
Engelman, Jonathan
2016-01-01
Changing student conceptions in physics is a difficult process and has been a topic of research for many years. The purpose of this study was to understand what prompted students to change or not change their incorrect conceptions of Newton's Second or Third Laws in response to an intervention, Interactive Video Vignettes (IVVs), designed to…
A Guideline to Local Anesthetic Allergy Testing
Canfield, David W.; Gage, Tommy W.
1987-01-01
Patients with a history of adverse reactions to a local anesthetic may often be incorrectly labeled as “allergic.” Determining if a patient is allergic to a local anesthetic is essential in the selection of appropriate pain control techniques. Local anesthetic allergy testing may be performed safely and with reasonable accuracy by a knowledgeable practitioner. This paper presents guidelines for an allergy testing method. PMID:3318567
ERIC Educational Resources Information Center
le Clercq, Carlijn M. P.; van der Schroeff, Marc P.; Rispens, Judith E.; Ruytjens, Liesbet; Goedegebure, André; van Ingen, Gijs; Franken, Marie-Christine
2017-01-01
Purpose: The purpose of this research note was to validate a simplified version of the Dutch nonword repetition task (NWR; Rispens & Baker, 2012). The NWR was shortened and scoring was transformed to correct/incorrect nonwords, resulting in the shortened NWR (NWR-S). Method: NWR-S and NWR performance were compared in the previously published…
NASA Astrophysics Data System (ADS)
Samulski, Maurice; Karssemeijer, Nico
2008-03-01
Most of the current CAD systems detect suspicious mass regions independently in single views. In this paper we present a method to match corresponding regions in mediolateral oblique (MLO) and craniocaudal (CC) mammographic views of the breast. For every possible combination of mass regions in the MLO view and CC view, a number of features are computed, such as the difference in distance of a region to the nipple, a texture similarity measure, the gray scale correlation and the likelihood of malignancy of both regions computed by single-view analysis. In previous research, Linear Discriminant Analysis was used to discriminate between correct and incorrect links. In this paper we investigate if the performance can be improved by employing a statistical method in which four classes are distinguished. These four classes are defined by the combinations of view (MLO/CC) and pathology (TP/FP) labels. We use distance-weighted k-Nearest Neighbor density estimation to estimate the likelihood of a region combination. Next, a correspondence score is calculated as the likelihood that the region combination is a TP-TP link. The method was tested on 412 cases with a malignant lesion visible in at least one of the views. In 82.4% of the cases a correct link could be established between the TP detections in both views. In future work, we will use the framework presented here to develop a context dependent region matching scheme, which takes the number and likelihood of possible alternatives into account. It is expected that more accurate determination of matching probabilities will lead to improved CAD performance.
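The four-class, distance-weighted k-NN scoring step can be sketched as follows (an illustrative stand-in, not the authors' implementation: the synthetic data, feature names, and class coding are assumptions). Each candidate MLO/CC region pair gets a feature vector; the correspondence score is the estimated probability that the pair is a TP-TP link.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# One feature vector per candidate region pair, e.g. difference in
# nipple distance, texture similarity, gray-scale correlation, and the
# two single-view malignancy likelihoods (names are assumptions).
X_train = rng.normal(size=(400, 4))
y_train = rng.integers(0, 4, size=400)   # class 0 = TP-TP link, 1-3 = other combos

# Distance-weighted k-NN: closer training pairs contribute more to the
# estimated class likelihoods, approximating a local density estimate.
knn = KNeighborsClassifier(n_neighbors=15, weights="distance")
knn.fit(X_train, y_train)

candidate_pairs = rng.normal(size=(5, 4))
scores = knn.predict_proba(candidate_pairs)[:, 0]   # P(TP-TP) per pair
best = int(np.argmax(scores))
print(f"best link = pair {best}, score = {scores[best]:.2f}")
```

The highest-scoring pair would then be reported as the cross-view link.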
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
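The matrix-based Euclidean (chordal) average can be sketched in a few lines (a generic illustration, not the authors' exact implementation): average the rotation matrices elementwise, then project the result back onto the rotation group with an SVD. A wrap-around example shows why averaging angle parameters fails.

```python
import numpy as np

def rotz(deg):
    """Rotation matrix about the z axis by `deg` degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chordal_mean(rotations):
    """Euclidean (chordal) mean: elementwise-average the matrices,
    then project back onto SO(3) via the SVD."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:             # enforce a proper rotation (det = +1)
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

angles = [170.0, -170.0]                 # two orientations straddling +/-180 deg
naive = np.mean(angles)                  # 0.0: parameter averaging is wrong here
R = chordal_mean([rotz(a) for a in angles])
matrix_based = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(naive, abs(matrix_based))          # 0.0 vs 180.0
```

The naive average lands 180 degrees away from both inputs, while the matrix-based mean recovers the geometrically sensible orientation.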
Missing data handling in non-inferiority and equivalence trials: A systematic review.
Rabe, Brooke A; Day, Simon; Fiero, Mallorie H; Bell, Melanie L
2018-05-25
Non-inferiority (NI) and equivalence clinical trials test whether a new treatment is therapeutically no worse than, or equivalent to, an existing standard of care. Missing data in clinical trials have been shown to reduce statistical power and potentially bias estimates of effect size; however, in NI and equivalence trials, they present additional issues. For instance, they may decrease sensitivity to differences between treatment groups and bias toward the alternative hypothesis of NI (or equivalence). Our primary aim was to review the extent of and methods for handling missing data (model-based methods, single imputation, multiple imputation, complete case), the analysis sets used (Intention-To-Treat, Per-Protocol, or both), and whether sensitivity analyses were used to explore departures from assumptions about the missing data. We conducted a systematic review of NI and equivalence trials published between May 2015 and April 2016 by searching the PubMed database. Articles were reviewed primarily by 2 reviewers, with 6 articles reviewed by both reviewers to establish consensus. Of 109 selected articles, 93% reported some missing data in the primary outcome. Among those, 50% reported complete case analysis, and 28% reported single imputation approaches for handling missing data. Only 32% reported conducting analyses of both intention-to-treat and per-protocol populations. Only 11% conducted any sensitivity analyses to test assumptions with respect to missing data. Missing data are common in NI and equivalence trials, and they are often handled by methods which may bias estimates and lead to incorrect conclusions. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.
2017-11-01
The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or mismatches an expected response. Brain activity during monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs), revealing complex cerebral responses to deviant sensory stimuli. Development of accurate error detection systems is of great importance, both for practical applications and for investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error-processing mechanisms in actors and observers. To examine and analyse the variations in the processing of erroneous sensory information at each level of complexity, we employ Support Vector Machine (SVM) classifiers with various learning methods and kernels, using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features, we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers, with mean accuracies of 93% and 91%, respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
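The wrapper-selection step can be sketched with scikit-learn's greedy sequential forward selector around an SVM (an illustrative stand-in: the synthetic data, RBF kernel, and target feature count are assumptions, not the authors' ERP pipeline):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for time-windowed ERP features (shapes/counts are assumptions)
X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                           random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Greedy SFS: starting from the empty set, repeatedly add the single
# feature that most improves cross-validated accuracy.
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=5)
sfs.fit(X, y)
X_sel = sfs.transform(X)

acc = cross_val_score(svm, X_sel, y, cv=5).mean()
print(f"selected {X_sel.shape[1]} of {X.shape[1]} features, CV accuracy ~ {acc:.2f}")
```

Because each candidate feature is scored by refitting the classifier, SFS is a wrapper method; the nesting problem the authors mention arises because a feature, once added, is never removed.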
Does Incorrect Guessing Impair Fact Learning?
ERIC Educational Resources Information Center
Kang, Sean H. K.; Pashler, Harold; Cepeda, Nicholas J.; Rohrer, Doug; Carpenter, Shana K.; Mozer, Michael C.
2011-01-01
Taking a test has been shown to produce enhanced retention of the retrieved information. On tests, however, students often encounter questions the answers for which they are unsure. Should they guess anyway, even if they are likely to answer incorrectly? Or are errors engrained, impairing subsequent learning of the correct answer? We sought to…
77 FR 5386 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-03
... paragraph (c) of that AD, is incorrect. Also, the email address provided in paragraphs (i)(1) and (j) of... 90712-4137; phone: (562) 627-5234; fax: (562) 627-5210; email: [email protected] . [[Page 5387..., November 29, 2011), is incorrect. As published, the email address provided in paragraphs (i)(1) and (j) of...
Early Retirement Is Not the Cat's Meow. The Endpaper.
ERIC Educational Resources Information Center
Ferguson, Wayne S.
1982-01-01
Early retirement plans are perceived as being beneficial to school staff and financially advantageous to schools. Four out of the five assumptions on which these perceptions are based are incorrect. The one correct assumption is that early retirement will make affirmative action programs move ahead more rapidly. The incorrect assumptions are: (1)…
42 CFR 405.351 - Incorrect payments for which the individual is not liable.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Incorrect payments for which the individual is not liable. 405.351 Section 405.351 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM FEDERAL HEALTH INSURANCE FOR THE AGED AND DISABLED Suspension...
42 CFR 405.351 - Incorrect payments for which the individual is not liable.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Incorrect payments for which the individual is not liable. 405.351 Section 405.351 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM FEDERAL HEALTH INSURANCE FOR THE AGED AND DISABLED Suspension...
Differences in Visual Attention between Those Who Correctly and Incorrectly Answer Physics Problems
ERIC Educational Resources Information Center
Madsen, Adrian M.; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay
2012-01-01
This study investigated how visual attention differed between those who correctly versus incorrectly answered introductory physics problems. We recorded eye movements of 24 individuals on six different conceptual physics problems where the necessary information to solve the problem was contained in a diagram. The problems also contained areas…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
... section of the published AD, we incorrectly included Cessna 188 series airplanes. In the Unsafe Condition... sections of the AD incorrectly included Cessna 188 series airplanes. The Unsafe Condition section is... the second column, on line 10, under the heading DEPARTMENT OF TRANSPORTATION, remove 188 from...
Joshi, Nehal; Bolorhon, Bolormaa; Narula, Indermohan; Zhu, Shihua; Manaseki-Holland, Semira
2018-03-22
Unfortunately, after publication of this article [1], it was noticed that an error during the production process resulted in an incorrect author name. The author Semira Manaseki-Holland is incorrectly displayed as Semira Manaseki-Hollan. The full, corrected author list can be seen above.
Source-Constrained Recall: Front-End and Back-End Control of Retrieval Quality
ERIC Educational Resources Information Center
Halamish, Vered; Goldsmith, Morris; Jacoby, Larry L.
2012-01-01
Research on the strategic regulation of memory accuracy has focused primarily on monitoring and control processes used to edit out incorrect information after it is retrieved (back-end control). Recent studies, however, suggest that rememberers also enhance accuracy by preventing the retrieval of incorrect information in the first place (front-end…
The Effectiveness of Using Incorrect Examples to Support Learning about Decimal Magnitude
ERIC Educational Resources Information Center
Durkin, Kelley; Rittle-Johnson, Bethany
2012-01-01
Comparing common mathematical errors to correct examples may facilitate learning, even for students with limited prior domain knowledge. We examined whether studying incorrect and correct examples was more effective than studying two correct examples across prior knowledge levels. Fourth- and fifth-grade students (N = 74) learned about decimal…
Smith, Michelle K.; Knight, Jennifer K.
2012-01-01
To help genetics instructors become aware of fundamental concepts that are persistently difficult for students, we have analyzed the evolution of student responses to multiple-choice questions from the Genetics Concept Assessment. In total, we examined pretest (before instruction) and posttest (after instruction) responses from 751 students enrolled in six genetics courses for either majors or nonmajors. Students improved on all 25 questions after instruction, but to varying degrees. Notably, there was a subgroup of nine questions for which a single incorrect answer, called the most common incorrect answer, was chosen by >20% of students on the posttest. To explore response patterns to these nine questions, we tracked individual student answers before and after instruction and found that particular conceptual difficulties about genetics are both more likely to persist and more likely to distract students than other incorrect ideas. Here we present an analysis of the evolution of these incorrect ideas to encourage instructor awareness of these genetics concepts and provide advice on how to address common conceptual difficulties in the classroom. PMID:22367036
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true.
We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
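The prospective (a priori) calculation the authors recommend can be sketched for a two-sample z-test under the normal approximation (the effect size and targets below are illustrative, not from the paper):

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Samples per group needed to detect a standardized effect
    `effect_size` with a two-sided two-sample z-test."""
    z_a = norm.ppf(1 - alpha / 2)    # critical value for the test
    z_b = norm.ppf(power)            # quantile for the target power
    return 2 * ((z_a + z_b) / effect_size) ** 2

# A "medium" biologically significant effect of d = 0.5:
print(round(n_per_group(0.5)))       # prints 63
```

Plugging the observed effect size of a non-significant test into the same formula after the fact is the retrospective misuse the abstract warns against: it merely restates the p-value and says nothing new about true power.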
NASA Astrophysics Data System (ADS)
Stillinger, T.; Dozier, J.; Phares, N.; Rittger, K.
2015-12-01
Discrimination between snow and clouds poses a serious but tractable challenge to the consistent delivery of high-quality information on mountain snow from remote sensing. Clouds obstruct the surface from the sensor's view, and the similar optical properties of clouds and snow make accurate discrimination difficult. We assess the performance of the current Landsat 8 operational snow and cloud mask products (LDCM CCA and CFmask), along with a new method, using over one million manually identified snow and cloud pixels in Landsat 8 scenes. The new method uses physically based scattering models to generate spectra in each Landsat 8 band, at each scene's solar illumination, for snow and cloud particle sizes that cover the plausible range of each. The modeled spectra are compared to pixel spectra in several independent ways to identify snow and clouds. The results are synthesized to create a final snow/cloud mask, and the method can be applied to any multispectral imager with bands covering the visible, near-infrared, and shortwave-infrared regions. Each algorithm we tested misidentifies snow and clouds in both directions to varying degrees. We assess performance with measures of Precision, Recall, and the F statistic, which are based on counts of true and false positives and negatives. Tests for significant differences between measured and modeled spectra among incorrectly identified pixels help ascertain reasons for misidentification. A cloud mask specifically designed to separate snow from clouds is a valuable tool for those interested in remotely sensing snow cover. Given freely available remote sensing datasets and computational tools that can feasibly process entire mission histories for an area of interest, enabling researchers to reliably identify and separate snow and clouds increases the usability of the data for hydrological and climatological studies.
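The evaluation measures named above follow the standard definitions from counts of true/false positives and negatives (the counts below are illustrative only):

```python
def precision(tp, fp):
    """Fraction of pixels labeled cloud that really are cloud."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of true cloud pixels the mask finds."""
    return tp / (tp + fn)

def f_score(tp, fp, fn, beta=1.0):
    """F statistic: weighted harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# e.g. a mask that labels 900 of 1000 true cloud pixels as cloud
# (100 missed) while also mislabeling 50 snow pixels as cloud:
print(round(precision(900, 50), 3),
      round(recall(900, 100), 2),
      round(f_score(900, 50, 100), 3))   # prints 0.947 0.9 0.923
```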
Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G
Performance of ECG beat detectors is traditionally assessed on long intervals (e.g.: 30min), but only incorrect detections within a short interval (e.g.: 10s) may cause incorrect (i.e., missed+false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on distribution of incorrect beat detection over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over long interval (sensitivity and positive predictive value over 30min) and short interval (Area Under empirical cumulative distribution function (AUecdf) for short interval (i.e., 10s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated on long interval (i.e., 30min) (ρ=-0.8 and p<0.05) and AUecdf for sensitivity (ρ=0.9 and p<0.05) in all assessed ECG databases. Sensitivity over 30min grouped the two detectors with lowest false alarm rates while AUecdf for sensitivity provided further information to identify the two beat detectors with highest false alarm rates as well, which was inseparable with sensitivity over 30min. Short interval performance metrics can provide insights on the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
Jones, Roger A C; Kehoe, Monica A
2016-07-01
Current approaches used to name within-species plant virus phylogenetic groups are often misleading and illogical. They involve names based on biological properties, sequence differences, and geographical, country or place-association designations, or any combination of these. This type of nomenclature is becoming increasingly unsustainable as the number of sequences of the same virus from new host species and different parts of the world increases. Moreover, this increase is accelerating as world trade and agriculture expand and climate change progresses. Serious consequences for virus research and disease management might arise from incorrect assumptions made when current within-species phylogenetic group names incorrectly identify properties of group members. This could result in the development of molecular tools that incorrectly target dangerous virus strains, potentially leading to unjustified impediments to international trade or failure to prevent such strains being introduced to countries, regions or continents formerly free of them. Dangerous strains might be missed or misdiagnosed by diagnostic laboratories and monitoring programs, and new cultivars might be released with incorrect strain-specific resistances. Incorrect deductions are possible during phylogenetic analysis of plant virus sequences, as are errors arising from strain misidentification during molecular and biological virus research. A nomenclature system for within-species plant virus phylogenetic group names is needed which avoids such problems. We suggest replacing all other naming approaches with Latinized numerals, restricting biologically based names to biological strains only, and removing geographically based names altogether. Our recommendations have implications for biosecurity authorities, diagnostic laboratories, disease-management programs, plant breeders and researchers.
Tembuyser, Lien; Ligtenberg, Marjolijn J L; Normanno, Nicola; Delen, Sofie; van Krieken, J Han; Dequeker, Elisabeth M C
2014-05-01
Precision medicine is now a key element in clinical oncology. RAS mutational status is a crucial predictor of responsiveness to anti-epidermal growth factor receptor agents in metastatic colorectal cancer. In an effort to guarantee high-quality testing services in molecular pathology, the European Society of Pathology has been organizing an annual KRAS external quality assessment program since 2009. In 2012, 10 formalin-fixed, paraffin-embedded samples, of which 8 from invasive metastatic colorectal cancer tissue and 2 artificial samples of cell line material, were sent to more than 100 laboratories from 26 countries with a request for routine KRAS testing. Both genotyping and clinical reports were assessed independently. Twenty-seven percent of the participants genotyped at least 1 of 10 samples incorrectly. In total, less than 5% of the distributed specimens were genotyped incorrectly. Genotyping errors consisted of false negatives, false positives, and incorrectly genotyped mutations. Twenty percent of the laboratories reported a technical error for one or more samples. A review of the written reports showed that several essential elements were missing, most notably a clinical interpretation of the test result, the method sensitivity, and the use of a reference sequence. External quality assessment serves as a valuable educational tool in assessing and improving molecular testing quality and is an important asset for monitoring quality assurance upon incorporation of new biomarkers in diagnostic services. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Tan, Andy S. L.; Bigman, Cabral A.; Henriksen, Lisa
2015-01-01
Objectives: To examine young adults' knowledge of e-cigarette constituents and regulation and its association with product use and self-reported exposure to marketing. Methods: Young adults (18–34 years, N = 1,247) from a U.S. web panel were surveyed in March 2014. Using multinomial logistic regressions, self-reported exposure to marketing was examined as a predictor of whether participants responded correctly (reference category), incorrectly, or "don't know" to four knowledge items: whether e-cigarettes contain nicotine, contain toxic chemicals, are regulated by government for safety, and are regulated for use as a cessation aid. Analyses adjusted for demographics and smoking status and were weighted to match the U.S. young adult population. Results: Most respondents did not know if e-cigarettes contain toxic chemicals (48%), are regulated for safety (61%), or are regulated as cessation aids (68%); fewer than 37% answered all of these items correctly. Current users of e-cigarettes (past 30 days) had a lower likelihood of being incorrect about safety testing (p = .006) and about regulation as a cessation aid (p = .017). Higher exposure to e-cigarette marketing was associated with a lower likelihood of responding "don't know" than being correct, and with a higher likelihood of being incorrect as opposed to correct about e-cigarettes containing nicotine. Conclusions: Knowledge about e-cigarette constituents and regulation was low among young adults, who are the largest consumer group for these products. Interventions, such as warning labels or information campaigns, may be necessary to educate and correct misinformation about these products. PMID:25542915
Potential effects of reward and loss avoidance in overweight adolescents
Reyes, Sussanne; Peirano, Patricio; Luna, Beatriz; Lozoff, Betsy; Algarín, Cecilia
2015-01-01
Background: The reward system and inhibitory control are brain functions that influence the regulation of eating behavior. We studied differences in inhibitory control and in sensitivity to reward and loss avoidance between overweight/obese and normal-weight adolescents. Methods: We assessed 51 overweight/obese and 52 normal-weight 15-y-old Chilean adolescents. The groups were similar regarding sex and intelligence quotient. Using antisaccade and incentive tasks, we evaluated inhibitory control and the effect of incentive trials (neutral, loss avoidance, and reward) on correct and incorrect responses (latency and error rate). Results: Compared to normal-weight participants, overweight/obese adolescents showed shorter latency for incorrect antisaccade responses (186.0 (95% CI: 176.8–195.2) vs. 201.3 ms (95% CI: 191.2–211.5), P < 0.05) and better performance, reflected in a lower error rate on incentive trials (43.6 (95% CI: 37.8–49.4) vs. 53.4% (95% CI: 46.8–60.0), P < 0.05). Overweight/obese adolescents were more accurate on loss avoidance (40.9 (95% CI: 33.5–47.7) vs. 49.8% (95% CI: 43.0–55.1), P < 0.05) and reward (41.0 (95% CI: 34.5–47.5) vs. 49.8% (95% CI: 43.0–55.1), P < 0.05) trials than on neutral trials. Conclusion: Overweight/obese adolescents showed shorter latency for incorrect responses and greater accuracy on reward and loss avoidance trials. These findings suggest that an imbalance between inhibition and reward systems may influence their eating behavior. PMID:25927543
NASA Astrophysics Data System (ADS)
Sośnica, Krzysztof; Prange, Lars; Kaźmierski, Kamil; Bury, Grzegorz; Drożdżewski, Mateusz; Zajdel, Radosław; Hadas, Tomasz
2018-02-01
The space segment of the European Global Navigation Satellite System (GNSS) Galileo consists of In-Orbit Validation (IOV) and Full Operational Capability (FOC) spacecraft. The first pair of FOC satellites was launched into an incorrect, highly eccentric orbital plane with a lower than nominal inclination angle. All Galileo satellites are equipped with satellite laser ranging (SLR) retroreflectors which allow, for example, for the assessment of the orbit quality or for the SLR-GNSS co-location in space. The number of SLR observations to Galileo satellites has been continuously increasing thanks to a series of intensive campaigns devoted to SLR tracking of GNSS satellites initiated by the International Laser Ranging Service. This paper assesses systematic effects and quality of Galileo orbits using SLR data with a main focus on Galileo satellites launched into incorrect orbits. We compare the SLR observations with respect to microwave-based Galileo orbits generated by the Center for Orbit Determination in Europe (CODE) in the framework of the International GNSS Service Multi-GNSS Experiment for the period 2014.0-2016.5. We analyze the SLR signature effect, which is characterized by the dependency of SLR residuals with respect to various incidence angles of laser beams for stations equipped with single-photon and multi-photon detectors. Surprisingly, the CODE orbit quality of satellites in the incorrect orbital planes is not worse than that of nominal FOC and IOV orbits. The RMS of SLR residuals is even lower by 5.0 and 1.5 mm for satellites in the incorrect orbital planes than for FOC and IOV satellites, respectively. The mean SLR offsets equal -44.9, -35.0, and -22.4 mm for IOV, FOC, and satellites in the incorrect orbital plane. 
Finally, we found that the empirical orbit models, which were originally designed for precise orbit determination of GNSS satellites in circular orbits, provide fully appropriate results also for highly eccentric orbits with variable linear and angular velocities.
NASA Astrophysics Data System (ADS)
Nuccitelli, Dana; Cowtan, Kevin; Jacobs, Peter; Richardson, Mark; Way, Robert G.; Blackburn, Anne-Marie; Stolpe, Martin B.; Cook, John
2014-04-01
Lu (2013) (L13) argued that solar effects and anthropogenic halogenated gases can explain most of the observed warming of global mean surface air temperatures since 1850, with virtually no contribution from atmospheric carbon dioxide (CO2) concentrations. Here we show that this conclusion is based on assumptions about the saturation of the CO2-induced greenhouse effect that have been experimentally falsified. L13 also confuses equilibrium and transient response, and relies on data sources that have been superseded due to known inaccuracies. Furthermore, the statistical approach of sequential linear regression artificially shifts variance onto the first predictor. L13's arbitrary choice of regression order and neglect of other relevant data is the fundamental cause of the incorrect main conclusion. Consideration of more modern data and a more parsimonious multiple regression model leads to contradiction with L13's statistical results. Finally, the correlation arguments in L13 are falsified by considering either the more appropriate metric of global heat accumulation, or data on longer timescales.
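The order-dependence of sequential linear regression criticized here is easy to demonstrate. The simulation below (illustrative, with synthetic data, not related to L13's actual series) shows that the variance shared by two correlated predictors is credited to whichever one is fitted first:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)
x1 = z + 0.3 * rng.normal(size=n)  # two strongly correlated predictors
x2 = z + 0.3 * rng.normal(size=n)
y = x1 + x2 + rng.normal(size=n)   # both contribute equally to y

def sequential_r2(y, first, second):
    """Variance share credited to each predictor when they are fitted
    one at a time, the second on the residuals of the first."""
    def residuals(target, x):
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return target - X @ beta
    r1 = residuals(y, first)    # first predictor keeps its full R^2
    r2 = residuals(r1, second)  # second one only gets the leftovers
    tot = y.var()
    return (tot - r1.var()) / tot, (r1.var() - r2.var()) / tot

a1, a2 = sequential_r2(y, x1, x2)  # x1 entered first: x1 looks dominant
b2, b1 = sequential_r2(y, x2, x1)  # x2 entered first: x2 looks dominant
```

With predictors this correlated, each one appears to "explain" most of the variance whenever it is entered first, which is exactly why regression order must not be allowed to drive a physical conclusion.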
Jegede, Kolawole; Whang, Peter; Grauer, Jonathan N
2011-07-06
Physician disclosure of potential conflicts of interest is currently controversial. To address this issue, orthopaedic societies have implemented a variety of guidelines related to potential conflict-of-interest disclosure. Transparency is crucial to address the concerns about potential conflict-of-interest disclosure. Nonetheless, prior studies have noted substantial discrepancies in disclosures to societies for individual authors who present their research work at multiple conferences. Our goal was to evaluate the ability of orthopaedic surgeons to interpret disclosure policy statements regarding project-specific or global disclosure instructions. The disclosure policy statements of the ten conferences most frequently attended by this group were collected, and selected statements were compiled into a questionnaire survey that was administered to orthopaedic faculty and trainees at our institution. Subjects were asked to read each statement and identify whether they interpreted the policy to be requesting project-specific disclosures (potential conflict of interest related to the research work in the abstract being submitted) or global disclosure (inclusive of all potential conflicts of interest, including those not associated with the abstract being submitted). The correct responses were identified by communicating with the individual societies and determining the responses desired by the society. The study had a 100% return rate from seventeen orthopaedic faculty, twenty-five orthopaedic residents and fellows, and twenty-five medical students. The average number of incorrect responses to the ten questions was 2.8. Forty-six percent of respondents had three or more incorrect responses, 24% had two incorrect responses, 19% had one incorrect response, and 10% had no incorrect responses. There was no significant difference in responses between those of different training levels. 
Subjects were no more likely to answer a project-specific question incorrectly than they were to answer a global question incorrectly. This study clearly demonstrated a discrepancy between what societies intend to identify with disclosure policies and what the orthopaedist interprets is intended. Almost half of those completing the survey did not correctly understand the intention of three or more of the policies, even with expected study intent bias. This study showed that the language used in disclosure policy statements and the lack of a uniform policy may be a cause of substantial discrepancies in potential conflict-of-interest disclosure.
Campbell, H.; Macdonald, S.; Richardson, P.
1997-01-01
OBJECTIVE: To pilot data collection instruments and to make a preliminary estimate of the level of incorrect use of car seat belts and child restraints in Fife, Scotland. DESIGN: Cross sectional survey of cars containing adults and children at a number of public sites across Fife in 1995 to assess use of car occupant restraints. Trained road safety officers assessed whether seat restraints were appropriate for the age of the passengers and whether restraints were used correctly. These assessments were based on standards published by the Child Accident Prevention Trust. PARTICIPANTS: The survey gathered data from 596 occupants in 180 cars: 327 adults and 269 children. Ten per cent of drivers who were approached refused to participate. Car occupant restraint was assessed in 180 drivers, 151 front seat passengers, and 265 rear seat passengers. MAIN RESULTS: Three hundred and sixty one occupants wore seat belts, 68 were restrained by a seat belt and booster cushion, 63 in toddler seats, 25 in two way seats, and 18 in rear facing infant carriers. Ninety seven per cent of drivers, 95% of front seat passengers, and 77% of rear seat passengers were restrained. However, in 98 (52%) vehicles at least one passenger was restrained by a device that was used incorrectly. Seven per cent of adults and 28% of children were secured incorrectly. The commonest errors were loose seat belts and restraint devices not adequately secured to the seat. Rates of incorrect use were highest in child seat restraints, reaching 60% with two way seats and 44% with rear facing infant seats. CONCLUSIONS: The incorrect use of car occupant restraints is a problem under-recognised by both health professionals and the general public. Incorrect use has been shown to reduce the effectiveness of restraints, can itself result in injury, and is likely to be an important factor in child passenger injuries. 
The correct use of car seat restraints merits greater attention in strategies aiming to reduce road traffic casualties. Areas of intervention that could be considered include raising public awareness of this problem, improving information and instruction given to those who purchase child restraints, and encouraging increased collaboration between manufacturers of cars and child restraints, in considering safety issues. PMID:9113842
Hypnotherapy: Fact or Fiction: A Review in Palliative Care and Opinions of Health Professionals
Desai, Geetha; Chaturvedi, Santosh K; Ramachandra, Srinivasa
2011-01-01
Context: Complementary medicine such as hypnotherapy is often used in pain and palliative care. Health professionals vary in their views about hypnotherapy, its utility and value, and their attitudes toward it. Aims: To understand the opinions of health professionals on hypnotherapy. Settings and Design: A semi-qualitative survey of the opinions of health professionals from various disciplines attending a programme on hypnotherapy was conducted. Materials and Methods: The survey form consisted of 32 statements about hypnosis and hypnotherapy. Participants were asked to indicate whether they agreed, disagreed, or were not sure about each statement. A qualitative feedback form was used to obtain further views about hypnotherapy. Statistical Analysis Used: Percentage, frequency distribution. Results: The sample consisted of 21 participants from various disciplines. More than two-thirds of the participants gave correct responses to statements on the dangerousness of hypnosis (90%), weak minds and hypnosis (86%), and hypnosis as therapy (81%). Participants gave incorrect responses about losing control in hypnosis (57%), hypnosis being sleep (62%), and becoming dependent on the hypnotist (62%). Participants were unsure whether a person who cannot hear the hypnotist is not hypnotized (43%), about gender and hypnosis (38%), and about hypnosis leading to the revealing of secrets (23%). Conclusions: Although patients use complementary medicine services, health professionals are often unaware of the issues associated with these services. These myths may interfere with using hypnotherapy as a therapeutic tool in palliative care. It is important for health professionals to have an appropriate, evidence-based understanding of complementary therapies, including hypnotherapy. PMID:21976856
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter for estimation of test accuracy and prevalence, assumes conditionally independent tests, constant accuracy across populations, and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g., a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and estimating prevalence from finite-sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously to obtain a 'joint' testing strategy with either higher overall sensitivity or higher specificity than either test considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
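The paper's core point, that finite-population (hypergeometric) sampling yields smaller prevalence uncertainty than the binomial model implies, can be illustrated with the classical finite-population correction. This is a frequentist sketch, not the authors' Bayesian machinery, and the function name is invented:

```python
import math

def prevalence_se(p, n, N=None):
    """Standard error of an observed prevalence p from a sample of n.

    With N = None, the binomial (infinite-population) formula is used;
    with a finite population of size N, the hypergeometric variance
    shrinks by the finite-population correction (N - n) / (N - 1).
    """
    var = p * (1 - p) / n
    if N is not None:
        var *= (N - n) / (N - 1)
    return math.sqrt(var)
```

Sampling 50 of 60 animals in a herd (n = 50, N = 60) leaves little of the population unsampled, so the binomial model substantially overstates the prevalence uncertainty, in line with the smaller posterior standard deviations the authors report.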
Accuracy of medical subject heading indexing of dental survival analyses.
Layton, Danielle M; Clarke, Michael
2014-01-01
To assess the Medical Subject Headings (MeSH) indexing of articles that employed time-to-event analyses to report outcomes of dental treatment in patients. Articles published in 2008 in the 50 dental journals with the highest impact factors were hand searched to identify articles reporting dental treatment outcomes over time in human subjects with time-to-event statistics (included, n = 95), without time-to-event statistics (active controls, n = 91), and all other articles (passive controls, n = 6,769). The search was systematic (kappa 0.92 for screening, 0.86 for eligibility). Outcome-, statistic-, and time-related MeSH were identified, and differences in allocation between groups were analyzed with chi-square and Fisher exact statistics. The most frequently allocated MeSH for included and active control articles were "dental restoration failure" (77% and 52%, respectively) and "treatment outcome" (54% and 48%, respectively). Allocation of outcome MeSH was similar between these groups (86% and 77%, respectively) and significantly greater than for passive controls (10%, P < .001). Significantly more statistical MeSH were allocated to the included articles than to the active or passive controls (67%, 15%, and 1%, respectively, P < .001). Sixty-nine included articles specifically used Kaplan-Meier or life table analyses, but only 42% (n = 29) were indexed as such. Significantly more time-related MeSH were allocated to the included articles than to the active controls (92% and 79%, respectively, P = .02) or to the passive controls (22%, P < .001). MeSH allocation within MEDLINE to time-to-event dental articles was inaccurate and inconsistent. Statistical MeSH were omitted from 30% of the included articles and incorrectly allocated to 15% of active controls. Such errors adversely impact search accuracy.
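The kappa statistic used above to quantify screening agreement can be computed directly from two reviewers' decisions; a minimal sketch (the function name is illustrative, not from the paper):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance given each rater's label frequencies."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa of 0.92 for screening, as reported here, indicates near-perfect agreement beyond what the reviewers' label frequencies alone would produce.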
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing an equal fraction of the total particle density). If that algorithm is applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect value compared with theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and reproduces the relaxation time scales predicted by theory. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
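The momentum- and energy-conserving pair scatter at the heart of such binary collision algorithms can be sketched in 2-D (an illustrative sketch only; the weighted-particle pairing statistics that are the paper's actual contribution are not reproduced here):

```python
import math

def scatter_pair(v1, v2, m1, m2, theta):
    """Elastic binary collision: rotate the relative velocity by angle
    `theta` about the centre of mass. Momentum is conserved exactly,
    and because only the direction of the relative velocity changes,
    kinetic energy is conserved as well."""
    M = m1 + m2
    vcm = [(m1 * a + m2 * b) / M for a, b in zip(v1, v2)]
    vrel = [a - b for a, b in zip(v1, v2)]
    c, s = math.cos(theta), math.sin(theta)
    vrel = [c * vrel[0] - s * vrel[1], s * vrel[0] + c * vrel[1]]
    v1_new = [vc + (m2 / M) * vr for vc, vr in zip(vcm, vrel)]
    v2_new = [vc - (m1 / M) * vr for vc, vr in zip(vcm, vrel)]
    return v1_new, v2_new
```

In a full Coulomb collision step, theta would be drawn from a Takizuka-Abe-style scattering-angle distribution for the pair; for weighted particles, the pairing statistics determine which particles collide and how often so that temperatures relax on the correct time scales.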
NASA Astrophysics Data System (ADS)
Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; König, A.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rad, N.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Strauss, J.; Waltenberger, W.; Wulz, C.-E.; Chekhovsky, V.; Dvornikov, O.; Dydyshka, Y.; Emeliantchik, I.; Litomin, A.; Makarenko, V.; Mossolov, V.; Stefanovitch, R.; Suarez Gonzalez, J.; Zykunov, V.; Shumeiko, N.; Alderweireldt, S.; De Wolf, E. A.; Janssen, X.; Lauwers, J.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; De Bruyn, I.; Deroover, K.; Lowette, S.; Moortgat, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Skovpen, K.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Brun, H.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Luetic, J.; Maerschalk, T.; Marinov, A.; Randle-conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Vannerom, D.; Yonamine, R.; Zenoni, F.; Zhang, F.; Cimmino, A.; Cornelis, T.; Dobur, D.; Fagot, A.; Gul, M.; Khvastunov, I.; Poyraz, D.; Salva, S.; Schöfbeck, R.; Tytgat, M.; Van Driessche, W.; Yazgan, E.; Zaganidis, N.; Bakhshiansohi, H.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Jafari, A.; Komm, M.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Nuttens, C.; Piotrzkowski, K.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Wertz, S.; Beliy, N.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; Da Silveira, G. 
G.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Cheng, T.; Jiang, C. H.; Leggat, D.; Liu, Z.; Romeo, F.; Ruan, M.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Zhao, J.; Ban, Y.; Chen, G.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Mesic, B.; Micanovic, S.; Sudic, L.; Susa, T.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Tsiakkouri, D.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Abdelalim, A. A.; Mohamed, A.; Mohamed, A.; Kadastik, M.; Perrini, L.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Järvinen, T.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. 
L.; Favaro, C.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Kucher, I.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Abdulsalam, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Davignon, O.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Miné, P.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Le Bihan, A.-C.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sabes, D.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Feld, L.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Preuten, M.; Schomakers, C.; Schulz, J.; Verlage, T.; Albert, A.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hamer, M.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Olschewski, M.; Padeken, K.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Flügge, G.; Kargoll, B.; Kress, T.; Künsken, A.; Lingemann, J.; Müller, T.; Nehrkorn, A.; Nowack, A.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Arndt, T.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Bin Anuar, A. 
A.; Borras, K.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Grados Luyando, J. M.; Grohsjean, A.; Gunnellini, P.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Ntomari, E.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Seitz, C.; Spannagel, S.; Stefaniuk, N.; Van Onsem, G. P.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hoffmann, M.; Junkes, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Niedziela, M.; Nowatschin, D.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Poehlsen, J.; Sander, C.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Stadie, H.; Steinbrück, G.; Stober, F. M.; Stöver, M.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baur, S.; Baus, C.; Berger, J.; Butz, E.; Caspart, R.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Fink, S.; Freund, B.; Friese, R.; Giffels, M.; Gilbert, A.; Goldenzweig, P.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Katkov, I.; Kudella, S.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. 
A.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Filipovic, N.; Bencze, G.; Hajdu, C.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Bahinipati, S.; Choudhury, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Kumari, P.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Malhotra, S.; Naimuddin, M.; Nishu, N.; Ranjan, K.; Sharma, R.; Sharma, V.; Bhattacharya, R.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Kole, G.; Mahakud, B.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sur, N.; Sutar, B.; Banerjee, S.; Bhowmik, S.; Dewanjee, R. K.; Ganguly, S.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Pandey, S.; Rane, A.; Sharma, S.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. 
M.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Albergo, S.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Brivio, F.; Ciriolo, V.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; De Nardo, G.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Fienga, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Gasparini, U.; Gozzelino, A.; Gulmini, M.; Lacaprara, S.; Margoni, M.; Maron, G.; Meneguzzo, A. 
T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Ventura, S.; Zanetti, M.; Zotto, P.; Zumerle, G.; Braghieri, A.; Fallavollita, F.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; SavoyNavarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; Del Re, D.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Marzocchi, B.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Cenna, F.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Monteno, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Shchelina, K.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, S.; Lee, S. W.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Lee, A.; Kim, H.; Brochero Cifuentes, J. A.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Lee, H.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. h.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. 
S.; Choi, Y.; Goh, J.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Komaragiri, J. R.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Zolkapli, Z.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Magaña Villalba, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Carpinteyro, S.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. A.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Calpas, B.; Di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. 
V.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Afanasiev, S.; Alexakhin, V.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Karjavin, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Palichik, V.; Perelygin, V.; Savina, M.; Shmatov, S.; Skatchkov, N.; Smirnov, V.; Voytishin, N.; Zarubin, A.; Chtchipounov, L.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Murzin, V.; Oreshkin, V.; Sulimov, V.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Toms, M.; Vlasov, E.; Zhokin, A.; Bylinkin, A.; Chadeeva, M.; Markin, O.; Rusinov, V.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Blinov, V.; Skovpen, Y.; Shtol, D.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Barrio Luna, M.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. 
R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Suárez Andrés, I.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Castiñeiras De Saa, J. R.; Curras, E.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; Chen, Y.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Di Marco, E.; Dobson, M.; Dorney, B.; du Pree, T.; Duggan, D.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Everaerts, P.; Fartoukh, S.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gill, K.; Girone, M.; Glege, F.; Gulhan, D.; Gundacker, S.; Guthoff, M.; Hammer, J.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kieseler, J.; Kirschenmann, H.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Kousouris, K.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Milenovic, P.; Moortgat, F.; Morovic, S.; Mulders, M.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Sakulin, H.; Sauvan, J. B.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Sphicas, P.; Steggemann, J.; Stoye, M.; Takahashi, Y.; Tosi, M.; Treille, D.; Triossi, A.; Tsirou, A.; Veckalns, V.; Veres, G. I.; Verweij, M.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. 
C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lecomte, P.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Schönenberger, M.; Starodumov, A.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. F.; De Cosa, A.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Ngadiuba, J.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Yang, Y.; Zucchetta, A.; Candelise, V.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chang, Y. H.; Chao, Y.; Chen, K. F.; Chen, P. H.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Paganis, E.; Psallidas, A.; Tsai, J. f.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Bakirci, M. N.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Tali, B.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. 
M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Burton, D.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Dunne, P.; Elwood, A.; Futyan, D.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Nash, J.; Nikitenko, A.; Pela, J.; Penning, B.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Seez, C.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Cutts, D.; Garabedian, A.; Hakala, J.; Heintz, U.; Hogan, J. M.; Jesus, O.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Spencer, E.; Syarif, R.; Breedon, R.; Burns, D.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Bravo, C.; Cousins, R.; Dasgupta, A.; Florent, A.; Hauser, J.; Ignatenko, M.; Mccoll, N.; Saltzberg, D.; Schnaible, C.; Valuev, V.; Weber, M.; Bouvier, E.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Shrinivas, A.; Si, W.; Wei, H.; Wimpenny, S.; Yates, B. 
R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Holzner, A.; Klein, D.; Krutelyov, V.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Amin, N.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Franco Sevilla, M.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Heller, R.; Incandela, J.; Mullin, S. D.; Ovcharova, A.; Qu, H.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Bendavid, J.; Bornheim, A.; Bunn, J.; Duarte, J.; Lawhorn, J. M.; Mott, A.; Newman, H. B.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Weinberg, M.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Winn, D.; Abdullin, S.; Albrow, M.; Apollinari, G.; Apresyan, A.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Cremonesi, M.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hare, D.; Harris, R. M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. 
J.; Spiegel, L.; Stoynev, S.; Strait, J.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Wu, Y.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Field, R. D.; Furic, I. K.; Konigsberg, J.; Korytov, A.; Low, J. F.; Ma, P.; Matchev, K.; Mei, H.; Mitselmakher, G.; Rank, D.; Shchutska, L.; Sperka, D.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, T.; Askew, A.; Bein, S.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Prosper, H.; Santra, A.; Yohay, R.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Jung, K.; Sandoval Gonzalez, I. D.; Varelas, N.; Wang, H.; Wu, Z.; Zakaria, M.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Osherson, M.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; Xin, Y.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Castle, J.; Forthomme, L.; Kenny, R. P.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Sanders, S.; Stringer, R.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. 
G.; Kolberg, T.; Kunkle, J.; Lu, Y.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Apyan, A.; Azzolini, V.; Barbieri, R.; Baty, A.; Bi, R.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; D'Alfonso, M.; Demiragli, Z.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Krajczar, K.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Maier, B.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Tatar, K.; Varma, M.; Velicanu, D.; Veverka, J.; Wang, J.; Wang, T. W.; Wyslouch, B.; Yang, M.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bartek, R.; Bloom, K.; Claes, D. R.; Dominguez, A.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Malta Rodrigues, A.; Meier, F.; Monroy, J.; Siado, J. E.; Snow, G. R.; Stieger, B.; Alyari, M.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Kharchilava, A.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Bhattacharya, S.; Charaf, O.; Hahn, K. A.; Kumar, A.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Wayne, M.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Hughes, R.; Ji, W.; Liu, B.; Luo, W.; Puigh, D.; Winer, B. 
L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Lange, D.; Luo, J.; Marlow, D.; Medvedeva, T.; Mei, K.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Svyatkovskiy, A.; Tully, C.; Malik, S.; Barker, A.; Barnes, V. E.; Folgueras, S.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Khatiwada, A.; Miller, D. H.; Neumeister, N.; Schulte, J. F.; Shi, X.; Sun, J.; Wang, F.; Xie, W.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. t.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Agapitos, A.; Chou, J. P.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Nash, K.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Delannoy, A. G.; Foerster, M.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Juska, E.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; De Guio, F.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Sturdy, J.; Belknap, D. 
A.; Buchanan, J.; Caillol, C.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.
2018-01-01
In the original paper, figure 10 was incorrect. The correct figure is shown below. Additionally, the unparticle entries in table 3, as well as figures 4 and 5, were labelled with incorrect values of Λ_U.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Earnings reported without a social security number or with an incorrect employee name or social security number. 422.120 Section 422.120 Employees' Benefits SOCIAL SECURITY ADMINISTRATION ORGANIZATION AND PROCEDURES General Procedures § 422.120 Earnings...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-17
... materials for the Columbia River Crossing. The document contained an incorrect phone number for the Columbia... Columbia River Crossing.'' (78 FR 26380). Mistakenly, the phone number for the person listed in the FOR FURTHER INFORMATION CONTACT section was incorrect. The correct phone number for Gary Greene, Columbia...
Author Correction: Uplift of the central transantarctic mountains.
Wannamaker, Phil; Hill, Graham; Stodt, John; Maris, Virginie; Ogawa, Yasuo; Selway, Kate; Boren, Goran; Bertrand, Edward; Uhlmann, Daniel; Ayling, Bridget; Green, A Marie; Feucht, Daniel
2018-02-16
The original version of this Article incorrectly referenced the Figures in the Supplementary Information. References in the main Article to Supplementary Figure 7 through to Supplementary Figure 20 were previously incorrectly cited as Supplementary Figure 5 through to Supplementary Figure 18, respectively. This has now been corrected in both the PDF and HTML versions of the Article.
Uncovering Students' Incorrect Ideas about Foundational Concepts for Biochemistry
ERIC Educational Resources Information Center
Villafane, Sachel M.; Loertscher, Jennifer; Minderhout, Vicky; Lewis, Jennifer E.
2011-01-01
This paper presents preliminary data on how an assessment instrument with a unique structure can be used to identify common incorrect ideas from prior coursework at the beginning of a biochemistry course, and to determine whether these ideas have changed by the end of the course. The twenty-one multiple-choice items address seven different…
Fragile Associations Coexist with Robust Memories for Precise Details in Long-Term Memory
ERIC Educational Resources Information Center
Lew, Timothy F.; Pashler, Harold E.; Vul, Edward
2016-01-01
What happens to memories as we forget? They might gradually lose fidelity, lose their associations (and thus be retrieved in response to the incorrect cues), or be completely lost. Typical long-term memory studies assess memory as a binary outcome (correct/incorrect), and cannot distinguish these different kinds of forgetting. Here we assess…
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
Automated Scoring of an Interactive Geometry Item: A Proof-of-Concept
ERIC Educational Resources Information Center
Masters, Jessica
2010-01-01
An online interactive geometry item was developed to explore students' abilities to create prototypical and "tilted" rectangles out of line segments. The item was administered to 1,002 students. The responses to the item were hand-coded as correct, incorrect, or incorrect with possible evidence of a misconception. A variation of the nearest…
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Earnings reported without a social security number or with an incorrect employee name or social security number. 422.120 Section 422.120 Employees' Benefits SOCIAL SECURITY ADMINISTRATION ORGANIZATION AND PROCEDURES General Procedures § 422.120 Earnings...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Earnings reported without a social security number or with an incorrect employee name or social security number. 422.120 Section 422.120 Employees' Benefits SOCIAL SECURITY ADMINISTRATION ORGANIZATION AND PROCEDURES General Procedures § 422.120 Earnings...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Earnings reported without a social security number or with an incorrect employee name or social security number. 422.120 Section 422.120 Employees' Benefits SOCIAL SECURITY ADMINISTRATION ORGANIZATION AND PROCEDURES General Procedures § 422.120 Earnings...
ERIC Educational Resources Information Center
Booth, Julie L.; Lange, Karin E.; Koedinger, Kenneth R.; Newton, Kristie J.
2013-01-01
In a series of two "in vivo" experiments, we examine whether correct and incorrect examples with prompts for self-explanation can be effective for improving students' conceptual understanding and procedural skill in Algebra when combined with guided practice. In Experiment 1, students working with the Algebra I Cognitive Tutor were randomly…
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Earnings reported without a social security number or with an incorrect employee name or social security number. 422.120 Section 422.120 Employees' Benefits SOCIAL SECURITY ADMINISTRATION ORGANIZATION AND PROCEDURES General Procedures § 422.120 Earnings...
30 CFR 1218.40 - Assessments for incorrect or late reports and failure to report.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Assessments for incorrect or late reports and failure to report. 1218.40 Section 1218.40 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE COLLECTION OF ROYALTIES, RENTALS, BONUSES, AND OTHER MONIES DUE THE FEDERAL GOVERNMENT General...
30 CFR 1218.40 - Assessments for incorrect or late reports and failure to report.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Assessments for incorrect or late reports and failure to report. 1218.40 Section 1218.40 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE COLLECTION OF ROYALTIES, RENTALS, BONUSES, AND OTHER MONIES DUE THE FEDERAL GOVERNMENT General...
30 CFR 1218.40 - Assessments for incorrect or late reports and failure to report.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Assessments for incorrect or late reports and failure to report. 1218.40 Section 1218.40 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR Natural Resources Revenue COLLECTION OF MONIES AND PROVISION FOR GEOTHERMAL CREDITS AND INCENTIVES General...
30 CFR 218.40 - Assessments for incorrect or late reports and failure to report.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Assessments for incorrect or late reports and failure to report. 218.40 Section 218.40 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT COLLECTION OF MONIES AND PROVISION FOR GEOTHERMAL CREDITS AND INCENTIVES General Provisions § 218.40...
30 CFR 1218.40 - Assessments for incorrect or late reports and failure to report.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Assessments for incorrect or late reports and failure to report. 1218.40 Section 1218.40 Mineral Resources OFFICE OF NATURAL RESOURCES REVENUE, DEPARTMENT OF THE INTERIOR NATURAL RESOURCES REVENUE COLLECTION OF ROYALTIES, RENTALS, BONUSES, AND OTHER MONIES DUE THE FEDERAL GOVERNMENT General...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... program. The heading for this rule displayed a RIN number of 2506-AC29, which was incorrect. RIN number.... ACTION: Interim rule; correction. SUMMARY: The document advises that the interim rule for the Emergency Solutions Grants program, published on December 5, 2011, displayed an incorrect RIN number. This document...
Feigl, Guenther C; Hiergeist, Wolfgang; Fellner, Claudia; Schebesch, Karl-Michael M; Doenitz, Christian; Finkenzeller, Thomas; Brawanski, Alexander; Schlaier, Juergen
2014-01-01
Diffusion tensor imaging (DTI)-based tractography has become an integral part of preoperative diagnostic imaging in many neurosurgical centers, and other nonsurgical specialties depend increasingly on DTI tractography as a diagnostic tool. The aim of this study was to analyze the anatomic accuracy of visualized white matter fiber pathways using different, readily available DTI tractography software programs. Magnetic resonance imaging scans of the head of 20 healthy volunteers were acquired using a Siemens Symphony TIM 1.5T scanner and a 12-channel head array coil. The standard settings of the scans in this study were 12 diffusion directions and 5-mm slices. The fornices were chosen as an anatomic structure for the comparative fiber tracking. Identical data sets were loaded into nine different fiber tracking packages that used different algorithms. The nine software packages and algorithms used were NeuroQLab (modified tensor deflection [TEND] algorithm), Sörensen DTI task card (modified streamline tracking technique algorithm), Siemens DTI module (modified fourth-order Runge-Kutta algorithm), Trackvis with six different algorithm options (interpolated streamline algorithm, modified FACT algorithm, second-order Runge-Kutta algorithm, Q-ball [FACT algorithm], tensorline algorithm, Q-ball [second-order Runge-Kutta algorithm]), DTI Query (modified streamline tracking technique algorithm), Medinria (modified TEND algorithm), Brainvoyager (modified TEND algorithm), DTI Studio (modified FACT algorithm), and the BrainLab DTI module (modified Runge-Kutta algorithm). Three examiners evaluated the images: a neuroradiologist, a magnetic resonance imaging physicist, and a neurosurgeon. They were double-blinded with respect to the test subject and the fiber tracking software used in the presented images. Each examiner evaluated 301 images. 
The examiners were instructed to evaluate screenshots from the different programs based on two main criteria: (i) anatomic accuracy of the course of the displayed fibers and (ii) number of fibers displayed outside the anatomic boundaries. The mean overall grade for anatomic accuracy was 2.2 (range, 1.1-3.6) with a standard deviation (SD) of 0.9. The mean overall grade for incorrectly displayed fibers was 2.5 (range, 1.6-3.5) with an SD of 0.6. The mean grade of the overall program ranking was 2.3 with an SD of 0.6. The overall mean grade of the program ranked number one (NeuroQLab) was 1.7 (range, 1.5-2.8). The mean overall grade of the program ranked last (BrainLab iPlan Cranial 2.6 DTI Module) was 3.3 (range, 1.7-4). The difference between the mean grades of these two programs was statistically highly significant (P < 0.0001). There was no statistically significant difference between the programs ranked 1-3: NeuroQLab, Sörensen DTI Task Card, and Siemens DTI module. The results of this study show that there is a statistically significant difference in the anatomic accuracy of the tested DTI fiber tracking programs. Whereas incorrectly displayed fibers could lead to wrong conclusions in the neurosciences, which rely heavily on this noninvasive imaging technique, in neurosurgery they could lead to surgical decisions potentially harmful for the patient if used without intraoperative cortical stimulation. DTI fiber tracking is a valuable noninvasive preoperative imaging tool, but it requires further validation and, importantly, standardization of the currently available acquisition and processing techniques. Copyright © 2014 Elsevier Inc. All rights reserved.
Quantified Uncertainties in Comparative Life Cycle Assessment: What Can Be Concluded?
2018-01-01
Interpretation of comparative Life Cycle Assessment (LCA) results can be challenging in the presence of uncertainty. To aid in interpreting such results under the goal of any comparative LCA, we aim to provide guidance to practitioners by gaining insights into uncertainty-statistics methods (USMs). We review five USMs—discernibility analysis, impact category relevance, overlap area of probability distributions, null hypothesis significance testing (NHST), and modified NHST—and provide a common notation, terminology, and calculation platform. We further cross-compare all USMs by applying them to a case study on electric cars. USMs belong to either the confirmatory or the exploratory branch of statistics, each serving different purposes for practitioners. Results highlight that common uncertainties and the magnitude of differences per impact are key in offering reliable insights. Common uncertainties are particularly important, as disregarding them can lead to incorrect recommendations. On the basis of these considerations, we recommend the modified NHST as a confirmatory USM. We also recommend discernibility analysis as an exploratory USM, along with recommendations for its improvement, as it disregards the magnitude of the differences. While further research is necessary to support our conclusions, the results and supporting material provided can help LCA practitioners in delivering a more robust basis for decision-making. PMID:29406730
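The discernibility analysis reviewed in this abstract can be sketched as a paired Monte Carlo comparison. The following is an illustrative toy example, not the study's calculation platform: the impact means, spreads, and the shared background factor are all hypothetical.

```python
import random

def discernibility(sample_a, sample_b):
    """Fraction of paired Monte Carlo runs in which alternative A
    scores lower (better) than alternative B."""
    wins = sum(1 for a, b in zip(sample_a, sample_b) if a < b)
    return wins / len(sample_a)

random.seed(0)
n = 10_000

# Shared ("common") uncertainty: both alternatives reuse the same draw of a
# background factor in each run, so it cancels in the comparison.
background = [random.gauss(1.0, 0.2) for _ in range(n)]
impact_a = [bg * random.gauss(10.0, 1.0) for bg in background]
impact_b = [bg * random.gauss(11.0, 1.0) for bg in background]

share_a_better = discernibility(impact_a, impact_b)
print(f"A scores better than B in {share_a_better:.1%} of runs")
```

Because both alternatives reuse the same background draw per run, the shared uncertainty cancels instead of inflating the apparent overlap, which is why disregarding common uncertainties can flip a recommendation.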
Northrup, Joseph M.; Hooten, Mevin B.; Anderson, Charles R.; Wittemyer, George
2013-01-01
Habitat selection is a fundamental aspect of animal ecology, the understanding of which is critical to management and conservation. Global positioning system data from animals allow fine-scale assessments of habitat selection and typically are analyzed in a use-availability framework, whereby animal locations are contrasted with random locations (the availability sample). Although most use-availability methods are in fact spatial point process models, they often are fit using logistic regression. This framework offers numerous methodological challenges, for which the literature provides little guidance. Specifically, the size and spatial extent of the availability sample influence coefficient estimates, potentially causing interpretational bias. We examined the influence of availability on statistical inference through simulations and analysis of serially correlated mule deer GPS data. Bias in estimates arose from incorrectly assessing and sampling the spatial extent of availability. Spatial autocorrelation in covariates, which is common for landscape characteristics, exacerbated the error in availability sampling, leading to increased bias. These results have strong implications for habitat selection analyses using GPS data, which are increasingly prevalent in the literature. We recommend researchers assess the sensitivity of their results to their availability sample and, where bias is likely, take care with interpretations and use cross-validation to assess robustness.
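The sensitivity of use-availability estimates to the availability sample can be illustrated with a toy simulation. This is a sketch under assumed values (one binary habitat covariate, hypothetical selection strength and habitat proportions), not the mule deer analysis itself.

```python
import math
import random

random.seed(3)
beta_true = 1.0   # assumed true selection strength for habitat type 1
n = 100_000

def logit_coef(used, avail):
    """For a single binary covariate, the use-availability logistic-regression
    slope reduces to the log odds ratio of used vs. available points."""
    u1, u0 = used.count(1), used.count(0)
    a1, a0 = avail.count(1), avail.count(0)
    return math.log((u1 / u0) / (a1 / a0))

# True availability domain: habitat type 1 covers 30% of locations.
avail = [1 if random.random() < 0.3 else 0 for _ in range(n)]
# Animal locations: points selected with weight exp(beta * habitat).
used = random.choices(avail,
                      weights=[math.exp(beta_true * h) for h in avail], k=n)

correct = logit_coef(used, avail)   # recovers ~beta_true

# Mis-specified extent: availability sampled from a larger area where
# habitat 1 covers only 10%, inflating the estimated selection strength.
wrong_avail = [1 if random.random() < 0.1 else 0 for _ in range(n)]
biased = logit_coef(used, wrong_avail)

print(f"correct extent: {correct:.2f}, wrong extent: {biased:.2f}")
```

With one binary covariate the logistic slope is just a log odds ratio, so no iterative fitting is needed; widening the availability extent into habitat-poor area inflates the apparent selection coefficient, mirroring the bias the abstract describes.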
Exploring the stability of ligand binding modes to proteins by molecular dynamics simulations.
Liu, Kai; Watanabe, Etsurou; Kokubo, Hironori
2017-02-01
The binding mode prediction is of great importance to structure-based drug design. The discrimination of various binding poses of ligand generated by docking is a great challenge not only to docking score functions but also to the relatively expensive free energy calculation methods. Here we systematically analyzed the stability of various ligand poses under molecular dynamics (MD) simulation. First, a data set of 120 complexes was built based on the typical physicochemical properties of drug-like ligands. Three potential binding poses (one correct pose and two decoys) were selected for each ligand from self-docking in addition to the experimental pose. Then, five independent MD simulations for each pose were performed with different initial velocities for the statistical analysis. Finally, the stabilities of ligand poses under MD were evaluated and compared with the native one from crystal structure. We found that about 94% of the native poses were maintained stable during the simulations, which suggests that MD simulations are accurate enough to judge most experimental binding poses as stable properly. Interestingly, incorrect decoy poses were maintained much less and 38-44% of decoys could be excluded just by performing equilibrium MD simulations, though 56-62% of decoys were stable. The computationally-heavy binding free energy calculation can be performed only for these survived poses.
Stackelroth, Jenny; Sinnott, Michael; Shaban, Ramon Z
2015-09-01
Existing research has consistently demonstrated poor compliance by health care workers with hand hygiene standards. This study examined the extent to which incorrect hand hygiene occurred as a result of the inability to easily distinguish between different hand hygiene solutions placed at washbasins. A direct observational method was employed using ceiling-mounted, motion-activated video camera surveillance in a tertiary referral emergency department in Australia. Data from a 24-hour period on day 10 of the recordings were collected into the Hand Hygiene-Technique Observation Tool based on Feldman's criteria as modified by Larson and Lusk. A total of 459 episodes of hand hygiene were recorded by 6 video cameras in the 24-hour period. The observed overall rate of error in this study was 6.2% (27 episodes). In addition, the overall rate of hesitation was 5.8% (26 episodes). There was no statistically significant difference in error rates between the 2 washbasin configurations. The amelioration of causes of error and hesitation by standardization of the appearance and relative positioning of hand hygiene solutions at washbasins may translate into improved hand hygiene behaviors. Placement of moisturizer at the washbasin may not be essential. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-01-01
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = −0.002, mean squared error = 0.025; PTDM: bias = −1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995–2008). PMID:27455963
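Immortal time bias as described in this abstract can be reproduced in a few lines of person-time bookkeeping. This is a toy sketch (hypothetical hazard, waiting time, and follow-up; not the BeAMS data, a Cox model, or the PTDM method).

```python
import random

random.seed(1)
n = 50_000
hazard = 0.1      # identical true hazard for everyone: the drug has NO effect
rx_wait = 2.0     # time from cohort entry until the first prescription
follow_up = 20.0

naive = {"exp_pt": 0.0, "exp_ev": 0, "unexp_pt": 0.0, "unexp_ev": 0}
timedep = {"exp_pt": 0.0, "exp_ev": 0, "unexp_pt": 0.0, "unexp_ev": 0}

for _ in range(n):
    t = random.expovariate(hazard)
    died = t <= follow_up
    t_end = min(t, follow_up)
    if t > rx_wait:                     # survived long enough to be "treated"
        # Naive ever-treated analysis: the immortal wait counts as exposed.
        naive["exp_pt"] += t_end
        naive["exp_ev"] += died
        # Time-dependent analysis: pre-prescription time is unexposed.
        timedep["unexp_pt"] += rx_wait
        timedep["exp_pt"] += t_end - rx_wait
        timedep["exp_ev"] += died
    else:                               # died before ever being treated
        for d in (naive, timedep):
            d["unexp_pt"] += t_end
            d["unexp_ev"] += died

def rate_ratio(d):
    return (d["exp_ev"] / d["exp_pt"]) / (d["unexp_ev"] / d["unexp_pt"])

print(f"naive RR: {rate_ratio(naive):.2f}  "
      f"time-dependent RR: {rate_ratio(timedep):.2f}")
```

The drug has no effect by construction, yet the naive ever-treated split yields a rate ratio far below 1 because the guaranteed-survival waiting period is credited to the exposed group; splitting person-time at the prescription date recovers a ratio near 1.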
Zakharova, Irina B; Lopasteyskaya, Yana A; Toporkov, Andrey V; Viktorov, Dmitry V
2018-01-01
Background: Burkholderia pseudomallei is a Gram-negative saprophytic soil bacterium that causes melioidosis, a potentially fatal disease endemic in wet tropical areas. The currently available biochemical identification systems can misidentify some strains of B. pseudomallei. The aim of the present study was to identify the biochemical features of B. pseudomallei, which can affect its correct identification by Vitek 2 system. Materials and Methods: The biochemical patterns of 40 B. pseudomallei strains were obtained using Vitek 2 GN cards. The average contribution of biochemical tests in overall dissimilarities between correctly and incorrectly identified strains was assessed using nonmetric multidimensional scaling. Results: It was found (R statistic of 0.836, P = 0.001) that a combination of negative N-acetyl galactosaminidase, β-N-acetyl glucosaminidase, phosphatase, and positive D-cellobiase (dCEL), tyrosine arylamidase (TyrA), and L-proline arylamidase (ProA) tests leads to low discrimination of B. pseudomallei, whereas a set of positive dCEL and negative N-acetyl galactosaminidase, TyrA, and ProA determines the wrong identification of B. pseudomallei as Burkholderia cepacia complex. Conclusion: The further expansion of the Vitek 2 identification keys is needed for correct identification of atypical or regionally distributed biochemical profiles of B. pseudomallei. PMID:29563716
Sauco, Sebastián; Gómez, Julio; Barboza, Francisco R.; Lercari, Diego; Defeo, Omar
2013-01-01
Environmental gradients and wastewater discharges produce aggregated effects on marine populations, obscuring the detection of human impact. Classical assessment methods do not include environmental effects in toxicity test designs, which could lead to incorrect conclusions. We proposed a modified Whole Effluent Toxicity test (mWET) that includes environmental gradients in addition to effluent dilutions, together with the application of Generalized Linear Mixed Models (GLMM) to assess and decouple those effects. We tested this approach, analyzing the lethal effects of wastewater on a marine sandy beach bivalve affected by an artificial canal freshwater discharge used for rice crop irrigation. To this end, we compared bivalve mortality between canal water dilutions (CWd) and salinity controls (SC: without canal water). CWd were prepared by diluting the water effluent (sampled during the pesticide application period) with artificial marine water. The salinity gradient was included in the design by achieving the same final salinities in both CWd and SC, allowing us to account for the effects of salinity by including this variable as a random factor in the GLMM. Our approach detected significantly higher mortalities in CWd, indicating potential toxic effects of the effluent discharge. mWET represents an improvement over the internationally standardized WET tests, since it considers environmental variability and uses appropriate statistical analyses. PMID:23755304
A combined long-range phasing and long haplotype imputation method to impute phase for SNP genotypes
2011-01-01
Background Knowing the phase of marker genotype data can be useful in genome-wide association studies, because it makes it possible to use analysis frameworks that account for identity by descent or parent of origin of alleles and it can lead to a large increase in data quantities via genotype or sequence imputation. Long-range phasing and haplotype library imputation constitute a fast and accurate method to impute phase for SNP data. Methods A long-range phasing and haplotype library imputation algorithm was developed. It combines information from surrogate parents and long haplotypes to resolve phase in a manner that is not dependent on the family structure of a dataset or on the presence of pedigree information. Results The algorithm performed well in both simulated and real livestock and human datasets in terms of both phasing accuracy and computation efficiency. The percentage of alleles that could be phased in both simulated and real datasets of varying size generally exceeded 98% while the percentage of alleles incorrectly phased in simulated data was generally less than 0.5%. The accuracy of phasing was affected by dataset size, with lower accuracy for dataset sizes less than 1000, but was not affected by effective population size, family data structure, presence or absence of pedigree information, and SNP density. The method was computationally fast. In comparison to a commonly used statistical method (fastPHASE), the current method made about 8% less phasing mistakes and ran about 26 times faster for a small dataset. For larger datasets, the differences in computational time are expected to be even greater. A computer program implementing these methods has been made available. Conclusions The algorithm and software developed in this study make feasible the routine phasing of high-density SNP chips in large datasets. PMID:21388557
A Matched Field Processing Framework for Coherent Detection Over Local and Regional Networks
2011-06-01
Northern Finland Seismological Network, FN) and to the University of Helsinki for data from the VRF and HEF stations (part of the Finnish seismograph ...shows the results of classification with the FK measurement . Most of the events are incorrectly assigned to one particular mine (K2 – Rasvumchorr...generalization of the single-phase matched field processing method that encodes the full structure of the entire wavefield? What would this
NASA Astrophysics Data System (ADS)
Goltz, Mark N.; Huang, Junqi
2014-12-01
We thank Sun (2014) for his comment on our paper, Goltz et al. (2009). The commenter basically makes two points: (1) equation (6) in Goltz et al. (2009) is incorrect, and (2) screen loss should be further considered as a source of error in the modified integral pump test (MIPT) experiment. We will address each of these points, below.
Mechanism-based strategies for protein thermostabilization.
Mozhaev, V V
1993-03-01
Strategies for stabilizing enzymes can be derived from a two-step model of irreversible inactivation that involves preliminary reversible unfolding, followed by an irreversible step. Reversible unfolding is best prevented by covalent immobilization, whereas methods such as covalent modification of amino acid residues or 'medium engineering' (by the addition of low-molecular-weight compounds) are effective against irreversible 'incorrect' refolding. Genetic modification of the protein sequence is the most effective approach for preventing chemical deterioration.
Ghazal, M; Albashaireh, Z S; Kern, M
2008-11-01
Restorations made on incorrectly mounted casts might require considerable intra-oral adjustments to correct the occlusion or might even necessitate a remake of the restoration. The aim of this study was to evaluate interocclusal recording materials for their ability to reproduce accurate vertical interocclusal relationships after a storage time of 1 and 48 h, respectively. A custom-made apparatus was used to simulate the maxilla and mandible. Eight interocclusal records were made in each of the following groups: G1: Aluwax (aluminium wax), G2: Beauty Pink wax (hydrocarbon wax compound), G3: Futar D, G4: Futar D Fast, G5: Futar Scan (G3-G5: vinyl polysiloxane), G6: Ramitec (polyether). The vertical discrepancies were measured by an inductive displacement transducer connected to a carrier frequency amplifier after storage of the records for two periods of 1 and 48 h. Two-way ANOVA was used for statistical analysis. The mean vertical discrepancies in µm (1/48 h) for G1 (31/35) and G2 (35/38) were statistically significantly higher than for the other groups G3 (8/10), G4 (11/12), G5 (6/8) and G6 (5/8) (P ≤ 0.05). There were no statistically significant differences between the elastomers tested. The effect of storage on the vertical discrepancies was statistically significant (P < 0.001). Vinyl polysiloxane and polyether interocclusal records can be used to relate working casts during mounting procedures without significant vertical displacement of the casts.
Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.
Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A
2013-11-01
We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
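The idea of mapping posterior deviations onto deviations from uniformity can be illustrated with a conjugate Gaussian toy model. This is a sketch assuming a probability-integral-transform-style construction; the function names and the variance mis-scaling are illustrative, not the authors' diagnostic.

```python
import math
import random

random.seed(2)

def pit(s_true, x_obs, sigma_s=1.0, sigma_n=1.0, var_scale=1.0):
    """Posterior CDF evaluated at the true signal: uniform on [0, 1]
    iff the posterior is correct. var_scale != 1 mimics a posterior
    whose variance is mis-estimated."""
    post_var = 1.0 / (1.0 / sigma_s**2 + 1.0 / sigma_n**2)
    post_mean = post_var * x_obs / sigma_n**2
    z = (s_true - post_mean) / math.sqrt(var_scale * post_var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Gaussian CDF

correct, too_narrow = [], []
for _ in range(20_000):
    s = random.gauss(0.0, 1.0)        # signal drawn from the prior
    x = s + random.gauss(0.0, 1.0)    # noisy observation
    correct.append(pit(s, x))
    too_narrow.append(pit(s, x, var_scale=0.25))

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# Correct posterior: uniform PIT values, variance ~1/12. An overconfident
# posterior piles values near 0 and 1, producing excess variance.
print(f"{variance(correct):.3f} vs {variance(too_narrow):.3f}")
```

For the correct posterior the transformed values are uniform (variance ≈ 1/12), while an overconfident posterior concentrates them at the extremes, so excess variance flags the incorrect posterior variance, in the spirit of the four analytical examples in the abstract.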
Yeast identification: reassessment of assimilation tests as sole universal identifiers.
Spencer, J; Rawling, S; Stratford, M; Steels, H; Novodvorska, M; Archer, D B; Chandra, S
2011-11-01
To assess whether assimilation tests in isolation remain a valid method of identification of yeasts, when applied to a wide range of environmental and spoilage isolates. Seventy-one yeast strains were isolated from a soft drinks factory. These were identified using assimilation tests and by D1/D2 rDNA sequencing. When compared to sequencing, assimilation test identifications (MicroLog™) were 18.3% correct, a further 14.1% correct within the genus, and 67.6% incorrect. The majority of the latter could be attributed to the rise in newly reported yeast species. Assimilation tests alone are unreliable as a universal means of yeast identification, because of numerous new species, variability of strains and increasing coincidence of assimilation profiles. Assimilation tests still have a useful role in the identification of common species, such as the majority of clinical isolates. It is probable, based on these results, that many yeast identifications reported in older literature are incorrect. This emphasizes the crucial need for accurate identification in present and future publications. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
Rota, Paola; Anastasia, Luigi; Allevi, Pietro
2015-05-07
The current analytical protocol used for the GC-MS determination of free or 1,7-lactonized natural sialic acids (Sias), as heptafluorobutyrates, overlooks several transformations. Using authentic reference standards and by combining GC-MS and NMR analyses, flaws in the analytical protocol were pinpointed and elucidated, thus establishing the scope and limitations of the method. It was demonstrated that (a) Sias 1,7-lactones, even if present in biological samples, decompose under the acidic hydrolysis conditions used for their release; (b) Sias 1,7-lactones are unpredicted artifacts, accidentally generated from their parent acids; (c) the N-acetyl group is quantitatively exchanged with that of the derivatizing perfluorinated anhydride; (d) the partial or complete failure of the Sias esterification-step with diazomethane leads to the incorrect quantification and structure attribution of all free Sias. While these findings prompt an urgent correction and improvement of the current analytical protocol, they could be instrumental for a critical revision of many incorrect claims reported in the literature.
Brown, Jessica A.; Zhang, Likui; Sherrer, Shanen M.; Taylor, John-Stephen; Burgers, Peter M. J.; Suo, Zucai
2010-01-01
Understanding polymerase fidelity is an important objective towards ascertaining the overall stability of an organism's genome. Saccharomyces cerevisiae DNA polymerase η (yPolη), a Y-family DNA polymerase, is known to efficiently bypass DNA lesions (e.g., pyrimidine dimers) in vivo. Using pre-steady-state kinetic methods, we examined both full-length and a truncated version of yPolη which contains only the polymerase domain. In the absence of yPolη's C-terminal residues 514–632, the DNA binding affinity was weakened by 2-fold and the base substitution fidelity dropped by 3-fold. Thus, the C-terminus of yPolη may interact with DNA and slightly alter the conformation of the polymerase domain during catalysis. In general, yPolη discriminated between a correct and incorrect nucleotide more during the incorporation step (50-fold on average) than the ground-state binding step (18-fold on average). Blunt-end additions of dATP or pyrene nucleotide 5′-triphosphate revealed the importance of base stacking during the binding of incorrect incoming nucleotides. PMID:20798853
ERIC Educational Resources Information Center
Son, Jiseong; Kim, Jeong-Dong; Na, Hong-Seok; Baik, Doo-Kwon
2016-01-01
In this research, we propose a Social Learning Management System (SLMS) enabling real-time and reliable feedback for incorrect answers by learners using a social network service (SNS). The proposed system increases the accuracy of learners' assessment results by using a confidence scale and a variety of social feedback that is created and shared…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
... Comment 7: Sauce Surrogate Value Comment 8: Targeted Dumping Comment 9: Calculation of the Separate Rate... Applied an Incorrect Unit of Measure for Sauce Comment 16: Whether the Department Incorrectly Applied Minh...: Treatment of Sauce Comment 22: Marine Insurance [FR Doc. 2013-22228 Filed 9-11-13; 8:45 am] BILLING CODE...
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-12
... incorrect and an entry to the table under Directional antennas in Sec. 101.115(b)(2) is incorrect. This... antennas. * * * * * (b) * * * (2) * * * Antenna Standards Maximum Minimum radiation suppression to angle in... \\1\\ antenna (included gain (dBi) 5[deg] 10[deg] 15[deg] 20[deg] 30[deg] 100[deg] 140[deg] angle in to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... Film, Sheet, and Strip From Taiwan: Notice of Correction to the Final Results of the 2010-2011... film, sheet, and strip (PET Film) from Taiwan.\\1\\ The period of review covered July 1, 2010, through... an incorrect case number associated with PET Film from Taiwan (i.e., incorrect case number A-533-824...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-23
...The Department of State published a document in the Federal Register on August 19, 2013 concerning a U.S. Department of State Advisory Committee on Private International Law (ACPIL) Public Meeting on Arbitration, to take place on September 4, 2013. The document cited incorrect Web site addresses and an incorrect email address.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-26
... that notice we gave an incorrect contact phone number, which we now correct. Note that if you already... gave an incorrect contact phone number under FOR FURTHER INFORMATION CONTACT. We now supply the correct phone number, which is 503-231-2161. Note that if you already submitted a comment, you need not resubmit...
ERIC Educational Resources Information Center
Malone, Amelia Schneider; Loehr, Abbey M.; Fuchs, Lynn S.
2017-01-01
The purpose of the study was to determine whether individual differences in at-risk 4th graders' language comprehension, nonverbal reasoning, concept formation, working memory, and use of decimal labels (i.e., place value, point, incorrect place value, incorrect fraction, or whole number) are related to their decimal magnitude understanding.…
ERIC Educational Resources Information Center
Grunert, Megan L.; Raker, Jeffrey R.; Murphy, Kristen L.; Holme, Thomas A.
2013-01-01
The concept of assigning partial credit on multiple-choice test items is considered for items from ACS Exams. Because the items on these exams, particularly the quantitative items, use common student errors to define incorrect answers, it is possible to assign partial credits to some of these incorrect responses. To do so, however, it becomes…
Semiparametric time varying coefficient model for matched case-crossover studies.
Ortega-Villa, Ana Maria; Kim, Inyoung; Kim, H
2017-03-15
In matched case-crossover studies, it is generally accepted that the covariates on which a case and associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model. This is because any stratum effect is removed by the conditioning on the fixed number of sets of the case and controls in the stratum. Hence, the conditional logistic regression model is not able to detect any effects associated with the matching covariates by stratum. However, some matching covariates such as time often play an important role as effect modifiers, leading to incorrect statistical estimation and prediction. Therefore, we propose three approaches to evaluate effect modification by time. The first is a parametric approach, the second is a semiparametric penalized approach, and the third is a semiparametric Bayesian approach. Our parametric approach is a two-stage method, which uses conditional logistic regression in the first stage and then estimates polynomial regression in the second stage. Our semiparametric penalized and Bayesian approaches are one-stage approaches developed by using regression splines. Our semiparametric one-stage approach allows us to not only detect the parametric relationship between the predictor and binary outcomes, but also evaluate nonparametric relationships between the predictor and time. We demonstrate the advantage of our semiparametric one-stage approaches using both a simulation study and an epidemiological example of a 1-4 bi-directional case-crossover study of childhood aseptic meningitis with drinking water turbidity. We also provide statistical inference for the semiparametric Bayesian approach using Bayes Factors. Copyright © 2016 John Wiley & Sons, Ltd.
Mistakes in ultrasound examination of salivary glands
Jakubowski, Wiesław
2016-01-01
Ultrasonography is the first imaging method applied in the case of diseases of the salivary glands. The article discusses basic mistakes that can be made during an ultrasound examination of these structures. The reasons for these mistakes may be examiner-dependent or may be beyond the examiner's control. The latter may include, inter alia, difficult conditions during examination (technical or patient-related), similarity of ultrasound images in different diseases, and the lack of clinical and laboratory data as well as the lack of results of other examinations, or their insufficient number or incorrect results. Examiner-related mistakes include lack of knowledge of normal anatomy, of the characteristics of ultrasound images in various salivary gland diseases and of the statistical incidence of diseases, but also attaching excessive importance to such statistical data. The complex anatomical structures of the floor of the oral cavity may be mistaken for benign or malignant tumors. Fragments of normal anatomical structures (bones, arterial wall fibrosis, air bubbles in the mouth) can be wrongly interpreted as deposits in the salivary gland or in its excretory duct. Normal lymph nodes in the parotid glands may be mistaken for pathologic structures. Lesions that are not simple cysts, e.g. lymphoma, benign or malignant tumors of the salivary glands or metastatic lymph nodes, can be mistaken for one. The image of disseminated focal changes, both anechoic and solid, is not pathognomonic for specific diseases of the salivary glands. However, it is in part typical and requires an extended differential diagnosis. Small focal changes and infiltrative lesions pose a diagnostic problem because their etiology cannot be safely suggested on the basis of an ultrasound examination alone. The safest approach is to refer patients with abnormal focal changes for an ultrasound-guided fine-needle aspiration biopsy. PMID:27446603
Incorrect predictions reduce switch costs.
Kleinsorge, Thomas; Scheil, Juliane
2015-07-01
In three experiments, we combined two sources of conflict within a modified task-switching procedure. The first source of conflict was the one inherent in any task-switching situation, namely the conflict between a task set activated by the recent performance of another task and the task set needed to perform the actually relevant task. The second source of conflict was induced by requiring participants to guess aspects of the upcoming task (Exps. 1 & 2: task identity; Exp. 3: position of task precue). In the case of an incorrect guess, a conflict arises between the representation of the guessed task and the actually relevant task. In Experiments 1 and 2, incorrect guesses led to an overall increase of reaction times and error rates, but they reduced task switch costs compared to conditions in which participants predicted the correct task. In Experiment 3, incorrect guesses resulted in faster overall performance and a selective decrease of reaction times in task switch trials when the cue-target interval was long. We interpret these findings in terms of an enhanced level of controlled processing induced by a combination of two sources of conflict converging upon the same target of cognitive control. Copyright © 2015 Elsevier B.V. All rights reserved.
Postoperative hand therapy in Dupuytren's disease.
Herweijer, Hester; Dijkstra, Pieter U; Nicolai, Jean-Philippe A; Van der Sluis, Corry K
2007-11-30
Postoperative hand therapy in patients after surgery for Dupuytren's contracture is common medical practice to improve outcomes. Until now, patients have been referred for postoperative hand rehabilitation on an empirical basis. To evaluate whether referral criteria after surgery for Dupuytren's disease were actually adhered to, and to analyse differences in outcomes between patients who were referred according to the criteria (correctly referred) and those who were not referred but should have been (incorrectly not referred). Referral pattern was evaluated prospectively in 46 patients. Total active/passive range of joint motion (TAM/TPM), sensibility, pinch force, the Disabilities of the Arm, Shoulder and Hand questionnaire (DASH) and the Michigan Hand Outcomes Questionnaire (MHQ) were used as outcome measures preoperatively and 10 months postoperatively. In total, 21 patients were referred correctly and 17 patients were incorrectly not referred. Significant improvements on TAM/TPM, DASH and MHQ were found at follow-up for the total group. No differences in outcomes were found between patients correctly referred and patients incorrectly not referred for postoperative hand therapy. Referral criteria were not adhered to. Given the lack of differences in outcomes between patients correctly referred and patients incorrectly not referred, postoperative hand therapy in Dupuytren's disease should be reconsidered.
Research Waste: How Are Dental Survival Articles Indexed and Reported?
Layton, Danielle M; Clarke, Michael
2016-01-01
Research waste occurs when research is ignored, cannot be found, cannot be used, or is unintentionally repeated. This article aims to investigate how dental survival analyses were indexed and reported, and to discuss whether errors in indexing and writing articles are affecting identification and use of survival articles, contributing to research waste. Articles reporting survival of dental prostheses in humans (also known as time-to-event analyses) were identified by searching the 50 dental journals with the highest Impact Factor in 2008. These journals were hand searched twice (Kappa 0.92), and the articles were assessed by two independent reviewers (Kappa 0.86) to identify dental survival articles ("case" articles, n = 95), likely false positives (active controls, n = 91), and all other true negative articles (passive controls, n = 6,769); in effect, a case-control design. Once identified, the different groups of articles were assessed and compared. Allocation of medical subject headings (MeSH) by MEDLINE indexers that related to survival was sought, use of words by authors in the abstract and title that related to survival was identified, and use of words and figures by authors that related to survival in the articles themselves was also sought. Differences were assessed with chi-square and Fisher's exact statistics. Reporting quality was also assessed. The results were reviewed to discuss their potential impact on research waste. Allocation of survival-related MeSH index terms across the three article groups was inconsistent and inaccurate. Statistical MeSH had not been allocated to 30% of the dental survival "case" articles and had been incorrectly allocated to 15% of active controls. Additionally, information reported by authors in titles and abstracts varied, with only two-thirds of survival "case" articles mentioning survival statistics in the abstract.
In the articles themselves, time-to-event statistical methods, survival curves, and life tables were poorly reported or constructed. Overall, the low quality of indexing by indexers and reporting by authors means that these articles will not be readily identifiable through electronic searches, and, even if they are found, the poor reporting quality makes it unnecessarily difficult for readers to understand and use them. There are substantial problems with the reporting of time-to-event analyses in the dental literature. These problems will adversely impact how these articles can be found and used, thereby contributing to research waste. Changes are needed in the way that authors report these studies and the way indexers classify them.
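The group comparisons described above relied on chi-square and Fisher's exact statistics. A minimal sketch with SciPy follows; the 2x2 counts are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: survival-related MeSH allocated vs. not,
# for survival "case" articles vs. active controls (counts invented).
table = [[66, 29],   # case articles: MeSH present / absent
         [14, 77]]   # active controls: MeSH present / absent

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi2={chi2:.2f}, p={p_chi2:.2g}")
print(f"Fisher OR={odds_ratio:.2f}, p={p_fisher:.2g}")
```

Fisher's exact test is the usual fallback when expected cell counts are small; with counts this size the two tests agree on the direction and significance of the association.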
Erratum: "A Smaller Radius for the Transiting Exoplanet WASP-10b" (2009, ApJ, 692, L100)
NASA Astrophysics Data System (ADS)
Johnson, John Asher; Winn, Joshua N.; Cabrera, Nicole E.; Carter, Joshua A.
2010-03-01
We have identified an error in our Heliocentric Julian Dates (HJDs) of observation caused by incorrect input to the code used to convert from JD to HJD. The times in Table 1 have been corrected by adding 0.006382 day to each entry in the original Column 1. Similarly, the measured mid-transit time in Table 2 has been changed to Tc = 2454664.037295. We also note that the header in Column 1 of Table 1 is incorrect. The label should read HJD, rather than BJD. The updated Tables 1 and 2 have been included herein. This error has no impact on our main conclusions. We thank Pedro Valdes Sada and Gracjan Maciejewski for pointing out the incorrect mid-transit time.
Parks, David R; Roederer, Mario; Moore, Wayne A
2006-06-01
In immunofluorescence measurements and most other flow cytometry applications, fluorescence signals of interest can range down to essentially zero. After fluorescence compensation, some cell populations will have low means and include events with negative data values. Logarithmic presentation has been very useful in providing informative displays of wide-ranging flow cytometry data, but it fails to adequately display cell populations with low means and high variances and, in particular, offers no way to include negative data values. This has led to a great deal of difficulty in interpreting and understanding flow cytometry data, has often resulted in incorrect delineation of cell populations, and has led many people to question the correctness of compensation computations that were, in fact, correct. We identified a set of criteria for creating data visualization methods that accommodate the scaling difficulties presented by flow cytometry data. On the basis of these, we developed a new data visualization method that provides important advantages over linear or logarithmic scaling for display of flow cytometry data, a scaling we refer to as "Logicle" scaling. Logicle functions represent a particular generalization of the hyperbolic sine function with one more adjustable parameter than linear or logarithmic functions. Finally, we developed methods for objectively and automatically selecting an appropriate value for this parameter. The Logicle display method provides more complete, appropriate, and readily interpretable representations of data that includes populations with low-to-zero means, including distributions resulting from fluorescence compensation procedures, than can be produced using either logarithmic or linear displays. The method includes a specific algorithm for evaluating actual data distributions and deriving parameters of the Logicle scaling function appropriate for optimal display of that data. 
It is critical to note that Logicle visualization does not change the data values or the descriptive statistics computed from them. Copyright 2006 International Society for Analytical Cytology.
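The Logicle function is a generalization of the hyperbolic sine. A minimal sketch of the underlying idea using a plain inverse hyperbolic sine (not the published Logicle parametrization, which has additional adjustable parameters) shows how near-zero and negative compensated values stay displayable while large values compress logarithmically; the cofactor value is an arbitrary illustrative choice.

```python
import numpy as np

def arcsinh_scale(x, cofactor=150.0):
    """Inverse-hyperbolic-sine display transform: approximately linear
    near zero (so negative values remain representable) and
    approximately logarithmic for large magnitudes. The cofactor sets
    where the linear-to-log transition happens."""
    return np.arcsinh(np.asarray(x, dtype=float) / cofactor)

values = np.array([-500.0, -50.0, 0.0, 50.0, 5000.0, 200000.0])
scaled = arcsinh_scale(values)
# The transform is odd and monotonic: order is preserved, zero maps
# to zero, and -50 and +50 land symmetrically around the origin.
```

This is only the display intuition; the actual Logicle method additionally tunes its width parameter from the data distribution, as the abstract describes.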
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogenously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent, and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
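A minimal sketch of the central operation, removing leading principal components from GLM residuals, is below. The toy matrix sizes, the single planted shared component, and the removal of exactly one component are assumptions for illustration; the paper's decomposition and component-selection rules are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy first-level residual matrix (time x voxels) contaminated by one
# shared unmodeled component, loosely analogous to the structured
# noise targeted above.
T, V = 200, 50
shared = rng.standard_normal(T)                     # unmodeled signal
residuals = np.outer(shared, rng.standard_normal(V)) \
            + 0.1 * rng.standard_normal((T, V))     # white noise floor

# PCA via SVD of the column-centered residuals; subtract the top
# component and keep the remainder.
R = residuals - residuals.mean(axis=0)
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 1
cleaned = R - (U[:, :k] * s[:k]) @ Vt[:k, :]

# The dominant shared structure is gone: the leading singular value
# of the cleaned matrix is far smaller than before.
s_clean = np.linalg.svd(cleaned, compute_uv=False)
print(s[0], s_clean[0])
```

After removal, the residuals are close to isotropic noise, which is the property the cluster-inference corrections above depend on.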
NASA Astrophysics Data System (ADS)
Gerlich, Gerhard; Tscheuschner, Ralf D.
It is shown that the notorious claim by Halpern et al., recently repeated in their comment, that the method, logic, and conclusions of our "Falsification Of The CO2 Greenhouse Effects Within The Frame Of Physics" would be in error has no foundation. Since Halpern et al. communicate our arguments incorrectly, their comment is scientifically vacuous. In particular, it is not true that we are "trying to apply the Clausius statement of the Second Law of Thermodynamics to only one side of a heat transfer process rather than the entire process" and that we are "systematically ignoring most non-radiative heat flows applicable to Earth's surface and atmosphere". Rather, our falsification paper discusses the violation of fundamental physical and mathematical principles in 14 examples of common pseudo-derivations of fictitious greenhouse effects that are all based on simplistic pictures of radiative transfer and their obscure relation to thermodynamics, including but not limited to those descriptions (a) that define a "Perpetuum Mobile Of The 2nd Kind", (b) that rely on incorrectly calculated averages of global temperatures, (c) that refer to incorrectly normalized spectra of electromagnetic radiation. Halpern et al. completely missed an exceptional chance to formulate a scientifically well-founded antithesis. They do not even define a greenhouse effect that they wish to defend. We take the opportunity to clarify some misunderstandings, which are communicated in the current discussion on the non-measurable, i.e., physically non-existing influence of the trace gas CO2 on the climates of the Earth.
Pediatric Surgeon-Directed Wound Classification Improves Accuracy
Zens, Tiffany J.; Rusy, Deborah A.; Gosain, Ankush
2015-01-01
Background Surgical wound classification (SWC) communicates the degree of contamination in the surgical field and is used to stratify risk of surgical site infection and compare outcomes amongst centers. We hypothesized that changing from nurse-directed to surgeon-directed SWC during a structured operative debrief would improve accuracy of documentation. Methods An IRB-approved retrospective chart review was performed. Two time periods were defined: initially, SWC was determined and recorded by the circulating nurse (Pre-Debrief, 6/2012-5/2013); then, allowing six months for adoption and education, we implemented a structured operative debriefing including surgeon-directed SWC (Post-Debrief, 1/2014-8/2014). Accuracy of SWC was determined for four commonly performed Pediatric General Surgery operations: inguinal hernia repair (clean), gastrostomy +/− Nissen fundoplication (clean-contaminated), appendectomy without perforation (contaminated), and appendectomy with perforation (dirty). Results 183 cases Pre-Debrief and 142 cases Post-Debrief met inclusion criteria. No differences between time periods were noted with regard to patient demographics, ASA class, or case mix. Accuracy of wound classification improved Post-Debrief (42% vs. 58.5%, p=0.003). Pre-Debrief, 26.8% of cases were overestimated or underestimated by more than one wound class, vs. 3.5% of cases Post-Debrief (p<0.001). Interestingly, the majority of Post-Debrief contaminated cases were incorrectly classified as clean-contaminated. Conclusions Implementation of a structured operative debrief including surgeon-directed SWC improves the percentage of correctly classified wounds and decreases the degree of inaccuracy in incorrectly classified cases. However, following implementation of the debriefing, we still observed a 41.5% rate of incorrect documentation, most notably in contaminated cases, indicating further education and process improvement is needed. PMID:27020829
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Incorrect estimations are then removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
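A toy sketch of the harmonic-grouping and peak-picking stage (not the RTFI itself): a synthetic spectrum containing two notes is folded into a pitch energy spectrum by harmonic summation and then peak-picked. The frequencies, harmonic counts, and the restricted candidate range (which sidesteps the sub-harmonic errors that the method's second stage would otherwise have to remove) are illustrative assumptions.

```python
import numpy as np

# Toy spectrum: notes with fundamentals at 220 Hz and 330 Hz, four
# decaying harmonics each, on a 1 Hz grid covering 1..2000 Hz.
freqs = np.arange(1, 2001)
spectrum = np.zeros(freqs.size)
for f0 in (220, 330):
    for h in range(1, 5):
        spectrum[f0 * h - 1] += 1.0 / h

# Harmonic summation: each candidate pitch accumulates the energy at
# its first four harmonics, forming a crude pitch energy spectrum.
candidates = np.arange(200, 501)
pitch_energy = np.array([
    sum(spectrum[c * h - 1] for h in range(1, 5) if c * h <= 2000)
    for c in candidates
])

# Simple peak picking: the two strongest well-separated candidates.
order = candidates[np.argsort(pitch_energy)[::-1]]
picked = []
for c in order:
    if all(abs(c - p) > 20 for p in picked):
        picked.append(int(c))
    if len(picked) == 2:
        break
print(sorted(picked))
```

With a wider candidate range, a sub-harmonic such as 110 Hz would also accumulate energy from both notes; pruning such spurious estimates is exactly the role of the spectral-irregularity stage described in the abstract.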
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David
2017-01-01
Background As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the study subject registration system. Results Of the 127,700 error-planted subjects, 114,464 (89.64%) could still be identified as the previously registered subject, and the remaining 13,236 (10.36%) were classified as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII fields. For all identified subjects, their errors could be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII fields. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to narrow the questionable scope only to a set of 13 PII fields.
In the application, the proposed algorithm can give a hint of questionable PII entry and perform as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
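The hashing-and-matching idea can be sketched as follows. The field set, the choice of pairwise and triple combinations, and SHA-256 are illustrative assumptions, not the GUID system's actual configuration; the point is that only one-way hashes are compared, and the overlap between stored and resubmitted hash sets localizes a questionable field without exposing any PII.

```python
import hashlib
from itertools import combinations

# Hypothetical PII fields; real GUID systems define their own.
FIELDS = ("first_name", "last_name", "birth_date", "city")

def hash_codes(record):
    """One-way hash codes for all pairwise and triple field combos."""
    codes = set()
    for r in (2, 3):
        for combo in combinations(FIELDS, r):
            payload = "|".join(f"{f}={record[f]}" for f in combo)
            codes.add(hashlib.sha256(payload.encode()).hexdigest())
    return codes

stored = hash_codes({"first_name": "ana", "last_name": "li",
                     "birth_date": "1990-01-02", "city": "oslo"})
# Same person re-registered with one mistyped field (birth_date):
retry = hash_codes({"first_name": "ana", "last_name": "li",
                    "birth_date": "1990-01-20", "city": "oslo"})

overlap = len(stored & retry) / len(stored)
print(f"{overlap:.2f} of stored hash codes still match")
```

Every non-matching code involves the mistyped field, so intersecting the field sets behind the failed codes (the set-theory step the abstract describes) points at `birth_date` as the questionable entry.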
2010-04-01
the most active aminoglycoside (27.1% of isolates were susceptible). Disk diffusion and Etest tended to be more accurate than the Vitek 2, Phoenix...and MicroScan automated systems, but errors were noted with all methods. The Vitek 2 instrument incorrectly reported that more than one-third of the...Acinetobacter, we have observed in clinical practice at the San Antonio Military Medical Center results of susceptibility to amikacin from the Vitek 2
USDA-ARS?s Scientific Manuscript database
The coauthors of previously published work correct details from a 2008 publication. Specifically, it was incorrectly indicated in the methods section for data presented in Tables 2 and 3 that this experiment was the result of three replicates. These data were not the result of three replicate experi...
Forster, B; Ropohl, D; Raule, P
1977-07-05
The manual examination of rigor mortis as currently used, with its often subjective evaluation, frequently produces highly incorrect deductions. It is therefore desirable that such inaccuracies be replaced by objective measurement of rigor mortis at the extremities. To that purpose, a method is described which can also be applied in on-the-spot investigations, and a new formula for the determination of rigor mortis indices (FRR) is introduced.
Estimates of the magnitudes of major marine mass extinctions in earth history
Stanley, Steven M
2016-01-01
Procedures introduced here make it possible, first, to show that background (piecemeal) extinction is recorded throughout geologic stages and substages (not all extinction has occurred suddenly at the ends of such intervals); second, to separate out background extinction from mass extinction for a major crisis in earth history; and third, to correct for clustering of extinctions when using the rarefaction method to estimate the percentage of species lost in a mass extinction. Also presented here is a method for estimating the magnitude of the Signor–Lipps effect, which is the incorrect assignment of extinctions that occurred during a crisis to an interval preceding the crisis because of the incompleteness of the fossil record. Estimates for the magnitudes of mass extinctions presented here are in most cases lower than those previously published. They indicate that only ∼81% of marine species died out in the great terminal Permian crisis, whereas levels of 90–96% have frequently been quoted in the literature. Calculations of the latter numbers were incorrectly based on combined data for the Middle and Late Permian mass extinctions. About 90 orders and more than 220 families of marine animals survived the terminal Permian crisis, and they embodied an enormous amount of morphological, physiological, and ecological diversity. Life did not nearly disappear at the end of the Permian, as has often been claimed. PMID:27698119
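The rarefaction logic such species-loss estimates rest on can be sketched numerically: if a fraction s of species survive at random, a genus with k species survives with probability 1 − (1 − s)^k, so an observed fraction of surviving genera pins down s. The species-per-genus distribution and the target genus survival below are invented for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Skewed species-per-genus distribution (every genus has >= 1 species).
species_per_genus = rng.geometric(0.35, size=5000)

def genus_survival(s):
    """Expected fraction of genera with at least one surviving species
    when each species survives independently with probability s."""
    return float(np.mean(1.0 - (1.0 - s) ** species_per_genus))

# Invert numerically: find the species-level survival fraction that
# reproduces an observed genus survival of 50%.
grid = np.linspace(0.0, 1.0, 1001)
curve = np.array([genus_survival(s) for s in grid])
s_hat = float(grid[np.argmin(np.abs(curve - 0.50))])
print(f"estimated species-level survival: {s_hat:.3f}")
```

Because multi-species genera are harder to kill off entirely, the inferred species survival is always below the genus survival; clustering of extinctions within genera, which the paper corrects for, biases this simple version.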
Blood specimen labelling errors: Implications for nephrology nursing practice.
Duteau, Jennifer
2014-01-01
Patient safety is the foundation of high-quality health care, as recognized both nationally and worldwide. Patient blood specimen identification is critical in ensuring the delivery of safe and appropriate care. The practice of nephrology nursing involves frequent patient blood specimen withdrawals to treat and monitor kidney disease. A critical review of the literature reveals that incorrect patient identification is one of the major causes of blood specimen labelling errors. Misidentified samples create a serious risk to patient safety leading to multiple specimen withdrawals, delay in diagnosis, misdiagnosis, incorrect treatment, transfusion reactions, increased length of stay and other negative patient outcomes. Barcode technology has been identified as a preferred method for positive patient identification leading to a definitive decrease in blood specimen labelling errors by as much as 83% (Askeland, et al., 2008). The use of a root cause analysis followed by an action plan is one approach to decreasing the occurrence of blood specimen labelling errors. This article will present a review of the evidence-based literature surrounding blood specimen labelling errors, followed by author recommendations for completing a root cause analysis and action plan. A failure modes and effects analysis (FMEA) will be presented as one method to determine root cause, followed by the Ottawa Model of Research Use (OMRU) as a framework for implementation of strategies to reduce blood specimen labelling errors.
Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding
NASA Astrophysics Data System (ADS)
Oh, Kwan-Jung; Oh, Byung Tae
2015-04-01
We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.
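The benefit of segmenting a depth block before predicting it can be illustrated with a toy example: an 8x8 depth block with a sharp edge is predicted once with a single mean and once with per-region means after thresholding at the mid-depth. The values are invented, and this is only the intuition, not the paper's H.264-based scheme.

```python
import numpy as np

# Synthetic depth block: flat background (40) and foreground (200)
# separated by a sharp vertical edge, typical of depth maps.
block = np.full((8, 8), 40.0)
block[:, 4:] = 200.0

# Single-mean prediction smears the edge.
single_pred = np.full_like(block, block.mean())

# Two-region prediction: threshold at mid-depth, predict each region
# with its own mean, preserving the discontinuity.
thr = (block.min() + block.max()) / 2.0
mask = block > thr
two_region_pred = np.where(mask, block[mask].mean(), block[~mask].mean())

err_single = float(np.abs(block - single_pred).mean())
err_two = float(np.abs(block - two_region_pred).mean())
print(err_single, err_two)
```

On piecewise-flat content like this, the per-region predictor is exact, which is why avoiding prediction across region boundaries raises accuracy.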
Pezzoli, Lorenzo; Pineda, Silvia; Halkyer, Percy; Crespo, Gladys; Andrews, Nick; Ronveaux, Olivier
2009-03-01
To estimate the yellow fever (YF) vaccine coverage for the endemic and non-endemic areas of Bolivia and to determine whether selected districts had acceptable levels of coverage (>70%). We conducted two surveys of 600 individuals (25 x 12 clusters) to estimate coverage in the endemic and non-endemic areas. We assessed 11 districts using lot quality assurance sampling (LQAS). The lot (district) sample size was 35 individuals, with a decision value of six (alpha error 6% if true coverage is 70%; beta error 6% if true coverage is 90%). To increase feasibility, we divided the lots into five clusters of seven individuals; to investigate the effect of clustering, we calculated alpha and beta by conducting simulations where each cluster's true coverage was sampled from a normal distribution with a mean of 70% or 90% and a standard deviation of 5% or 10%. Estimated coverage was 84.3% (95% CI: 78.9-89.7) in endemic areas, 86.8% (82.5-91.0) in non-endemic areas and 86.0% (82.8-89.1) nationally. LQAS showed that four lots had unacceptable coverage levels. In six lots, results were inconsistent with the estimated administrative coverage. The simulations suggested that the effect of clustering the lots is unlikely to have significantly increased the risk of making incorrect accept/reject decisions. Estimated YF coverage was high. Discrepancies between administrative coverage and LQAS results may be due to incorrect population data. Even allowing for clustering in LQAS, the statistical errors would remain low. Catch-up campaigns are recommended in districts with unacceptable coverage.
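The clustering simulation described above can be sketched as a small Monte Carlo. The truncation of sampled cluster coverages to [0, 1] and the exact decision rule (classify the lot as unacceptable when more than six of 35 are unvaccinated) are assumptions for illustration.

```python
import random

def simulate_reject_rate(mean_cov, sd, n_sims=20000, seed=1):
    """Monte Carlo check of the clustered LQAS design: 5 clusters of 7
    (a lot of 35) with decision value 6. Each cluster's true coverage
    is drawn from a normal distribution truncated to [0, 1], mirroring
    the simulation the abstract describes. Returns the fraction of
    simulated lots classified as unacceptable."""
    rng = random.Random(seed)
    reject = 0
    for _ in range(n_sims):
        unvaccinated = 0
        for _ in range(5):                              # clusters
            p = min(1.0, max(0.0, rng.gauss(mean_cov, sd)))
            unvaccinated += sum(rng.random() >= p for _ in range(7))
        if unvaccinated > 6:
            reject += 1
    return reject / n_sims

# A 90%-coverage lot should rarely be rejected (beta error), while a
# 70%-coverage lot should almost always be rejected.
print(simulate_reject_rate(0.90, 0.05), simulate_reject_rate(0.70, 0.05))
```

Comparing these rejection rates against the non-clustered binomial design shows how much (or, per the abstract, how little) the clustering inflates the alpha and beta errors.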
Comparison of thawing and freezing dark energy parametrizations
NASA Astrophysics Data System (ADS)
Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.
2016-05-01
Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models, but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features, like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance DL(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best-fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ2, or use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n, which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
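The proposed family is straightforward to evaluate; a short sketch follows, with sample parameter values that are illustrative, not fits to data.

```python
import numpy as np

def w_generalized(z, w0=-1.0, wa=0.5, n=1):
    """w(z) = w0 + wa * (z / (1 + z))**n, the family proposed above;
    n = 1 recovers the CPL parametrization. Parameter values here are
    illustrative only."""
    z = np.asarray(z, dtype=float)
    return w0 + wa * (z / (1.0 + z)) ** n

z = np.linspace(0.0, 2.0, 5)
print(w_generalized(z, n=1))   # CPL
print(w_generalized(z, n=4))   # flatter at low z, steeper later
```

At z = 0 the value is w0 for any n, and as z grows the term tends to w0 + wa; raising n suppresses the low-redshift evolution, which is what lets the family interpolate between thawing-like and freezing-like shapes.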
Avilés, J M; Soler, J J
2010-01-01
We have recently published support for the hypothesis that the visual systems of parents could affect nestling detectability and, consequently, influence the evolution of nestling colour designs in altricial birds. We provided comparative evidence of an adjustment of nestling colour designs to the visual system of parents in a comparative study of 22 altricial bird species. In this issue, however, Renoult et al. (J. Evol. Biol., 2009) question some of the assumptions and statistical approaches in our study. Their argumentation relied on two major points: (1) an incorrect assignment of vision system to four of the 22 sampled species in our study; and (2) the use of an incorrect approach for phylogenetic correction of the predicted associations. Here, we discuss in detail the re-assignment of vision systems in that study and propose an alternative interpretation of current knowledge on spectrophotometric data of avian pigments. We reanalysed the data by using phylogenetic generalized least squares analyses that account for the alluded limitations of phylogenetically independent contrasts and, in accordance with the hypothesis, confirmed a significant influence of parental visual system on gape coloration. Our results proved to be robust to the assumptions on visual system evolution for Laniidae and nocturnal owls that Renoult et al. (J. Evol. Biol., 2009) suggested may have flawed our earlier findings. Thus, the hypothesis that selection has resulted in increased detectability of nestlings by adjusting gape coloration to parental visual systems is currently supported by our comparative data.
Coupled-oscillator theory of dispersion and Casimir-Polder interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berman, P. R.; Ford, G. W.; Milonni, P. W.
2014-10-28
We address the question of the applicability of the argument theorem (of complex variable theory) to the calculation of two distinct energies: (i) the first-order dispersion interaction energy of two separated oscillators, when one of the oscillators is excited initially, and (ii) the Casimir-Polder interaction of a ground-state quantum oscillator near a perfectly conducting plane. We show that the argument theorem can be used to obtain the generally accepted equation for the first-order dispersion interaction energy, which is oscillatory and varies as the inverse power of the separation r of the oscillators for separations much greater than an optical wavelength. However, for such separations, the interaction energy cannot be transformed into an integral over the positive imaginary axis. If the argument theorem is used incorrectly to relate the interaction energy to an integral over the positive imaginary axis, the interaction energy is non-oscillatory and varies as r^(-4), a result found by several authors. Rather remarkably, this incorrect expression for the dispersion energy actually corresponds to the nonperturbative Casimir-Polder energy for a ground-state quantum oscillator near a perfectly conducting wall, as we show using the so-called "remarkable formula" for the free energy of an oscillator coupled to a heat bath [G. W. Ford, J. T. Lewis, and R. F. O'Connell, Phys. Rev. Lett. 55, 2273 (1985)]. A derivation of that formula from basic results of statistical mechanics and the independent oscillator model of a heat bath is presented.
ERIC Educational Resources Information Center
Loukusa, Soile; Leinonen, Eeva; Jussila, Katja; Mattila, Marja-Leena; Ryder, Nuala; Ebeling, Hanna; Moilanen, Irma
2007-01-01
This study examined irrelevant/incorrect answers produced by children with Asperger syndrome or high-functioning autism (7-9-year-olds and 10-12-year-olds) and normally developing children (7-9-year-olds). The errors produced were divided into three types: in Type 1, the child answered the original question incorrectly, in Type 2, the child gave a…
Analysis of the Effects of Fixed Costs on Learning Curve Calculations
1994-09-01
Gansler, Jacques S. The Defense Industry. Cambridge, MA: MIT Press, 1980. 11. Horngren, Charles T. and George Foster. Cost Accounting: A Managerial... Incorrect Total Cost Estimates and Comparison to Correct/Correct Total Cost Estimates... 7 1 12. Incorrect/Correct Total Cost Estimates and Comparison to Correct/Correct Total Cost Estimates
Code of Federal Regulations, 2012 CFR
2012-04-01
... combination (name/TIN combination) is incorrect and the payor is required under paragraph (c)(3) of this section to identify that account as having the same name/TIN combination. After receiving a notice from... an account as having the incorrect name/TIN combination under paragraph (c)(3) of this section, the...
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
Clarke, Damian L; Gall, Tamara M H; Thomson, Sandie R
2011-05-01
In the setting of the hypovolaemic patient with a thoraco-abdominal stab wound and potential injuries in both the chest and abdomen, deciding which cavity to explore first may be difficult. Opening the incorrect body cavity can delay control of tamponade or haemorrhage and exacerbate hypothermia and fluid shifts. This situation has been described as one of double jeopardy. All stab victims from July 2007 to July 2009 requiring a thoracotomy and laparotomy at the same operation were identified from a database. Demographics, site and nature of injuries, admission observations and investigations, as well as operative sequence, were recorded. Correct sequencing was defined as first opening the cavity with the most lethal injury. Incorrect sequencing was defined as opening a cavity and finding either no injury or an injury of less severity than a simultaneous injury in the unopened cavity. The primary outcome was survival or death. Sixteen stab victims underwent thoracotomy and laparotomy during the same operation. All were male, with an age range of 18–40 (mean/median 27). Median systolic blood pressure on presentation was 90 mm Hg (interquartile range 80–90 mm Hg). Median base excess was −6.5 (interquartile range −12 to −2.2). All the deaths were the result of cardiac injuries. Incorrect sequencing occurred in four patients (25%). In this group there were four negative abdominal explorations prior to thoracotomy, with two deaths. There was one death in the correct sequencing group. Incorrect sequencing in stab victims who require both thoracotomy and laparotomy at the same sitting is associated with a high mortality. This is especially true when the abdomen is incorrectly entered first whilst the life-threatening pathology is in the chest. Clinical signs may be confusing, leading to incorrect sequencing of exploration.
The common causes of confusion include failure to appreciate that cardiac tamponade does not present with bleeding, and difficulty in assessing peritonism in an unstable patient with multiple stab wounds. In the setting of the unstable patient with stab wounds and suspected dual-cavity injuries, the chest should be opened first, followed by the abdomen. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such gravitational products have found wide multidisciplinary applications in the Earth sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the condition of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are mathematically incorrect and physically not permitted. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is therefore mathematically and physically groundless. Given Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive globally uniformly convergent solutions to Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are globally uniformly convergent, theoretically speaking, they are able to extract the smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the globally uniformly convergent solutions to Newton's governing differential equations as a condition adjustment model with unknown parameters, or equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
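The closing reformulation, weighted least squares with equality constraints, is a standard linear-algebra step that can be sketched generically; this is an illustrative KKT-system implementation with toy inputs, not the authors' gravimetry pipeline:

```python
import numpy as np

def constrained_wls(A, y, W, C, d):
    """Minimize (y - A x)^T W (y - A x) subject to C x = d.

    Solves the KKT (augmented normal-equation) system
        [A^T W A  C^T] [x  ]   [A^T W y]
        [C        0  ] [lam] = [d      ]
    and returns the constrained estimate x.
    """
    n = A.shape[1]          # number of parameters
    k = C.shape[0]          # number of equality constraints
    AtW = A.T @ W
    kkt = np.block([[AtW @ A, C.T],
                    [C, np.zeros((k, k))]])
    rhs = np.concatenate([AtW @ y, d])
    return np.linalg.solve(kkt, rhs)[:n]

# Toy example: find x closest to y = (1, 2) subject to x0 + x1 = 4.
x = constrained_wls(np.eye(2), np.array([1.0, 2.0]), np.eye(2),
                    np.array([[1.0, 1.0]]), np.array([4.0]))
```

For the toy problem the constrained minimizer is (1.5, 2.5): the unconstrained solution (1, 2) is shifted equally in both coordinates onto the constraint plane.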
Getting a Valid Survey Response From 662 Plastic Surgeons in the 21st Century.
Reinisch, John F; Yu, Daniel C; Li, Wai-Yee
2016-01-01
Web-based surveys save time and money. As electronic questionnaires have increased in popularity, telephone and mailed surveys have declined. With any survey, a response rate of 75% or greater is critical for the validity of the study. We wanted to determine which survey method achieved the highest response among academic plastic surgeons. All American Association of Plastic Surgeons members were surveyed regarding authorship issues. They were randomly assigned to receive the questionnaire through 1 of 4 methods: (A) email with a link to an online survey; (B) regular mail; (C) regular mail + $1 bill; and (D) regular mail + $5 bill. Two weeks after the initial mailing, the number of responses was collected, and nonresponders were contacted to remind them to participate. The study was closed after 10 weeks. Survey costs were calculated based on the actual cost of sending the initial survey, including stationery, printing, postage (groups B-D), labor, and the cost of any financial incentives. The cost of reminders to nonresponders was calculated at $5 per reminder, giving a total survey cost. Of 662 surveys sent, 54 were returned because of an incorrect address/email, retirement, or death. Four hundred seventeen of the remaining 608 surveys were returned and analyzed. The response rate was lowest in the online group and highest in those mailed with a monetary incentive. Despite the convenience and low initial cost of web-based surveys, this method generated the lowest response. We obtained statistically significant response rates (79% and 84%) only by using postal mail with monetary incentives and reminders. The inclusion of a $1 bill represented the greatest value and the most cost-effective survey method, based on cost per response.
Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil
2009-07-01
Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2, or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression, and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS, 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE, 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except multilevel modelling via Gauss-Hermite quadrature (ML GH, 'xtlogit' in Stata), gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS, 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE, 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children in which there are few multiples), there appears to be less need to adjust for clustering.
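Why ignoring small clusters understates uncertainty can be shown with a short simulation; this sketch uses NumPy and synthetic twin-pair data (not the paper's datasets) and compares a naive standard error of the mean with a cluster-aware one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, cluster_size = 500, 2              # e.g. twin pairs
u = rng.normal(0.0, 1.0, n_clusters)           # shared within-cluster effect
e = rng.normal(0.0, 1.0, (n_clusters, cluster_size))
y = (u[:, None] + e).ravel()                   # 1000 clustered observations

# Naive SE of the mean treats all observations as independent.
naive_se = y.std(ddof=1) / np.sqrt(y.size)

# Cluster-aware SE: collapse to cluster means, then treat clusters
# as the independent units.
cluster_means = y.reshape(n_clusters, cluster_size).mean(axis=1)
robust_se = cluster_means.std(ddof=1) / np.sqrt(n_clusters)
```

With an intraclass correlation of 0.5 as simulated here, the naive standard error is systematically smaller than the cluster-aware one, which is the mechanism behind the "incorrect inferences" the abstract warns about.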
Wen, Kwun Wah; Hale, Gillian; Shafizadeh, Nafis; Hosseini, Mojgan; Huang, Anne; Kakar, Sanjay
2017-07-01
Goblet cell carcinoid (GCC) is staged and treated as adenocarcinoma (AC) and not as neuroendocrine tumor (NET) or neuroendocrine carcinoma. The term carcinoid may lead to incorrect interpretation as NET. The aim of the study was to explore pitfalls in staging and clinical interpretation of GCC and mixed GCC-AC, and propose strategies to avoid common errors. Diagnostic terminology, staging, and clinical interpretation were evaluated in 58 cases (27 GCCs, 31 mixed GCC-ACs). Opinions were collected from 23 pathologists using a survey. Clinical notes were reviewed to assess the interpretation of pathology diagnoses by oncologists. NET staging was incorrectly used for 25% of GCCs and 5% of mixed GCC-ACs. In the survey, 43% of pathologists incorrectly indicated that NET staging is applicable to GCCs, and 43% incorrectly responded that Ki-67 proliferation index is necessary for GCC grading. Two cases each of GCC and mixed GCC-AC were incorrectly interpreted as neuroendocrine neoplasms by oncologists, and platinum-based therapy was considered for 2 GCC-AC cases because of the mistaken impression of neuroendocrine carcinoma created by use of the World Health Organization 2010 term mixed adenoneuroendocrine carcinoma. The term carcinoid in GCC and use of mixed adenoneuroendocrine carcinoma for mixed GCC-AC lead to errors in staging and treatment. We propose that goblet cell carcinoid should be changed to goblet cell carcinoma, whereas GCC with AC should be referred to as mixed GCC-AC with a comment about the proportion of each component and the histologic subtype of AC. This terminology will facilitate appropriate staging and clinical management, and avoid errors in interpretation. Copyright © 2017 Elsevier Inc. All rights reserved.
Novotny, Tomas; Bond, Raymond; Andrsova, Irena; Koc, Lumir; Sisakova, Martina; Finlay, Dewar; Guldenring, Daniel; Spinar, Jindrich; Malik, Marek
2017-05-01
Most contemporary 12-lead electrocardiogram (ECG) devices offer computerized diagnostic proposals. The reliability of these automated diagnoses is limited, and it has been suggested that incorrect computer advice can influence physician decision-making. This study analyzed the role of diagnostic proposals in the decision process of a group of fellows in cardiology and other internal medicine subspecialties. A set of 100 clinical 12-lead ECG tracings was selected, covering both normal cases and common abnormalities. A team of 15 junior Cardiology Fellows and 15 Non-Cardiology Fellows interpreted the ECGs in 3 phases: without any diagnostic proposal, with a single diagnostic proposal (half of them intentionally incorrect), and with four diagnostic proposals (only one of them being correct) for each ECG. Self-rated confidence for each interpretation was collected. Availability of diagnostic proposals significantly increased diagnostic accuracy (p<0.001). Nevertheless, in the case of a single proposal (either correct or incorrect), accuracy increased for interpretations with correct diagnostic proposals, while it was substantially reduced with incorrect proposals. Confidence levels correlated poorly with interpretation scores (rho ≈ 0.2, p<0.001). Logistic regression showed that an interpreter is most likely to be correct when the ECG offers a correct diagnostic proposal (OR=10.87) or multiple proposals (OR=4.43). Diagnostic proposals affect the diagnostic accuracy of ECG interpretations. Accuracy is significantly influenced especially when a single diagnostic proposal (either correct or incorrect) is provided. The study suggests that the presentation of multiple computerized diagnoses is likely to improve the diagnostic accuracy of interpreters. Copyright © 2017 Elsevier B.V. All rights reserved.
Prevalence of incorrect body posture in children and adolescents with overweight and obesity.
Maciałczyk-Paprocka, Katarzyna; Stawińska-Witoszyńska, Barbara; Kotwicki, Tomasz; Sowińska, Anna; Krzyżaniak, Alicja; Walkowiak, Jarosław; Krzywińska-Wiewiorowska, Małgorzata
2017-05-01
The ever-increasing epidemic of overweight and obesity in school children may be one of the reasons for the growing number of children with incorrect body posture. The purpose of the study was to assess the prevalence of incorrect body posture in children and adolescents with overweight and obesity in Poznań, Poland. The population studied consisted of 2732 boys and girls aged 3-18 with obesity, overweight, or standard body mass. The assessment of body mass was performed based on BMI, adopting Cole's cutoff values. The evaluation of body posture was performed according to the postural error chart based on criteria compiled by Professor Dega. The prevalence rates of postural errors were significantly higher among children and adolescents with overweight and obesity than in the group with standard body mass: 69.2% in the overweight group and 78.6% in the obese group. The most common postural deviations in obese children and adolescents were valgus knees and flat feet. Overweight and obesity in children and adolescents, predisposing to a higher incidence of some types of postural errors, call for prevention programs addressing both health problems. What is Known: • The increase in the prevalence of overweight and obesity among children and adolescents has drawn attention to additional health complications which may occur in this population, such as incorrect body posture. What is New: • The modified chart of postural errors proved to be an effective tool in the assessment of incorrect body posture. • This chart may be used in the assessment of posture during screening tests and prevention actions at school.
Predictive error detection in pianists: a combined ERP and motion capture study
Maidhof, Clemens; Pitkäniemi, Anni; Tervaniemi, Mari
2013-01-01
Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported pre-error negativity already occurring approximately 70–100 ms before the error had been committed and audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error-monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an upcoming error. 
PMID:24133428
Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E
2014-09-16
A simple procedure for selecting the correct weighting factor for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x^2 should be selected if, over the entire concentration range, σ is a constant, σ^2 is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x^2 should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas the curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x^2 and 1/y^2 is discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with the weighted least-squares regression algorithm.
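A weighted least-squares line fit with a 1/x^2 weighting factor can be sketched in NumPy; scaling each design-matrix row by sqrt(w_i) makes ordinary lstsq minimize Σ w_i (y_i − f(x_i))², which is the weighted criterion the abstract discusses (the calibration data below are illustrative, not assay data):

```python
import numpy as np

def weighted_linear_fit(x, y, weights):
    """Weighted least-squares line y = a*x + b.

    Multiplying the rows of the design matrix and the responses by
    sqrt(weight) turns ordinary least squares into the minimizer of
    sum(w_i * (y_i - a*x_i - b)**2).
    """
    x, y, w = map(np.asarray, (x, y, weights))
    sw = np.sqrt(w)
    A = np.column_stack([x, np.ones_like(x)]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return coef  # (slope, intercept)

# Calibration standards spanning three orders of magnitude, with the
# 1/x^2 weighting factor recommended in the abstract.
x = np.array([1.0, 10.0, 100.0, 1000.0])
y = 2.0 * x + 0.5
slope, intercept = weighted_linear_fit(x, y, 1.0 / x**2)
```

The 1/x^2 factor gives low-concentration standards as much influence as high-concentration ones, which is why it stabilizes curves when σ grows proportionally to x.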
Quantification of Stereochemical Communication in Metal-Organic Assemblies.
Castilla, Ana M; Miller, Mark A; Nitschke, Jonathan R; Smulders, Maarten M J
2016-08-26
The derivation and application of a statistical mechanical model to quantify stereochemical communication in metal-organic assemblies is reported. The factors affecting the stereochemical communication within and between the metal stereocenters of the assemblies were experimentally studied by optical spectroscopy and analyzed in terms of a free energy penalty per "incorrect" amine enantiomer incorporated, and a free energy of coupling between stereocenters. These intra- and inter-vertex coupling constants are used to track the degree of stereochemical communication across a range of metal-organic assemblies (employing different ligands, peripheral amines, and metals); temperature-dependent equilibria between diastereomeric cages are also quantified. The model thus provides a unified understanding of the factors that shape the chirotopic void spaces enclosed by metal-organic container molecules. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Leal, Yenny; Gonzalez-Abril, Luis; Lorencio, Carol; Bondia, Jorge; Vehi, Josep
2013-07-01
Support vector machines (SVMs) are an attractive option for detecting correct and incorrect measurements in real-time continuous glucose monitoring systems (RTCGMSs), because their learning mechanism can introduce a postprocessing strategy for imbalanced datasets. The proposed SVM considers the geometric mean to obtain a more balanced performance between sensitivity and specificity. To test this approach, 23 critically ill patients receiving insulin therapy were monitored over 72 h using an RTCGMS, and a dataset of 537 samples, classified according to International Organization for Standardization (ISO) criteria (372 correct and 165 incorrect measurements), was obtained. The results obtained were promising for patients with septic shock or with sepsis, for whom the proposed system can be considered reliable. However, this approach cannot be considered suitable for patients without sepsis.
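The balanced criterion the authors optimize, the geometric mean of sensitivity and specificity, is straightforward to compute from a confusion matrix. A small sketch (the 372/165 class split is taken from the abstract; the degenerate classifier behaviour shown is hypothetical):

```python
import math

def gmean_sens_spec(tp, fn, tn, fp):
    """Geometric mean of sensitivity and specificity.

    Unlike raw accuracy, this metric collapses to zero if either
    class is ignored, which is why it suits imbalanced datasets.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)

# A classifier that labels every sample "correct" scores
# 372/537 ≈ 69% accuracy on the paper's class split, yet its
# g-mean is 0 because specificity is 0.
degenerate = gmean_sens_spec(tp=372, fn=0, tn=0, fp=165)
```

This illustrates why an SVM trained to maximize the geometric mean is pushed toward detecting the minority (incorrect-measurement) class rather than defaulting to the majority label.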