ERIC Educational Resources Information Center
Tsai, Yea-Ru; Ouyang, Chen-Sen; Chang, Yukon
2016-01-01
The purpose of this study is to propose a diagnostic approach to identify engineering students' English reading comprehension errors. Student data were collected during the process of reading texts of English for science and technology on a web-based cumulative sentence analysis system. For the analysis, the association-rule, data mining technique…
Analysis of Children's Errors in Comprehension and Expression
ERIC Educational Resources Information Center
Hatcher, Ryan C.; Breaux, Kristina C.; Liu, Xiaochen; Bray, Melissa A.; Ottone-Cross, Karen L.; Courville, Troy; Luria, Sarah R.; Langley, Susan Dulong
2017-01-01
Children's oral language skills typically begin to develop sooner than their written language skills; however, the four language systems (listening, speaking, reading, and writing) then develop concurrently as integrated strands that influence one another. This research explored relationships between students' errors in language comprehension of…
Binocular optical axis parallelism detection precision analysis based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ying, Jiaju; Liu, Bingqi
2018-02-01
According to the working principle of the digital calibration instrument for binocular photoelectric instrument optical axis parallelism, and considering all components of the instrument, the various factors affecting system precision are analyzed and a precision analysis model is established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change in the center coordinate of the circle target image. The method can further guide the error distribution, optimally control the factors that have a greater influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
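To make the Monte Carlo error-budget idea concrete, here is a minimal sketch of combining component error contributions into a comprehensive precision estimate; the error sources, distributions, and magnitudes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo trials

# Illustrative component error sources (arcsec), each drawn from an assumed distribution.
detector_noise  = rng.normal(0.0, 2.0, N)    # centroiding noise on the circle-target image
mount_alignment = rng.uniform(-3.0, 3.0, N)  # mechanical alignment tolerance
thermal_drift   = rng.normal(0.0, 1.0, N)    # slow drift during the measurement

# Comprehensive error per trial: here simply the sum of the contributions.
total_error = detector_noise + mount_alignment + thermal_drift

print(f"mean = {total_error.mean():.3f} arcsec")
print(f"std  = {total_error.std():.3f} arcsec")
print(f"95% interval = {np.percentile(total_error, [2.5, 97.5])}")

# Sensitivity: variance share of each source, used to decide which factor to control first.
for name, contrib in [("detector", detector_noise),
                      ("alignment", mount_alignment),
                      ("thermal", thermal_drift)]:
    print(name, "variance share:", contrib.var() / total_error.var())
```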
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability, as well as comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.
The Relationship among Correct and Error Oral Reading Rates and Comprehension.
ERIC Educational Resources Information Center
Roberts, Michael; Smith, Deborah Deutsch
1980-01-01
Eight learning disabled boys (10 to 12 years old) who were seriously deficient in both their oral reading and comprehension performances participated in the study which investigated, through an applied behavior analysis model, the interrelationships of three reading variables--correct oral reading rates, error oral reading rates, and percentage of…
Numeracy, Literacy and Newman's Error Analysis
ERIC Educational Resources Information Center
White, Allan Leslie
2010-01-01
Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…
Comprehensive analysis of a medication dosing error related to CPOE.
Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L
2005-01-01
This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.
A classification of errors in lay comprehension of medical documents.
Keselman, Alla; Smith, Catherine Arnott
2012-12-01
Emphasis on participatory medicine requires that patients and consumers participate in tasks traditionally reserved for healthcare providers. This includes reading and comprehending medical documents, often but not necessarily in the context of interacting with Personal Health Records (PHRs). Research suggests that while giving patients access to medical documents has many benefits (e.g., improved patient-provider communication), lay people often have difficulty understanding medical information. Informatics can address the problem by developing tools that support comprehension; this requires in-depth understanding of the nature and causes of errors that lay people make when comprehending clinical documents. The objective of this study was to develop a classification scheme of comprehension errors, based on lay individuals' retellings of two documents containing clinical text: a description of a clinical trial and a typical office visit note. While not comprehensive, the scheme can serve as a foundation of further development of a taxonomy of patients' comprehension errors. Eighty participants, all healthy volunteers, read and retold two medical documents. A data-driven content analysis procedure was used to extract and classify retelling errors. The resulting hierarchical classification scheme contains nine categories and 23 subcategories. The most common error made by the participants involved incorrectly recalling brand names of medications. Other common errors included misunderstanding clinical concepts, misreporting the objective of a clinical research study and physician's findings during a patient's visit, and confusing and misspelling clinical terms. A combination of informatics support and health education is likely to improve the accuracy of lay comprehension of medical documents. Published by Elsevier Inc.
ERIC Educational Resources Information Center
O'Connell, Ann Aileen
The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…
L2 Reading Comprehension and Its Correlates: A Meta-Analysis
ERIC Educational Resources Information Center
Jeon, Eun Hee; Yamashita, Junko
2014-01-01
The present meta-analysis examined the overall average correlation (weighted for sample size and corrected for measurement error) between passage-level second language (L2) reading comprehension and 10 key reading component variables investigated in the research domain. Four high-evidence correlates (with 18 or more accumulated effect sizes: L2…
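For orientation, a brief sketch of how a sample-size-weighted mean correlation with a correction for measurement error (attenuation) is typically computed in this kind of meta-analysis; the study values and reliabilities below are invented for illustration.

```python
import numpy as np

# Hypothetical primary studies: observed correlation r, sample size n,
# and reliabilities of the two measures (used for attenuation correction).
studies = [
    # (r, n, rel_x, rel_y)
    (0.45, 120, 0.85, 0.80),
    (0.52,  80, 0.90, 0.75),
    (0.38, 200, 0.80, 0.85),
]

r   = np.array([s[0] for s in studies])
n   = np.array([s[1] for s in studies])
rxx = np.array([s[2] for s in studies])
ryy = np.array([s[3] for s in studies])

# Correct each correlation for attenuation due to unreliable measures.
r_corrected = r / np.sqrt(rxx * ryy)

# Sample-size-weighted average of the corrected correlations.
r_bar = np.average(r_corrected, weights=n)
print(f"weighted, disattenuated mean correlation: {r_bar:.3f}")
```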
Bosco, Francesca M; Angeleri, Romina; Sacco, Katiuscia; Bara, Bruno G
2015-01-01
The purpose of this study is to investigate the pragmatic abilities of individuals with traumatic brain injury (TBI). Several studies in the literature have previously reported communicative deficits in individuals with TBI; however, such research has focused principally on communicative deficits in general, without providing an analysis of the errors committed in understanding and expressing communicative acts. Within the theoretical framework of Cognitive Pragmatics theory and the Cooperative principle, we focused on intermediate communicative errors that occur in both the comprehension and the production of various pragmatic phenomena, expressed through both linguistic and extralinguistic communicative modalities. A group of 30 individuals with TBI and a matched control group took part in the experiment. They were presented with a series of videotaped vignettes depicting everyday communicative exchanges, and were tested on the comprehension and production of various kinds of communicative acts (standard communicative act, deceit and irony). The participants' answers were evaluated as correct or incorrect. Incorrect answers were then further evaluated with regard to the presence of different intermediate errors. Individuals with TBI performed worse than control participants on all the tasks investigated when considering correct versus incorrect answers. Furthermore, a series of logistic regression analyses showed that group membership (TBI versus controls) significantly predicted the occurrence of intermediate errors. This result holds in both the comprehension and production tasks, and in both linguistic and extralinguistic modalities. Participants with TBI tend to have difficulty in managing different types of communicative acts, and they make more intermediate errors than the control participants. Intermediate errors concern the comprehension and production of the expression act, the comprehension of the actors' meaning, as well as the respect of the Cooperative principle. © 2014 Royal College of Speech and Language Therapists.
Ontological analysis of SNOMED CT.
Héja, Gergely; Surján, György; Varga, Péter
2008-10-27
SNOMED CT is the most comprehensive medical terminology. However, its use for intelligent services based on formal reasoning is questionable. The analysis of the structure of SNOMED CT is based on the formal top-level ontology DOLCE. The analysis revealed several ontological and knowledge-engineering errors, the most important being errors in the hierarchy (mostly from an ontological point of view, but also regarding medical aspects) and the mixing of subsumption relations with other relation types (mostly 'part of'). The errors found impede formal reasoning. The paper presents a possible way to correct these problems.
Human Error and Commercial Aviation Accidents: A Comprehensive, Fine-Grained Analysis Using HFACS
2006-07-01
[Figure 2. The HFACS framework.] A characteristic of highly practiced and seemingly automatic behaviors is that they are particularly susceptible to attention and/or memory failures; the third and final error form, perceptual errors, has received comparatively less attention in most error frameworks.
Errors of Inference in Structural Equation Modeling
ERIC Educational Resources Information Center
McCoach, D. Betsy; Black, Anne C.; O'Connell, Ann A.
2007-01-01
Although structural equation modeling (SEM) is one of the most comprehensive and flexible approaches to data analysis currently available, it is nonetheless prone to researcher misuse and misconceptions. This article offers a brief overview of the unique capabilities of SEM and discusses common sources of user error in drawing conclusions from…
NASA Astrophysics Data System (ADS)
Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang
2018-01-01
The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which offers the potential of reducing the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.
Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class
NASA Astrophysics Data System (ADS)
Novitasari, N.; Lukito, A.; Ekawati, R.
2018-01-01
A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems, which often leads to errors. These errors can be analyzed to determine where they occur and of what type. This research is a qualitative descriptive study which aims to describe the locations, types, and causes of slow learner errors in solving fraction problems in an inclusive junior high school class. The subject of this research is one slow learner, a seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semistructured interviews. The collected data were analyzed with Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors: concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.
The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.
ERIC Educational Resources Information Center
Dunivant, Noel
The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…
Measuring the Lense-Thirring precession using a second Lageos satellite
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Ciufolini, I.
1989-01-01
A complete numerical simulation and error analysis was performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article. (author)
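A toy sketch of the error-propagation step at the heart of DRGEP: direct interaction coefficients between species are combined along graph paths by taking products, and a species' overall importance to a target is the maximum over all paths. The species names and coefficients below are invented; the real method computes the direct coefficients from the reaction rates of the detailed mechanism.

```python
# Direct interaction coefficients r[A][B]: immediate dependence of species A on B,
# normally computed from reaction rates of the detailed mechanism (values here are invented).
r = {
    "FUEL": {"A": 0.9, "B": 0.08},
    "A":    {"C": 0.5},
    "B":    {"C": 0.05},
    "C":    {},
}

def overall_coefficient(target, species, graph):
    """DRGEP-style importance: max over paths from target to species of the product of coefficients."""
    best, stack = 0.0, [(target, 1.0, {target})]
    while stack:
        node, value, visited = stack.pop()
        for nxt, coeff in graph.get(node, {}).items():
            path_value = value * coeff
            if nxt == species:
                best = max(best, path_value)
            if nxt not in visited:
                stack.append((nxt, path_value, visited | {nxt}))
    return best

threshold = 0.1
kept = {"FUEL"} | {s for s in ("A", "B", "C")
                   if overall_coefficient("FUEL", s, r) >= threshold}
print("retained:", kept)   # A and C survive; B falls below the threshold and is removed
```

In the combined DRGEPSA approach described above, species that survive this graph step but sit near the threshold would then be tested individually by sensitivity analysis before removal.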
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in the aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error-detection in linguistic, as well as non-linguistic domains points to a domain-general monitoring system.
Computer-Assisted Training in the Comprehension of Authentic French Speech: A Closer View
ERIC Educational Resources Information Center
Hoeflaak, Arie
2004-01-01
In this article, the development of a computer-assisted listening comprehension project is described. First, we comment briefly on the points of departure, the need for autonomous learning against the background of recent changes in Dutch education, and the role of learning strategies. Then, an error analysis, the programs used for this project,…
The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency
ERIC Educational Resources Information Center
Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ
2012-01-01
This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 second and 974 third graders. Results found a significant relationship between error rate, oral reading fluency, and reading comprehension performance, and…
Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi
2018-06-13
The magnetohydrodynamic (MHD) angular rate sensor (ARS), with its low noise level over an ultra-wide bandwidth, has been developed for lasing and imaging applications, especially line-of-sight (LOS) systems. A modified MHD ARS combined with the Coriolis effect was studied in this paper to expand the sensor's bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed, and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. The numerical results on the Coriolis effect and the frequency response of the modified MHD ARS are detailed. In addition, because the experimental results of the designed sensor are consistent with the simulation results, an analysis of model errors is discussed. Our study provides an error analysis method for MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.
Extraction and Analysis of Display Data
NASA Technical Reports Server (NTRS)
Land, Chris; Moye, Kathryn
2008-01-01
The Display Audit Suite is an integrated package of software tools that partly automates the detection of Portable Computer System (PCS) Display errors. [PCS is a laptop computer used onboard the International Space Station (ISS).] The need for automation stems from the large quantity of PCS displays (6,000+, with 1,000,000+ lines of command and telemetry data). The Display Audit Suite includes data-extraction tools, automatic error detection tools, and database tools for generating analysis spreadsheets. These spreadsheets allow engineers to more easily identify many different kinds of possible errors. The Suite supports over 40 independent analyses and complements formal testing by being comprehensive (all displays can be checked) and by revealing errors that are difficult to detect via test. In addition, the Suite can be run early in the development cycle to find and correct errors in advance of testing.
Metering error quantification under voltage and current waveform distortion
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran
2017-09-01
With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion results in metering errors in smart meters. Because of the negative effects on metering accuracy and fairness, the combined energy metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a quantification method for the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a quantification method for the metering accuracy error is also proposed. By analyzing the mode error and the accuracy error, a comprehensive error analysis method is presented that is suitable for renewable energy and nonlinear loads. The proposed method has been verified by simulation.
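A small numerical illustration of the kind of metering-mode comparison described above: energy registered from the fundamental component only versus from the full distorted waveform. The waveform, harmonic content, and sampling parameters are assumed for illustration and are not taken from the paper.

```python
import numpy as np

f0, fs, T = 50.0, 20_000.0, 0.2          # fundamental (Hz), sampling rate (Hz), window (s)
t = np.arange(0, T, 1 / fs)

# Distorted load: voltage with a small 3rd harmonic, current with 3rd and 5th harmonics.
v = 230 * np.sqrt(2) * (np.sin(2 * np.pi * f0 * t)
                        + 0.05 * np.sin(2 * np.pi * 3 * f0 * t))
i = 10 * np.sqrt(2) * (np.sin(2 * np.pi * f0 * t)
                       + 0.20 * np.sin(2 * np.pi * 3 * f0 * t)
                       + 0.10 * np.sin(2 * np.pi * 5 * f0 * t))

# "True" active power: time average of instantaneous power over the window.
p_true = np.mean(v * i)

# Fundamental-only metering mode: project v and i onto the 50 Hz sine/cosine pair.
def fundamental_phasor(x):
    s = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
    c = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
    return s, c

vs, vc = fundamental_phasor(v)
is_, ic = fundamental_phasor(i)
p_fund = 0.5 * (vs * is_ + vc * ic)

print(f"true power {p_true:.1f} W, fundamental-only {p_fund:.1f} W, "
      f"mode error {100 * (p_fund - p_true) / p_true:.2f} %")
```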
Medication safety--reliability of preference cards.
Dawson, Anthony; Orsini, Michael J; Cooper, Mary R; Wollenburg, Karol
2005-09-01
A CLINICAL ANALYSIS of surgeons' preference cards was initiated in one hospital as part of a comprehensive analysis to reduce medication-error risks by standardizing and simplifying the intraoperative medication-use process specific to the sterile field. THE PREFERENCE CARD ANALYSIS involved two subanalyses: a review of the information as it appeared on the cards and a failure mode and effects analysis of the process involved in using and maintaining the cards. THE ANALYSIS FOUND that the preference card system in use at this hospital is outdated. Variations and inconsistencies within the preference card system indicate that the use of preference cards as guides for medication selection for surgical procedures presents an opportunity for medication errors to occur.
Investigating Patterns of Errors for Specific Comprehension and Fluency Difficulties
ERIC Educational Resources Information Center
Koriakin, Taylor A.; Kaufman, Alan S.
2017-01-01
Although word reading has traditionally been viewed as a foundational skill for development of reading fluency and comprehension, some children demonstrate "specific" reading comprehension problems, in the context of intact word reading. The purpose of this study was to identify specific patterns of errors associated with reading…
Kandel, Himal; Khadka, Jyoti; Goggin, Michael; Pesudovs, Konrad
2017-12-01
This review has identified the best existing patient-reported outcome (PRO) instruments in refractive error. The article highlights the limitations of the existing instruments and discusses the way forward. A systematic review was conducted to identify the types of PROs used in refractive error, to determine the quality of the existing PRO instruments in terms of their psychometric properties, and to determine the limitations in the content of the existing PRO instruments. Articles describing a PRO instrument measuring 1 or more domains of quality of life in people with refractive error were identified by electronic searches on the MEDLINE, PubMed, Scopus, Web of Science, and Cochrane databases. The information on content development, psychometric properties, validity, reliability, and responsiveness of those PRO instruments was extracted from the selected articles. The analysis was done based on a comprehensive set of assessment criteria. One hundred forty-eight articles describing 47 PRO instruments in refractive error were included in the review. Most of the articles (99 [66.9%]) used refractive error-specific PRO instruments. The PRO instruments comprised 19 refractive, 12 vision-related but nonrefractive, and 16 generic PRO instruments. Only 17 PRO instruments were validated in refractive error populations; six of them were developed using Rasch analysis. None of the PRO instruments has items across all domains of quality of life. The Quality of Life Impact of Refractive Correction, the Quality of Vision, and the Contact Lens Impact on Quality of Life have comparatively better quality, albeit with some limitations, compared with the other PRO instruments. This review describes the PRO instruments and informs the choice of an appropriate measure in refractive error. We identified the need for a comprehensive and scientifically robust refractive error-specific PRO instrument. An item banking and computer-adaptive testing system could be the way to provide such an instrument.
SU-E-T-88: Comprehensive Automated Daily QA for Hypo-Fractionated Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuinness, C; Morin, O
2014-06-01
Purpose: The trend towards more SBRT treatments with fewer high dose fractions places increased importance on daily QA. Patient plan specific QA with 3%/3mm gamma analysis and daily output constancy checks may not be enough to guarantee the level of accuracy required for SBRT treatments. But increasing the already extensive amount of QA procedures that are required is a daunting proposition. We performed a feasibility study for more comprehensive automated daily QA that could improve the diagnostic capabilities of QA without increasing workload. Methods: We performed the study on a Siemens Artiste linear accelerator using the integrated flat panel EPID. We included square fields, a picket fence, overlap and representative IMRT fields to measure output, flatness, symmetry, beam center, and percent difference from the standard. We also imposed a set of machine errors: MLC leaf position, machine output, and beam steering to compare with the standard. Results: Daily output was consistent within +/− 1%. Change in steering current by 1.4% and 2.4% resulted in a 3.2% and 6.3% change in flatness. 1 and 2 mm MLC leaf offset errors were visibly obvious in difference plots, but passed a 3%/3mm gamma analysis. A simple test of transmission in a picket fence can catch a leaf offset error of a single leaf by 1 mm. The entire morning QA sequence is performed in less than 30 minutes and images are automatically analyzed. Conclusion: Automated QA procedures could be used to provide more comprehensive information about the machine with less time and human involvement. We have also shown that other simple tests are better able to catch MLC leaf position errors than a 3%/3mm gamma analysis commonly used for IMRT and modulated arc treatments. Finally, this information could be used to watch trends of the machine and predict problems before they lead to costly machine downtime.
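For reference, a minimal 1-D version of the 3%/3 mm gamma test mentioned above, which illustrates why a small MLC leaf offset can pass: each measured point only needs some nearby reference point within the combined dose/distance tolerance. The profiles here are synthetic; a clinical gamma analysis is 2-D or 3-D and uses finer interpolation.

```python
import numpy as np

def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
    """Per-point 1-D gamma index: gamma <= 1 means the point passes 3%/3mm."""
    gam = np.empty_like(meas)
    for i, (xi, di) in enumerate(zip(x, meas)):
        dose_term = (di - ref) / (dose_tol * ref.max())
        dist_term = (xi - x) / dist_tol
        gam[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gam

x = np.arange(-20.0, 20.0, 0.5)                      # position (mm)
ref = np.where(np.abs(x) < 10, 1.0, 0.05)            # ideal field edge at +/-10 mm
meas = np.where(np.abs(x - 1.0) < 10, 1.0, 0.05)     # same field shifted 1 mm (leaf offset)

gamma = gamma_1d(ref, meas, x)
print(f"pass rate: {100 * np.mean(gamma <= 1):.1f} %")  # a 1 mm shift still passes 3%/3mm
```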
An empirical assessment of taxic paleobiology.
Adrain, J M; Westrop, S R
2000-07-07
The analysis of major changes in faunal diversity through time is a central theme of analytical paleobiology. The most important sources of data are literature-based compilations of stratigraphic ranges of fossil taxa. The levels of error in these compilations and the possible effects of such error have often been discussed but never directly assessed. We compared our comprehensive database of trilobites to the equivalent portion of J. J. Sepkoski Jr.'s widely used global genus database. More than 70% of entries in the global database are inaccurate; however, as predicted, the error is randomly distributed and does not introduce bias.
The virtual analyst program: automated data mining, error analysis, and reporting
W. Keith Moser; Mark H. Hansen; Patrick Miles; Ronald E. McRoberts
2007-01-01
The Forest Inventory and Analysis (FIA) program of the U.S. Department of Agriculture Forest Service conducts ongoing comprehensive inventories of the forest resources of the United States. The Northern Region FIA (NFIA) program has three tasks: (1) core reporting function, which produces the annual and 5-year inventory reports; (2) forest health measurements; and (3)...
A general model for attitude determination error analysis
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Seidewitz, ED; Nicholson, Mark
1988-01-01
An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
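A compact sketch of the solve-for/consider covariance bookkeeping described above, for a linear batch least-squares problem. The partials matrices and assumed covariances are random placeholders; the full error budget in the abstract also carries a priori and dynamic-noise terms, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 50, 4, 2                   # measurements, solve-for parameters, consider parameters

A = rng.normal(size=(m, n))          # partials of measurements w.r.t. solve-for parameters
B = rng.normal(size=(m, k))          # partials w.r.t. consider parameters (not estimated)
W = np.eye(m) / 0.01**2              # measurement weights (assumed 0.01 noise sigma)
P_c = np.diag([0.05**2, 0.02**2])    # assumed covariance of the consider parameters

N_inv = np.linalg.inv(A.T @ W @ A)   # normal-matrix inverse

P_noise = N_inv                      # uncertainty due to measurement noise
S = N_inv @ A.T @ W @ B              # sensitivity of the estimate to the consider parameters
P_consider = S @ P_c @ S.T           # uncertainty contributed by the consider parameters

P_total = P_noise + P_consider
print("noise-only sigmas:   ", np.sqrt(np.diag(P_noise)))
print("with consider terms: ", np.sqrt(np.diag(P_total)))
```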
Shim, Hyungsub; Hurley, Robert S; Rogalski, Emily; Mesulam, M-Marsel
2012-07-01
This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words, exception words and nonwords, were recorded. Error types were classified based on phonetic plausibility. In the first analysis, scores were evaluated by clinical diagnosis. Errors in spelling exception words and phonetically plausible errors were seen in PPA-S. Conversely, PPA-G was associated with errors in nonword spelling and phonetically implausible errors. In the next analysis, spelling scores were correlated to other neuropsychological language test scores. Significant correlations were found between exception word spelling and measures of naming and single word comprehension. Nonword spelling correlated with tests of grammar and repetition. Global language measures did not correlate significantly with spelling scores, however. Cortical thickness analysis based on MRI showed that atrophy in several language regions of interest was correlated with spelling errors. Atrophy in the left supramarginal gyrus and inferior frontal gyrus (IFG) pars orbitalis correlated with errors in nonword spelling, while thinning in the left temporal pole and fusiform gyrus correlated with errors in exception word spelling. Additionally, phonetically implausible errors in regular word spelling correlated with thinning in the left IFG pars triangularis and pars opercularis. Together, these findings suggest two independent systems for spelling to dictation, one phonetic (phoneme to grapheme conversion), and one lexical (whole word retrieval). Copyright © 2012 Elsevier Ltd. All rights reserved.
Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure
NASA Astrophysics Data System (ADS)
Santoso, D. A.; Farid, A.; Ulum, B.
2017-06-01
Evaluation and assessment are an important part of learning. In the evaluation process of learning, written tests are still commonly used. However, the tests are usually not followed up by further evaluation. The process typically stops at the grading stage and does not evaluate the processes and errors produced by students. If, instead, a student's pattern of errors and process errors is known, the actions taken can be more focused on the fault and on why it happens. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming have been analyzed. As a result, the mistakes students most often make are found in the modeling phase (transformation) and in process skills, with overall percentage distributions of 20% and 15%, respectively. According to the observations, these errors occur most commonly due to students' lack of precision in modeling and hasty calculation. Through error analysis with students on this matter, it is expected that educators can determine or use the right way to address it in the next lesson.
Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo
2014-01-01
This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
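A simplified sketch of the fingerprint-based nearest-neighbor matching that the statistical analysis above characterizes, using a log-distance (logarithmic) RSS model. The RP layout, access-point positions, and noise level are invented; the paper derives closed-form error statistics for estimators of this kind as a function of RP deployment.

```python
import numpy as np

rng = np.random.default_rng(2)

def rss(points, ap, p0=-30.0, gamma=3.0):
    """Log-distance RSS model (dBm): p0 at 1 m, path-loss exponent gamma."""
    d = np.linalg.norm(points - ap, axis=-1).clip(1e-3)
    return p0 - 10 * gamma * np.log10(d)

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])          # access points
grid = np.array([[x, y] for x in range(0, 11, 2) for y in range(0, 11, 2)], float)
fingerprints = np.stack([rss(grid, ap) for ap in aps], axis=1)   # offline radio map at the RPs

true_pos = np.array([3.7, 6.2])
observed = np.array([rss(true_pos, ap) for ap in aps]) + rng.normal(0, 2.0, len(aps))  # noisy online RSS

# Nearest-neighbor matching: pick the RP whose stored fingerprint is closest in signal space.
idx = np.argmin(np.linalg.norm(fingerprints - observed, axis=1))
estimate = grid[idx]
print("estimated RP:", estimate, " error:", np.linalg.norm(estimate - true_pos), "m")
```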
Effects of Error Correction on Word Recognition and Reading Comprehension.
ERIC Educational Resources Information Center
Jenkins, Joseph R.; And Others
1983-01-01
Two procedures for correcting oral reading errors, word supply and word drill, were examined to determine their effects on measures of word recognition and comprehension with 17 learning disabled elementary school students. (Author/SW)
Comprehending APA Style through Manuscript Analysis.
ERIC Educational Resources Information Center
Smith, Gabie E.; Eggleston, Tami J.
2001-01-01
Describes an activity designed to enhance undergraduate student comprehension of the American Psychological Association (APA) manual and style where students examined a poorly written paper for errors in relation to the APA guidelines. Reports the results of a study that tested the effectiveness of the activity. (CMK)
Roon, David A.; Waits, L.P.; Kendall, K.C.
2005-01-01
Non-invasive genetic sampling (NGS) is becoming a popular tool for population estimation. However, multiple NGS studies have demonstrated that polymerase chain reaction (PCR) genotyping errors can bias demographic estimates. These errors can be detected by comprehensive data filters such as the multiple-tubes approach, but this approach is expensive and time consuming as it requires three to eight PCR replicates per locus. Thus, researchers have attempted to correct PCR errors in NGS datasets using non-comprehensive error checking methods, but these approaches have not been evaluated for reliability. We simulated NGS studies with and without PCR error and 'filtered' datasets using non-comprehensive approaches derived from published studies and calculated mark-recapture estimates using CAPTURE. In the absence of data-filtering, simulated error resulted in serious inflations in CAPTURE estimates; some estimates exceeded N by ≥ 200%. When data filters were used, CAPTURE estimate reliability varied with the per-locus error rate (E). At E = 0.01, CAPTURE estimates from filtered data displayed < 5% deviance from error-free estimates. When E was 0.05 or 0.09, some CAPTURE estimates from filtered data displayed biases in excess of 10%. Biases were positive at high sampling intensities; negative biases were observed at low sampling intensities. We caution researchers against using non-comprehensive data filters in NGS studies, unless they can achieve baseline per-locus error rates below 0.05 and, ideally, near 0.01. However, we suggest that data filters can be combined with careful technique and thoughtful NGS study design to yield accurate demographic information. © 2005 The Zoological Society of London.
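A toy simulation in the spirit of the study above: per-sample genotyping errors create "ghost" individuals that are never recaptured, which inflates a simple two-session Chapman (Lincoln-Petersen) estimate. The population size, sampling intensity, and per-sample error rate are assumed values, and the study itself used CAPTURE with per-locus error rates rather than this simplified estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def lincoln_petersen(caps1, caps2):
    """Chapman-corrected two-session mark-recapture estimate of population size."""
    m = len(set(caps1) & set(caps2))            # recaptures
    return (len(caps1) + 1) * (len(caps2) + 1) / (m + 1) - 1

N, n_samples, err = 100, 60, 0.05               # true N, samples per session, per-sample error rate

def sample_session():
    ids = rng.choice(N, size=n_samples, replace=False)
    # A genotyping error turns a real individual into a unique "ghost" genotype.
    ghosts = rng.random(n_samples) < err
    return [f"ghost{rng.integers(10**9)}" if g else f"ind{i}" for i, g in zip(ids, ghosts)]

est = [lincoln_petersen(sample_session(), sample_session()) for _ in range(200)]
print("true N = 100, mean estimate with 5% error per sample =", round(float(np.mean(est)), 1))
```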
Styck, Kara M; Walsh, Shana M
2016-01-01
The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (c) 2016 APA, all rights reserved).
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature, as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay, has been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in clinical settings. There is no single parameter that 'alone' minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
Evidence for Implicit Learning in Syntactic Comprehension
ERIC Educational Resources Information Center
Fine, Alex B.; Jaeger, T. Florian
2013-01-01
This study provides evidence for implicit learning in syntactic comprehension. By reanalyzing data from a syntactic priming experiment (Thothathiri & Snedeker, 2008), we find that the error signal associated with a syntactic prime influences comprehenders' subsequent syntactic expectations. This follows directly from error-based implicit learning…
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
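The SIMEX idea in miniature: add extra measurement error at increasing multiples lambda, track how the estimate degrades, and extrapolate back to lambda = -1 (no error). For brevity this sketch corrects classical covariate error in a simple linear model; the paper extends the same remeasurement-and-extrapolation logic to error in the event time for the Cox model. All simulation parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, sigma_u = 2000, 0.8, 0.5

x = rng.normal(size=n)                    # true covariate
w = x + rng.normal(0, sigma_u, n)         # covariate observed with classical error
y = beta * x + rng.normal(0, 0.3, n)      # outcome (linear model for simplicity)

def slope(cov, out):
    return np.polyfit(cov, out, 1)[0]

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lambdas:
    # Simulation step: add extra error with variance lam * sigma_u^2, average over replicates.
    b = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y) for _ in range(50)]
    est.append(np.mean(b))

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1.
coef = np.polyfit(lambdas, est, 2)
print("naive estimate:", round(est[0], 3),
      " SIMEX estimate:", round(float(np.polyval(coef, -1.0)), 3),
      " true beta:", beta)
```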
Oyama, Yoshinori
2011-06-01
The present study examined Japanese university students' processing time for English subject and object relative clauses in relation to their English listening proficiency. In Analysis 1, the relation between English listening proficiency and reading span test scores was analyzed. The results showed that the high and low listening comprehension groups' reading span test scores do not differ. Analysis 2 investigated English listening proficiency and processing time for sentences with subject and object relative clauses. The results showed that reading the relative clause ending and the main verb section of a sentence with an object relative clause (such as "attacked" and "admitted" in the sentence "The reporter that the senator attacked admitted the error") takes less time for learners with high English listening scores than for learners with low English listening scores. In Analysis 3, English listening proficiency and comprehension accuracy for sentences with subject and object relative clauses were examined. The results showed no significant difference in comprehension accuracy between the high and low listening-comprehension groups. These results indicate that processing time for English relative clauses is related to the cognitive processes involved in listening comprehension, which requires immediate processing of syntactically complex audio information.
Tedja, Milly S; Wojciechowski, Robert; Hysi, Pirro G; Eriksson, Nicholas; Furlotte, Nicholas A; Verhoeven, Virginie J M; Iglesias, Adriana I; Meester-Smoor, Magda A; Tompson, Stuart W; Fan, Qiao; Khawaja, Anthony P; Cheng, Ching-Yu; Höhn, René; Yamashiro, Kenji; Wenocur, Adam; Grazal, Clare; Haller, Toomas; Metspalu, Andres; Wedenoja, Juho; Jonas, Jost B; Wang, Ya Xing; Xie, Jing; Mitchell, Paul; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Paterson, Andrew D; Hosseini, S Mohsen; Shah, Rupal L; Williams, Cathy; Teo, Yik Ying; Tham, Yih Chung; Gupta, Preeti; Zhao, Wanting; Shi, Yuan; Saw, Woei-Yuh; Tai, E-Shyong; Sim, Xue Ling; Huffman, Jennifer E; Polašek, Ozren; Hayward, Caroline; Bencic, Goran; Rudan, Igor; Wilson, James F; Joshi, Peter K; Tsujikawa, Akitaka; Matsuda, Fumihiko; Whisenhunt, Kristina N; Zeller, Tanja; van der Spek, Peter J; Haak, Roxanna; Meijers-Heijboer, Hanne; van Leeuwen, Elisabeth M; Iyengar, Sudha K; Lass, Jonathan H; Hofman, Albert; Rivadeneira, Fernando; Uitterlinden, André G; Vingerling, Johannes R; Lehtimäki, Terho; Raitakari, Olli T; Biino, Ginevra; Concas, Maria Pina; Schwantes-An, Tae-Hwi; Igo, Robert P; Cuellar-Partida, Gabriel; Martin, Nicholas G; Craig, Jamie E; Gharahkhani, Puya; Williams, Katie M; Nag, Abhishek; Rahi, Jugnoo S; Cumberland, Phillippa M; Delcourt, Cécile; Bellenguez, Céline; Ried, Janina S; Bergen, Arthur A; Meitinger, Thomas; Gieger, Christian; Wong, Tien Yin; Hewitt, Alex W; Mackey, David A; Simpson, Claire L; Pfeiffer, Norbert; Pärssinen, Olavi; Baird, Paul N; Vitart, Veronique; Amin, Najaf; van Duijn, Cornelia M; Bailey-Wilson, Joan E; Young, Terri L; Saw, Seang-Mei; Stambolian, Dwight; MacGregor, Stuart; Guggenheim, Jeremy A; Tung, Joyce Y; Hammond, Christopher J; Klaver, Caroline C W
2018-06-01
Refractive errors, including myopia, are the most frequent eye disorders worldwide and an increasingly common cause of blindness. This genome-wide association meta-analysis in 160,420 participants and replication in 95,505 participants increased the number of established independent signals from 37 to 161 and showed high genetic correlation between Europeans and Asians (>0.78). Expression experiments and comprehensive in silico analyses identified retinal cell physiology and light processing as prominent mechanisms, and also identified functional contributions to refractive-error development in all cell types of the neurosensory retina, retinal pigment epithelium, vascular endothelium and extracellular matrix. Newly identified genes implicate novel mechanisms such as rod-and-cone bipolar synaptic neurotransmission, anterior-segment morphology and angiogenesis. Thirty-one loci resided in or near regions transcribing small RNAs, thus suggesting a role for post-transcriptional regulation. Our results support the notion that refractive errors are caused by a light-dependent retina-to-sclera signaling cascade and delineate potential pathobiological molecular drivers.
A stochastic dynamic model for human error analysis in nuclear power plants
NASA Astrophysics Data System (ADS)
Delgado-Loperena, Dharma
Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently, incorporates concepts derived from fractal and chaos theory, and suggests re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulas used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or those who employed the ecological model in their work. The study of patterns obtained from the simulation of a steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on understanding patterns of human error can be gleaned, helping to reduce and prevent undesirable events.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine
NASA Astrophysics Data System (ADS)
Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo
2017-07-01
Load sharing behavior is very important for power-split gearing systems; the star gearing reducer, as a new and special type of transmission system, can be used in many industrial fields. However, there is little literature regarding the key multiple-split load sharing issue in the main gearbox used in new geared turbofan engines. A further mechanism analysis is made of the load sharing behavior among the star gears of a star gearing reducer for a geared turbofan engine. Comprehensive meshing error analyses are conducted for the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer, respectively. The floating meshing error resulting from meshing clearance variation, caused by the simultaneous floating of the sun gear and annular gear, is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of the different meshing stiffnesses and supporting stiffnesses of the components. The regular curves of the load sharing coefficient under the influence of interactions, single action, and single variation of the various component errors are obtained, and the sensitivity of the load sharing coefficient to the different errors is established. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides scientific theoretical evidence for optimal parameter design and proper tolerance distribution in the advanced development and manufacturing process, so as to achieve optimal effects in economy and technology.
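For orientation, one common way a load sharing coefficient for a power-split stage is defined is as the ratio of the largest branch load to the ideal equal share; the branch forces below are invented, and the exact definition used in the paper (which reports a coefficient of 1.033) may differ, for example by taking maxima over a mesh cycle.

```python
branch_forces = [3010.0, 2950.0, 2890.0]   # meshing force per star-gear branch (N), illustrative

ideal_share = sum(branch_forces) / len(branch_forces)
load_sharing_coefficient = max(branch_forces) / ideal_share
print(round(load_sharing_coefficient, 3))  # ~1.02 here; a perfectly balanced split would give 1.0
```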
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
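A minimal sketch of the reference sample compensation idea recommended above: the apparent displacement measured on a nominally stationary, load-free reference sample imaged alongside the test sample is treated as the thermal error of the imaging system and subtracted from the test-sample field. The arrays and noise levels are assumptions for illustration, not data from the paper.

    import numpy as np

    def compensate_thermal_error(test_disp, reference_disp):
        """Reference-sample compensation (sketch): the apparent displacement of a
        stationary, load-free reference sample is taken as the thermal error of
        the imaging system and subtracted from the simultaneously measured
        test-sample displacement field."""
        return test_disp - reference_disp

    # Hypothetical 1D displacement profiles (pixels): a drift plus apparent
    # expansion contaminates both the test and the reference measurement.
    rng = np.random.default_rng(0)
    thermal = 0.05 + 1e-4 * np.arange(100)
    true_deformation = 0.02 * np.ones(100)
    test = true_deformation + thermal + rng.normal(0.0, 1e-3, 100)
    reference = thermal + rng.normal(0.0, 1e-3, 100)
    print(np.mean(compensate_thermal_error(test, reference)))  # ~0.02 after compensation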
Statistical image quantification toward optimal scan fusion and change quantification
NASA Astrophysics Data System (ADS)
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
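The optimal linear fusion result mentioned above amounts to inverse-variance weighting: scans with lower error variance (for example, those whose voxel anisotropy is aligned with the lesion elongation) receive higher weight, and the fused estimate has lower variance than a naive average. A minimal sketch with hypothetical numbers:

    def fuse_measurements(values, variances):
        """Optimal linear (inverse-variance) fusion of independent measurements
        of the same quantity; returns the fused estimate and its variance."""
        weights = [1.0 / v for v in variances]
        total = sum(weights)
        fused = sum(w * x for w, x in zip(weights, values)) / total
        return fused, 1.0 / total

    # Hypothetical lesion-volume estimates from two scans; the scan whose voxel
    # anisotropy is aligned with the lesion elongation is assumed less noisy.
    fused, fused_var = fuse_measurements([10.2, 11.0], [0.25, 1.0])
    naive_var = (0.25 + 1.0) / 4.0   # variance of the plain average of the two scans
    print(fused, fused_var, naive_var)  # fused variance 0.2 < 0.3125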
MetaGenyo: a web tool for meta-analysis of genetic association studies.
Martorell-Marugan, Jordi; Toro-Dominguez, Daniel; Alarcon-Riquelme, Marta E; Carmona-Saez, Pedro
2017-12-16
Genetic association studies (GAS) aim to evaluate the association between genetic variants and phenotypes. In the last few years, the number of such studies has increased exponentially, but the results are not always reproducible due to experimental designs, low sample sizes and other methodological errors. In this field, meta-analysis techniques are becoming very popular tools to combine results across studies to increase statistical power and to resolve discrepancies in genetic association studies. A meta-analysis summarizes research findings, increases statistical power and enables the identification of genuine associations between genotypes and phenotypes. Meta-analysis techniques are increasingly used in GAS, but the number of published meta-analyses containing errors is also increasing. Although there are several software packages that implement meta-analysis, none of them are specifically designed for genetic association studies and in most cases their use requires advanced programming or scripting expertise. We have developed MetaGenyo, a web tool for meta-analysis in GAS. MetaGenyo implements a complete and comprehensive workflow that can be executed in an easy-to-use environment without programming knowledge. MetaGenyo has been developed to guide users through the main steps of a GAS meta-analysis, covering the Hardy-Weinberg test, statistical association for different genetic models, analysis of heterogeneity, testing for publication bias, subgroup analysis and robustness testing of the results. MetaGenyo is a useful tool to conduct comprehensive genetic association meta-analyses. The application is freely available at http://bioinfo.genyo.es/metagenyo/.
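One small piece of the workflow described above, a fixed-effect (inverse-variance) meta-analysis of per-study log odds ratios with Cochran's Q and I^2 as heterogeneity measures, can be sketched as follows. This is illustrative only and is not MetaGenyo's code; the study inputs are hypothetical.

    import math

    def fixed_effect_meta(log_ors, ses):
        """Inverse-variance fixed-effect pooling of per-study log odds ratios,
        with Cochran's Q and I^2 as heterogeneity measures."""
        w = [1.0 / se ** 2 for se in ses]
        pooled = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
        se_pooled = math.sqrt(1.0 / sum(w))
        q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_ors))
        df = len(log_ors) - 1
        i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
        return math.exp(pooled), se_pooled, q, i2

    # Hypothetical studies: log odds ratios and their standard errors.
    print(fixed_effect_meta([0.18, 0.25, 0.05], [0.10, 0.12, 0.20]))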
Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.
Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick
2016-02-01
Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during the FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each one of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1, 2014, to June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term care pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.
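The reported final check error rate follows directly from the stated counts; a one-line check (the exact denominator was slightly above 133,000, which is why the paper reports 0.015%):

    doses = 133_000          # lower bound on opportunities for error in the study window
    final_check_errors = 21  # errors that migrated through the entire medication-use system
    print(round(100 * final_check_errors / doses, 3))  # 0.016; with the true denominator, 0.015%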
Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging
Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.
2017-01-01
Background: To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods: Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results: Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions: A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase-error-induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed, the inconsistent signal phase modulation across shots has to be resolved before shot combination is performed. PMID:28516049
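A toy numerical illustration of the basic mechanism discussed above, under the simplifying assumption that the tip-up pulse at the end of the diffusion preparation stores only the magnetization component along its axis, so a residual phase error phi leaves a stored signal proportional to cos(phi). The per-shot phase errors are made-up values; the paper's analytical treatment and the behavior of phase cycling, SOS, and magnitude stabilizers are more involved.

    import numpy as np

    # Toy model (not the paper's): after the diffusion preparation, the tip-up
    # pulse stores only the component along its axis, so a residual phase error
    # phi leaves a stored signal proportional to cos(phi).
    phi_err = np.deg2rad(np.array([0.0, 15.0, 30.0, 60.0]))  # hypothetical per-shot errors
    stored = np.abs(np.cos(phi_err))                          # magnitude modulation, no stabilizer
    print(stored)  # 1.00, 0.97, 0.87, 0.50 -> shot-to-shot magnitude variation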
NASA Astrophysics Data System (ADS)
Sumule, U.; Amin, S. M.; Fuad, Y.
2018-01-01
This study aims to determine the types and causes of errors made by junior high school students in completing PISA space-and-shape content items, as well as the efforts made to overcome those errors. Two subjects were selected based on the results of a mathematical ability test as those making the most errors while still being able to communicate orally and in writing. The two selected subjects then worked on PISA ability test questions and were interviewed to find out the type and cause of each error, and were then given scaffolding based on the type of mistake made. The results show that the types of errors the students make are comprehension and transformation errors. The causes are that students were not able to identify the keywords in the question, write down what is known or given, or specify formulas or devise a plan. To overcome these errors, students were given scaffolding. The scaffolding given to overcome comprehension errors was reviewing and restructuring, while that given to overcome transformation errors was reviewing, restructuring, explaining, and developing representational tools. Teachers are advised to use scaffolding to resolve errors so that students are able to avoid them.
Laser-fluorescence measurement of marine algae
NASA Technical Reports Server (NTRS)
Browell, E. V.
1980-01-01
Progress in remote sensing of algae by laser-induced fluorescence is subject of comprehensive report. Existing single-wavelength and four-wavelength systems are reviewed, and new expression for power received by airborne sensor is derived. Result differs by as much as factor of 10 from those previously reported. Detailed error analysis evaluates factors affecting accuracy of laser-fluorosensor systems.
Report on Automated Semantic Analysis of Scientific and Engineering Codes
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Follen, Greg (Technical Monitor)
2001-01-01
The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors, like the MCO error, are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations, describes the approach, the tool's status, the challenges, related research, and a development strategy.
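For illustration of the kind of physical-meaning check such a tool aims to automate, the sketch below flags a silent mix of unit systems of the sort behind the MCO loss (impulse reported in pound-force seconds where newton-seconds were expected). The dictionaries and function are hypothetical and are not part of the tool described in this report.

    # Illustrative sketch of a dimensional/unit-consistency check of the kind such
    # a tool might perform; it is not the analysis code described in this report.
    UNITS = {"N*s": {"kg": 1, "m": 1, "s": -1},    # newton-seconds (SI impulse)
             "lbf*s": {"kg": 1, "m": 1, "s": -1}}  # pound-force seconds: same dimensions
    SCALE = {"N*s": 1.0, "lbf*s": 4.44822}         # conversion factors to N*s

    def add_impulse(value_a, unit_a, value_b, unit_b):
        """Adding two impulses is dimensionally legal, but silently mixing unit
        systems (the Mars Climate Orbiter failure mode) corrupts the result."""
        if UNITS[unit_a] != UNITS[unit_b]:
            raise ValueError("incompatible dimensions")
        if unit_a != unit_b:
            print("warning: same dimensions but different units -- converting to N*s")
        return value_a * SCALE[unit_a] + value_b * SCALE[unit_b]

    print(add_impulse(10.0, "N*s", 10.0, "lbf*s"))  # 54.48 N*s, with a warning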
Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.
2011-01-01
The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples and real handwritten text as well. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures. PMID:22164106
NASA Astrophysics Data System (ADS)
Altan, O.; Kemper, G.
2012-07-01
The GIS-based analysis of land use change in Istanbul delivers a huge and comprehensive database that can be used for further analysis. Trend analysis and scenarios enable a view of the future that highlights the need for proper planning. Understanding gained by comparison with other cities also helps avoid copying their errors. GIS in combination with ancillary data opens a wide field for managing the future of Istanbul.
Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G
2007-10-01
Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.
Mirman, Daniel; Zhang, Yongsheng; Wang, Ze; Coslett, H. Branch; Schwartz, Myrna F.
2015-01-01
Theories about the architecture of language processing differ with regard to whether verbal and nonverbal comprehension share a functional and neural substrate and how meaning extraction in comprehension relates to the ability to use meaning to drive verbal production. We (re-)evaluate data from 17 cognitive-linguistic performance measures of 99 participants with chronic aphasia using factor analysis to establish functional components and support vector regression-based lesion-symptom mapping to determine the neural correlates of deficits on these functional components. The results are highly consistent with our previous findings: production of semantic errors is behaviorally and neuroanatomically distinct from verbal and nonverbal comprehension. Semantic errors were most strongly associated with left ATL damage whereas deficits on tests of verbal and non-verbal semantic recognition were most strongly associated with damage to deep white matter underlying the frontal lobe at the confluence of multiple tracts, including the inferior fronto-occipital fasciculus, the uncinate fasciculus, and the anterior thalamic radiations. These results suggest that traditional views based on grey matter hub(s) for semantic processing are incomplete and that the role of white matter in semantic cognition has been underappreciated. PMID:25681739
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal
2016-05-15
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
Healing assessment of tile sets for error tolerance in DNA self-assembly.
Hashempour, M; Mashreghian Arani, Z; Lombardi, F
2008-12-01
An assessment of the effectiveness of healing for error tolerance in DNA self-assembly tile sets for algorithmic/nano-manufacturing applications is presented. Initially, the conditions for correct binding of a tile to an existing aggregate are analysed using a Markovian approach; based on this analysis, it is proved that correct aggregation (as identified with a so-called ideal tile set) is not always met for the existing tile sets for nano-manufacturing. A metric for assessing tile sets for healing by utilising punctures is proposed. Tile sets are investigated and assessed with respect to features such as error (mismatched tile) movement, punctured area and bond types. Subsequently, it is shown that the proposed metric can comprehensively assess the healing effectiveness of a puncture type for a tile set and its capability to attain error tolerance for the desired pattern. Extensive simulation results are provided.
2011-01-01
Background: The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results: We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions: The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
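As an illustration of the quality filtering discussed above, the sketch below decodes Phred+33 quality strings and applies one simple criterion (minimum mean quality). The study combined several filtering criteria; the threshold here is an assumption for illustration.

    def phred_scores(quality_string, offset=33):
        """Decode an Illumina FASTQ quality string (Phred+33) to integer scores."""
        return [ord(c) - offset for c in quality_string]

    def passes_filter(quality_string, min_mean_q=30):
        """One simple filtering criterion: discard reads whose mean base quality
        falls below a threshold."""
        q = phred_scores(quality_string)
        return sum(q) / len(q) >= min_mean_q

    print(passes_filter("IIIIIIIIII"))  # all Q40 -> True
    print(passes_filter("##########"))  # all Q2  -> False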
On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology
NASA Astrophysics Data System (ADS)
Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-08-01
We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
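The noise-weighted error analysis referred to above can be sketched as follows: a noise-weighted inner product between frequency-domain waveforms and the corresponding mismatch (one minus the normalized overlap). The PSD, waveforms, and the omission of maximization over time and phase shifts are simplifying assumptions for illustration.

    import numpy as np

    def nw_inner(h1, h2, psd, df):
        """Noise-weighted inner product 4*Re[sum(h1*conj(h2)/Sn)]*df for
        frequency-domain waveforms h1, h2 and a one-sided noise PSD Sn."""
        return 4.0 * np.real(np.sum(h1 * np.conj(h2) / psd)) * df

    def mismatch(h1, h2, psd, df):
        """Mismatch = 1 - normalized overlap; maximization over time and phase
        shifts, done in practice, is omitted from this sketch."""
        overlap = nw_inner(h1, h2, psd, df) / np.sqrt(
            nw_inner(h1, h1, psd, df) * nw_inner(h2, h2, psd, df))
        return 1.0 - overlap

    # Toy waveforms and a stand-in noise curve on a coarse frequency grid.
    f = np.linspace(20.0, 512.0, 2048)
    df = f[1] - f[0]
    psd = 1e-46 * (f / 100.0) ** -4 + 1e-46
    h1 = 1e-23 * np.exp(2j * np.pi * f * 0.01) / f
    h2 = h1 * np.exp(1j * 2e-3 * np.sin(f / 50.0))   # small phase perturbation
    print(mismatch(h1, h2, psd, df))                  # small mismatch, of order 1e-6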
Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.
Baldwin, Abigail; Rodriguez, Elizabeth S
2016-02-01
The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors that may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.
Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, J.; Whitmore, J.; Blair, N.
2014-08-01
This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists Conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
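The two headline metrics, annual energy error and hourly root mean squared error, can be computed as sketched below. The normalization of the hourly RMSE (here by the mean measured hourly value) and the synthetic data are assumptions for illustration; the report's exact definitions may differ.

    import numpy as np

    def annual_error_pct(modeled_kwh, measured_kwh):
        """Annual energy error of the model relative to measured production."""
        return 100.0 * (np.sum(modeled_kwh) - np.sum(measured_kwh)) / np.sum(measured_kwh)

    def hourly_rmse_pct(modeled_kwh, measured_kwh):
        """Hourly RMSE as a percentage, normalized here by the mean measured
        hourly value (the report's exact normalization may differ)."""
        rmse = np.sqrt(np.mean((modeled_kwh - measured_kwh) ** 2))
        return 100.0 * rmse / np.mean(measured_kwh)

    # Hypothetical one-week hourly series standing in for a year of data.
    rng = np.random.default_rng(1)
    measured = np.clip(np.sin(np.linspace(0.0, 14.0 * np.pi, 168)), 0.0, None) * 100.0
    modeled = measured * 1.03 + rng.normal(0.0, 2.0, 168)
    print(annual_error_pct(modeled, measured), hourly_rmse_pct(modeled, measured))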
The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.
NASA Technical Reports Server (NTRS)
Russell, J. M., III; Drayson, S. R.
1972-01-01
Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.
Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M
2002-01-01
Computerized prescription of drugs is expected to reduce the number of many preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases into the system, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.
Space shuttle post-entry and landing analysis. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Crawford, B. S.; Duiven, E. M.
1973-01-01
Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.
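The blending of navaid, IMU, and altimeter data described above is done by a Kalman filter; the one-state sketch below shows only the measurement-update (blending) step with hypothetical numbers, whereas the candidate onboard filters carry 6 to 24 states.

    def kalman_update(x_pred, p_pred, z, r):
        """One scalar Kalman measurement update: blend the IMU-propagated state
        (estimate x_pred with variance p_pred) with a navaid/altimeter
        measurement z of variance r."""
        k = p_pred / (p_pred + r)       # Kalman gain
        x = x_pred + k * (z - x_pred)   # updated estimate
        p = (1.0 - k) * p_pred          # updated variance
        return x, p

    # Hypothetical numbers: propagated altitude 1500 m (variance 25 m^2),
    # altimeter reading 1496 m (variance 9 m^2).
    print(kalman_update(1500.0, 25.0, 1496.0, 9.0))  # -> (~1497.1 m, ~6.6 m^2)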
Developing a model for the adequate description of electronic communication in hospitals.
Saboor, Samrend; Ammenwerth, Elske
2011-01-01
Adequate information and communication technology (ICT) systems can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach.
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
Irony and proverb comprehension in schizophrenia: do female patients "dislike" ironic remarks?
Rapp, Alexander M; Langohr, Karin; Mutschler, Dorothee E; Wild, Barbara
2014-01-01
Difficulties in understanding irony and sarcasm are part of the social cognition deficits in patients with schizophrenia. A number of studies have reported higher error rates during comprehension in patients with schizophrenia. However, the relationships of these impairments to schizotypal personality traits and other language deficits, such as the comprehension of proverbs, are unclear. We investigated irony and proverb comprehension in an all-female sample of 20 schizophrenia patients and 27 matched controls. Subjects indicated if a statement was intended to be ironic, literal, or meaningless and furthermore rated the meanness and funniness of the stimuli and certainty of their decision. Patients made significantly more errors than controls did. Globally, there were no overall differences in the ratings. However, patients rated the subgroup of stimuli with answers given incorrectly as having significantly less meanness and in case of an error indicated a significantly higher certainty than controls. Across all of the study participants, performances in irony (r = -0.51) and proverb (r = 0.56) comprehension were significantly correlated with schizotypal personality traits, suggesting a continuum of nonliteral language understanding. Because irony is so frequent in everyday conversations, this makes irony an especially promising candidate for social cognition training in schizophrenia.
Analysis of laser fluorosensor systems for remote algae detection and quantification
NASA Technical Reports Server (NTRS)
Browell, E. V.
1977-01-01
The development and performance of single- and multiple-wavelength laser fluorosensor systems for use in the remote detection and quantification of algae are discussed. The appropriate equation for the fluorescence power received by a laser fluorosensor system is derived in detail. Experimental development of a single wavelength system and a four wavelength system, which selectively excites the algae contained in the four primary algal color groups, is reviewed, and test results are presented. A comprehensive error analysis is reported which evaluates the uncertainty in the remote determination of the chlorophyll a concentration contained in algae by single- and multiple-wavelength laser fluorosensor systems. Results of the error analysis indicate that the remote quantification of chlorophyll a by a laser fluorosensor system requires optimum excitation wavelength(s), remote measurement of marine attenuation coefficients, and supplemental instrumentation to reduce uncertainties in the algal fluorescence cross sections.
Kaldjian, Lauris C; Jones, Elizabeth W; Rosenthal, Gary E; Tripp-Reimer, Toni; Hillis, Stephen L
2006-01-01
BACKGROUND: Physician disclosure of medical errors to institutions, patients, and colleagues is important for patient safety, patient care, and professional education. However, the variables that may facilitate or impede disclosure are diverse and lack conceptual organization. OBJECTIVE: To develop an empirically derived, comprehensive taxonomy of factors that affects voluntary disclosure of errors by physicians. DESIGN: A mixed-methods study using qualitative data collection (structured literature search and exploratory focus groups), quantitative data transformation (sorting and hierarchical cluster analysis), and validation procedures (confirmatory focus groups and expert review). RESULTS: Full-text review of 316 articles identified 91 impeding or facilitating factors affecting physicians' willingness to disclose errors. Exploratory focus groups identified an additional 27 factors. Sorting and hierarchical cluster analysis organized factors into 8 domains. Confirmatory focus groups and expert review relocated 6 factors, removed 2 factors, and modified 4 domain names. The final taxonomy contained 4 domains of facilitating factors (responsibility to patient, responsibility to self, responsibility to profession, responsibility to community), and 4 domains of impeding factors (attitudinal barriers, uncertainties, helplessness, fears and anxieties). CONCLUSIONS: A taxonomy of facilitating and impeding factors provides a conceptual framework for a complex field of variables that affects physicians' willingness to disclose errors to institutions, patients, and colleagues. This taxonomy can be used to guide the design of studies to measure the impact of different factors on disclosure, to assist in the design of error-reporting systems, and to inform educational interventions to promote the disclosure of errors to patients. PMID:16918739
Reyes, Mauricio; Zysset, Philippe
2017-01-01
Osteoporosis leads to hip fractures in aging populations and is diagnosed by modern medical imaging techniques such as quantitative computed tomography (QCT). Hip fracture sites involve trabecular bone, whose strength is determined by volume fraction and orientation, known as fabric. However, bone fabric cannot be reliably assessed in clinical QCT images of proximal femur. Accordingly, we propose a novel registration-based estimation of bone fabric designed to preserve tensor properties of bone fabric and to map bone fabric by a global and local decomposition of the gradient of a non-rigid image registration transformation. Furthermore, no comprehensive analysis on the critical components of this methodology has been previously conducted. Hence, the aim of this work was to identify the best registration-based strategy to assign bone fabric to the QCT image of a patient’s proximal femur. The normalized correlation coefficient and curvature-based regularization were used for image-based registration and the Frobenius norm of the stretch tensor of the local gradient was selected to quantify the distance among the proximal femora in the population. Based on this distance, closest, farthest and mean femora with a distinction of sex were chosen as alternative atlases to evaluate their influence on bone fabric prediction. Second, we analyzed different tensor mapping schemes for bone fabric prediction: identity, rotation-only, rotation and stretch tensor. Third, we investigated the use of a population average fabric atlas. A leave one out (LOO) evaluation study was performed with a dual QCT and HR-pQCT database of 36 pairs of human femora. The quality of the fabric prediction was assessed with three metrics, the tensor norm (TN) error, the degree of anisotropy (DA) error and the angular deviation of the principal tensor direction (PTD). The closest femur atlas (CTP) with a full rotation (CR) for fabric mapping delivered the best results with a TN error of 7.3 ± 0.9%, a DA error of 6.6 ± 1.3% and a PTD error of 25 ± 2°. The closest to the population mean femur atlas (MTP) using the same mapping scheme yielded only slightly higher errors than CTP for substantially less computing efforts. The population average fabric atlas yielded substantially higher errors than the MTP with the CR mapping scheme. Accounting for sex did not bring any significant improvements. The identified fabric mapping methodology will be exploited in patient-specific QCT-based finite element analysis of the proximal femur to improve the prediction of hip fracture risk. PMID:29176881
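The "rotation-only" tensor mapping scheme evaluated above can be sketched as follows: the local deformation gradient of the registration transform is polar-decomposed into a rotation and a stretch, and the atlas fabric tensor is reoriented by the rotation alone, which preserves its eigenvalues (and hence the degree of anisotropy). The fabric tensor and deformation gradient below are hypothetical.

    import numpy as np
    from scipy.linalg import polar

    def map_fabric_rotation_only(fabric, def_gradient):
        """Map an atlas fabric tensor to the patient using only the rotation part
        of the local deformation gradient F of the registration transform
        (F = R U by polar decomposition).  Rotating as R M R^T preserves the
        eigenvalues, and hence the degree of anisotropy; the full scheme also
        uses the stretch tensor U."""
        rotation, _stretch = polar(def_gradient)
        return rotation @ fabric @ rotation.T

    # Hypothetical transversely isotropic fabric tensor and local deformation gradient.
    fabric = np.diag([1.4, 0.8, 0.8])
    angle = np.deg2rad(20.0)
    rot_z = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                      [np.sin(angle),  np.cos(angle), 0.0],
                      [0.0, 0.0, 1.0]])
    F = rot_z @ np.diag([1.05, 0.97, 1.0])  # rotation composed with a small stretch
    mapped = map_fabric_rotation_only(fabric, F)
    print(np.linalg.eigvalsh(mapped))       # eigenvalues unchanged: 0.8, 0.8, 1.4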
The Nature of the Nodes, Weights and Degree of Precision in Gaussian Quadrature Rules
ERIC Educational Resources Information Center
Prentice, J. S. C.
2011-01-01
We present a comprehensive proof of the theorem that relates the weights and nodes of a Gaussian quadrature rule to its degree of precision. This level of detail is often absent in modern texts on numerical analysis. We show that the degree of precision is maximal, and that the approximation error in Gaussian quadrature is minimal, in a…
ERIC Educational Resources Information Center
Nist, Sherrie L.; And Others
Process comprehension helps students both to see why they make comprehension errors and to develop techniques to correct these problems. Keeping journals of their behavior while reading magazine articles helps students recognize relationships between their rate, comprehension, interests, and personal reading habits. Once students have developed an…
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory’s reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO) has two parts, adaptive V Ref shift (AVS) and V TH space control (VSC). AVS reduces read error and latency by adaptively optimizing the reference voltage (V Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V Ref’s in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V TH states. DVO reduces BER by 80%.
Article Errors in the English Writing of Saudi EFL Preparatory Year Students
ERIC Educational Resources Information Center
Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.
2017-01-01
This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programe in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…
WISC-R Examiner Errors: Cause for Concern.
ERIC Educational Resources Information Center
Slate, John R.; Chick, David
1989-01-01
Clinical psychology graduate students (N=14) administered Wechsler Intelligence Scale for Children-Revised. Found numerous scoring and mechanical errors that influenced full-scale intelligence quotient scores on two-thirds of protocols. Particularly prone to error were Verbal subtests of Vocabulary, Comprehension, and Similarities. Noted specific…
Rong, Hao; Tian, Jin
2015-05-01
The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility with a case study of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate to analyze human error within a sociotechnical system. System-theoretic accident model and processes (STAMP) was used to develop a universal framework for human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors are related to a broad view of sociotechnical systems, and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations regarding both technical and managerial improvements for lowering the risk of the accident are proposed. The interdisciplinary approach provides complementary support between system safety and human factors. The integrated method based on STAMP and the SD model contributes effectively to HRA. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.
1993-04-01
determining effective group functioning, leader-group interaction, and decision making; (2) factors that determine effective, low-error human performance...infectious disease and biological defense vaccines and drugs, vision, neurotoxins, neurochemistry, molecular neurobiology, neurodegenerative diseases...Potential Rotor/Comprehensive Analysis Model for Rotor Aerodynamics-Johnson Aeronautics (FPR/CAMRAD-JA) code to predict Blade Vortex Interaction (BVI
Evitts, Paul M; Starmer, Heather; Teets, Kristine; Montgomery, Christen; Calhoun, Lauren; Schulze, Allison; MacKenzie, Jenna; Adams, Lauren
2016-11-01
There is currently minimal information on the impact of dysphonia secondary to phonotrauma on listeners. Considering the high incidence of voice disorders with professional voice users, it is important to understand the impact of a dysphonic voice on their audiences. Ninety-one healthy listeners (39 men, 52 women; mean age = 23.62 years) were presented with speech stimuli from 5 healthy speakers and 5 speakers diagnosed with dysphonia secondary to phonotrauma. Dependent variables included processing speed (reaction time [RT] ratio), speech intelligibility, and listener comprehension. Voice quality ratings were also obtained for all speakers by 3 expert listeners. Statistical results showed significant differences between RT ratio and number of speech intelligibility errors between healthy and dysphonic voices. There was not a significant difference in listener comprehension errors. Multiple regression analyses showed that voice quality ratings from the Consensus Assessment Perceptual Evaluation of Voice (Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kraemer, & Hillman, 2009) were able to predict RT ratio and speech intelligibility but not listener comprehension. Results of the study suggest that although listeners require more time to process and have more intelligibility errors when presented with speech stimuli from speakers with dysphonia secondary to phonotrauma, listener comprehension may not be affected.
Bonrath, Esther M; Dedy, Nicolas J; Gordon, Lauren E; Grantcharov, Teodor P
2015-08-01
The aim of the study was to determine whether individualized coaching improved surgical technical skill in the operating room to a higher degree than current residency training. Clinical training in the operating room is a valuable opportunity for surgeons to acquire skill and knowledge; however, it often remains underutilized. Coaching has been successfully used in various industries to enhance performance, but its role in surgery has been insufficiently investigated. This randomized controlled trial was conducted at one surgical training program. Trainees undergoing a minimally invasive surgery rotation were randomized to either conventional training (CT) or comprehensive surgical coaching (CSC). CT included ward and operating room duties, and regular departmental teaching sessions. CSC comprised performance analysis, debriefing, feedback, and behavior modeling. Primary outcome measures were technical performance as measured on global and procedure-specific rating scales, and surgical safety parameters, measured by error count. Operative performance was assessed by blinded video analysis of the first and last cases recorded by the participants during their rotation. Twenty residents were randomized and 18 completed the study. At posttraining the CSC group (n = 9) scored significantly higher on a procedure-specific skill scale compared with the CT group (n = 9) [median, 3.90 (interquartile range, 3.68-4.30) vs 3.60 (2.98-3.70), P = 0.017], and made fewer technical errors [10 (7-13) vs 18 (13-21), P = 0.003]. Significant within-group improvements for all skill metrics were only noted in the CSC group. Comprehensive surgical coaching enhances surgical training and results in skill acquisition superior to conventional training.
Epidermis and Enamel: Insights Into Gnawing Criticisms of Human Bitemark Evidence.
Barsley, Robert E; Bernstein, Mark L; Brumit, Paula C; Dorion, Robert B J; Golden, Gregory S; Lewis, James M; McDowell, John D; Metcalf, Roger D; Senn, David R; Sweet, David; Weems, Richard A
2018-06-01
Critics describe forensic dentists' management of bitemark evidence as junk science with poor sensitivity and specificity and state that linkages to a biter are unfounded. Those vocal critics, supported by certain media, characterize odontologists' previous errors as egregious and petition government agencies to render bitemark evidence inadmissible. Odontologists acknowledge that some practitioners have made past mistakes. However, it does not logically follow that the errors of a few identify a systemic failure of bitemark analysis. Scrutiny of the contentious cases shows that most occurred 20 to 40 years ago. Since then, research has been ongoing and more conservative guidelines, standards, and terminology have been adopted so that past errors are no longer reflective of current safeguards. The authors recommend a comprehensive root analysis of problem cases to be used to determine all the factors that contributed to those previous problems. The legal community also shares responsibility for some of the past erroneous convictions. Currently, most proffered bitemark cases referred to odontologists do not reach courts because those forensic dentists dismiss them as unacceptable or insufficient for analysis. Most bitemark evidence cases have been properly managed by odontologists. Bitemark evidence and testimony remain relevant and have made significant contributions in the justice system.
New analysis strategies for micro aspheric lens metrology
NASA Astrophysics Data System (ADS)
Gugsa, Solomon Abebe
Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely, scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and compare the individual contributions to the overall uncertainty.
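A minimal sketch of the best-fit conic procedure with Monte Carlo uncertainty described above: a conic sag profile is fit by least squares to a simulated plano-convex microlens measurement, and the fit is repeated over noise realizations to obtain a spread on the conic constant. The lens parameters, noise level, and number of trials are assumptions for illustration, not values from this work.

    import numpy as np
    from scipy.optimize import curve_fit

    def conic_sag(r, radius, k):
        """Sag of a conic surface with base radius 'radius' and conic constant k.
        The clip is only a numerical guard for the optimizer."""
        arg = np.clip(1.0 - (1.0 + k) * r**2 / radius**2, 0.0, None)
        return r**2 / (radius * (1.0 + np.sqrt(arg)))

    # Hypothetical microlens: 100 um base radius, k = -0.6, 50 um semi-aperture.
    r = np.linspace(0.0, 50e-6, 200)
    z_true = conic_sag(r, 100e-6, -0.6)

    def fit_conic(z_measured):
        popt, _ = curve_fit(conic_sag, r, z_measured, p0=(90e-6, -0.5))
        return popt  # (radius, k)

    # Monte Carlo over an assumed 5 nm RMS measurement noise.
    rng = np.random.default_rng(2)
    ks = [fit_conic(z_true + rng.normal(0.0, 5e-9, r.size))[1] for _ in range(200)]
    print(np.mean(ks), np.std(ks))  # best-fit conic constant and its 1-sigma spread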
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
First-year Analysis of the Operating Room Black Box Study.
Jung, James J; Jüni, Peter; Lebovic, Gerald; Grantcharov, Teodor
2018-06-18
To characterize intraoperative errors, events, and distractions, and measure technical skills of surgeons in minimally invasive surgery practice. Adverse events in the operating room (OR) are common contributors to morbidity and mortality in surgical patients. Adverse events often occur due to deviations in performance and environmental factors. Although comprehensive intraoperative data analysis and transparent disclosure have been advocated to better understand how to improve surgical safety, they have rarely been done. We conducted a prospective cohort study in 132 consecutive patients undergoing elective laparoscopic general surgery at an academic hospital during the first year after the definitive implementation of a multiport data capture system called the OR Black Box to identify intraoperative errors, events, and distractions. Expert analysts characterized intraoperative distractions, errors, and events, and measured trainee involvement as main operator. Technical skills were compared, crude and risk-adjusted, among the attending surgeon and trainees. Auditory distractions occurred a median of 138 times per case [interquartile range (IQR) 96-190]. At least 1 cognitive distraction appeared in 84 cases (64%). Medians of 20 errors (IQR 14-36) and 8 events (IQR 4-12) were identified per case. Both errors and events occurred often in dissection and reconstruction phases of operation. Technical skills of residents were lower than those of the attending surgeon (P = 0.015). During elective laparoscopic operations, frequent intraoperative errors and events, variation in surgeons' technical skills, and a high amount of environmental distractions were identified using the OR Black Box.
Benavides-Varela, S; Piva, D; Burgio, F; Passarini, L; Rolma, G; Meneghello, F; Semenza, C
2017-03-01
Arithmetical deficits in right-hemisphere damaged patients have been traditionally considered secondary to visuo-spatial impairments, although the exact relationship between the two deficits has rarely been assessed. The present study implemented a voxelwise lesion analysis among 30 right-hemisphere damaged patients and a controlled, matched-sample, cross-sectional analysis with 35 cognitively normal controls regressing three composite cognitive measures on standardized numerical measures. The results showed that patients and controls significantly differ in Number comprehension, Transcoding, and Written operations, particularly subtractions and multiplications. The percentage of patients performing below the cutoffs ranged between 27% and 47% across these tasks. Spatial errors were associated with extensive lesions in fronto-temporo-parietal regions, which frequently lead to neglect, whereas pure arithmetical errors appeared related to more confined lesions in the right angular gyrus and its proximity. Stepwise regression models consistently revealed that spatial errors were primarily predicted by composite measures of visuo-spatial attention/neglect and representational abilities. Conversely, specific errors of arithmetic nature linked to representational abilities only. Crucially, the proportion of arithmetical errors (ranging from 65% to 100% across tasks) was higher than that of spatial ones. These findings thus suggest that unilateral right hemisphere lesions can directly affect core numerical/arithmetical processes, and that right-hemisphere acalculia is not only ascribable to visuo-spatial deficits as traditionally thought. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.
2012-01-01
There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method will use state-of-the-art uncertainty analysis, applying different turbulence models, and draw conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward-facing step.
Space shuttle entry and landing navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Crawford, B. S.
1974-01-01
A navigation system for the entry phase of a Space Shuttle mission is evaluated: an aided-inertial system that uses a Kalman filter to mix IMU data with data derived from external navigation aids. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points -- end-of-blackout, start of final approach, over runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
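The covariance-propagation core of such an aided-inertial error budget can be sketched as follows. This is a minimal two-state stand-in for the 101-state truth model; the matrices, noise levels, and blackout timing are assumed purely for illustration.

```python
# Minimal sketch of Kalman-filter covariance analysis for an aided-inertial
# navigator: propagate an error covariance and update it with an external aid.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # error-state transition (position, velocity)
Q = np.diag([0.0, 1e-4])                       # accelerometer-like process noise
H = np.array([[1.0, 0.0]])                     # position-type aid (e.g., a drag pseudo-measurement)
R = np.array([[100.0]])                        # aid measurement noise variance

P = np.diag([1e4, 1.0])                        # initial position/velocity error covariance
for step in range(60):
    P = F @ P @ F.T + Q                        # propagate (during "blackout": no update)
    if step >= 30:                             # aid becomes available after end of blackout
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(2) - K @ H) @ P            # measurement update
print("1-sigma position error:", np.sqrt(P[0, 0]))
print("1-sigma velocity error:", np.sqrt(P[1, 1]))
```

An error budget follows by repeating this propagation with one error source enlarged at a time and recording the change in the final covariance, which is what the sensitivity curves in the report summarize.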
Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B
2017-08-01
Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% in other claims (P = 0.001), with no significant difference in payment size. Among 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded-partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of root-mean square error of leave-one out cross validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters of regression coefficient of 0.996-0.998, root-mean square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set show the reliability of the developed method. In addition, the developed method is externally validated with three samples in validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used for this purpose and the gasoline proportions were in range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
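A minimal sketch of the unfold-then-PLS calibration step, with the number of latent variables chosen at the minimum leave-one-out cross-validation RMSE, might look like the following. The data here are synthetic placeholders, and scikit-learn's PLSRegression stands in for whatever implementation the authors used.

```python
# Sketch of unfolded PLS calibration: each 2D chromatogram is unfolded into a
# vector, and the number of latent variables is chosen at the minimum RMSECV.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_rt1, n_rt2 = 20, 30, 40
X2d = rng.random((n_samples, n_rt1, n_rt2))       # GCxGC-FID signals (synthetic)
y = rng.uniform(70, 100, n_samples)               # % gasoline in the blend (synthetic)
X = X2d.reshape(n_samples, -1)                    # the "unfolding" step

rmsecv = []
for n_lv in range(1, 8):
    y_cv = cross_val_predict(PLSRegression(n_components=n_lv), X, y, cv=LeaveOneOut())
    rmsecv.append(np.sqrt(np.mean((y - y_cv.ravel()) ** 2)))
best = int(np.argmin(rmsecv)) + 1
print("RMSECV per component count:", np.round(rmsecv, 3))
print("selected number of latent variables:", best)
```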
Differences between conduction aphasia and Wernicke's aphasia.
Anzaki, F; Izumi, S
2001-07-01
Conduction aphasia and Wernicke's aphasia have been differentiated by the degree of auditory language comprehension. We quantitatively compared the speech sound errors of two conduction aphasia patients and three Wernicke's aphasia patients on various language modality tests. All of the patients were Japanese. The two conduction aphasia patients had "conduites d'approche" errors and phonological paraphasia. The patient with mild Wernicke's aphasia made various errors. In the patient with severe Wernicke's aphasia, neologism was observed. Phonological paraphasia in the two conduction aphasia patients seemed to occur when the examinee searched for the target word. They made more errors in vowels than in consonants of target words on the naming and repetition tests. They seemed to search for the target word by the correct consonant phoneme and an incorrect vocalic phoneme in the table of the Japanese alphabet. The Wernicke's aphasia patients, who had severe impairment of auditory comprehension, made more errors in consonants than in vowels of target words. In conclusion, utterance of conduction aphasia and that of Wernicke's aphasia are qualitatively distinct.
Improving the quality of marine geophysical track line data: Along-track analysis
NASA Astrophysics Data System (ADS)
Chandler, Michael T.; Wessel, Paul
2008-02-01
We have examined 4918 track line geophysics cruises archived at the U.S. National Geophysical Data Center (NGDC) using comprehensive error checking methods. Each cruise was checked for observation outliers, excessive gradients, metadata consistency, and general agreement with satellite altimetry-derived gravity and predicted bathymetry grids. Thresholds for error checking were determined empirically through inspection of histograms for all geophysical values, gradients, and differences with gridded data sampled along ship tracks. Robust regression was used to detect systematic scale and offset errors found by comparing ship bathymetry and free-air anomalies to the corresponding values from global grids. We found many recurring error types in the NGDC archive, including poor navigation, inappropriately scaled or offset data, excessive gradients, and extended offsets in depth and gravity when compared to global grids. While ~5-10% of bathymetry and free-air gravity records fail our conservative tests, residual magnetic errors may exceed twice this proportion. These errors hinder the effective use of the data and may lead to mistakes in interpretation. To enable the removal of gross errors without over-writing original cruise data, we developed an errata system that concisely reports all errors encountered in a cruise. With such errata files, scientists may share cruise corrections, thereby preventing redundant processing. We have implemented these quality control methods in the modified MGD77 supplement to the Generic Mapping Tools software suite.
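The scale/offset check based on robust regression can be sketched as below: regress ship depths on the depths sampled from a global grid along the same track, and flag the cruise if the slope is far from 1 or the intercept far from 0. The depths, the contaminating scale factor, and the flagging thresholds are assumptions for illustration, not the study's actual values.

```python
# Sketch of a robust scale/offset check for one cruise against a global grid.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(2)
grid_depth = rng.uniform(1000, 6000, 500)                   # depths from the global grid, m
ship_depth = 0.5 * grid_depth + rng.normal(0, 40, 500)      # simulating a mis-scaled cruise
ship_depth[::50] += 2000                                    # plus a few gross outliers

model = HuberRegressor().fit(grid_depth.reshape(-1, 1), ship_depth)
slope, offset = model.coef_[0], model.intercept_
print(f"slope = {slope:.3f}, offset = {offset:.1f} m")
if abs(slope - 1.0) > 0.1 or abs(offset) > 100.0:           # empirical thresholds (assumed)
    print("flag cruise: possible scale or offset error")
```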
Gallego, Carlos; Martín-Aragoneses, M Teresa; López-Higes, Ramón; Pisón, Guzmán
2016-01-01
Deaf students have traditionally exhibited reading comprehension difficulties. In recent years, these comprehension problems have been partially offset through cochlear implantation (CI), and the subsequent improvement in spoken language skills. However, the use of cochlear implants has not managed to fully bridge the gap in language and reading between normally hearing (NH) and deaf children, as its efficacy depends on variables such as the age at implant. This study compared the reading comprehension of sentences in 19 children who received a cochlear implant before 24 months of age (early-CI) and 19 who received it after 24 months (late-CI) with a control group of 19 NH children. The task involved completing sentences in which the last word had been omitted. To complete each sentence, children had to choose a word from among several alternatives that included one syntactic and two semantic foils in addition to the target word. The results showed that deaf children with late-CI performed this task significantly worse than NH children, while those with early-CI exhibited no significant differences with NH children, except under more demanding processing conditions (long sentences with infrequent target words). Further, the error analysis revealed a preference of deaf students with early-CI for selecting the syntactic foil over a semantic one, which suggests that they draw upon syntactic cues during sentence processing in the same way as NH children do. In contrast, deaf children with late-CI do not appear to use a syntactic strategy, nor a semantic strategy based on the use of key words, as the literature suggests. Rather, the numerous errors of both kinds that the late-CI group made seem to indicate an inconsistent and erratic response when faced with a lack of comprehension. These findings are discussed in relation to differences in receptive vocabulary and short-term memory and their implications for sentence reading comprehension. Copyright © 2015 Elsevier Ltd. All rights reserved.
Statistical analysis of the determinations of the Sun's Galactocentric distance
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
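The kind of consistency and trend checks described can be sketched as follows: a weighted mean, a chi-square test of internal consistency, and a weighted linear trend against publication year. The R0 values, quoted errors, and years below are synthetic stand-ins for the 53 published measurements.

```python
# Sketch: weighted mean, consistency test, and trend test for published R0 values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
year = rng.integers(1992, 2012, 53)
R0 = rng.normal(8.0, 0.3, 53)          # published R0 estimates, kpc (synthetic)
err = rng.uniform(0.15, 0.6, 53)       # quoted formal errors, kpc (synthetic)

w = 1.0 / err**2
R0_mean = np.sum(w * R0) / np.sum(w)
chi2 = np.sum(((R0 - R0_mean) / err) ** 2)
p_consistency = stats.chi2.sf(chi2, df=len(R0) - 1)

# Weighted least-squares trend: R0 = a + b * (year - mean year)
x = year - year.mean()
A = np.vstack([np.ones_like(x), x]).T
cov = np.linalg.inv(A.T @ np.diag(w) @ A)
a, b = cov @ A.T @ np.diag(w) @ R0
b_err = np.sqrt(cov[1, 1])
print(f"weighted mean R0 = {R0_mean:.2f} kpc, consistency p-value = {p_consistency:.2f}")
print(f"trend = {b:.4f} +/- {b_err:.4f} kpc/yr")
```

A trend coefficient that is small compared with its error, as the abstract reports for the real data, argues against a bandwagon effect.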
Is There a Lexical Bias Effect in Comprehension Monitoring?
ERIC Educational Resources Information Center
Severens, Els; Hartsuiker, Robert J.
2009-01-01
Event-related potentials were used to investigate if there is a lexical bias effect in comprehension monitoring. The lexical bias effect in language production (the tendency of phonological errors to result in existing words rather than nonwords) has been attributed to an internal self-monitoring system, which uses the comprehension system, and…
Antidepressant and antipsychotic medication errors reported to United States poison control centers.
Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A
2018-05-08
To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207 670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15 975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then increased by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% were among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors. Copyright © 2018 John Wiley & Sons, Ltd.
Implementation of Concept Mapping to Novices: Reasons for Errors, a Matter of Technique or Content?
ERIC Educational Resources Information Center
Conradty, Catherine; Bogner, Franz X.
2010-01-01
Concept mapping is discussed as a means to promote meaningful learning and in particular progress in reading comprehension skills. Its increasing implementation necessitates the acquisition of adequate knowledge about frequent errors in order to make available an effective introduction to the new learning method. To analyse causes of errors, 283…
ERIC Educational Resources Information Center
Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret
2016-01-01
The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…
Reduced backscattering cross section (Sigma degree) data from the Skylab S-193 radar altimeter
NASA Technical Reports Server (NTRS)
Brown, G. S.
1975-01-01
Backscattering cross section per unit scattering area data, reduced from measurements made by the Skylab S-193 radar altimeter over the ocean surface are presented. Descriptions of the altimeter are given where applicable to the measurement process. Analytical solutions are obtained for the flat surface impulse response for the case of a nonsymmetrical antenna pattern. Formulations are developed for converting altimeter AGC outputs into values for the backscattering cross section. Reduced data are presented for Missions SL-2, 3 and 4 for all modes of the altimeter where sufficient calibration existed. The problem of interpreting land scatter data is also discussed. Finally, a comprehensive error analysis of the measurement is presented and worst case random and bias errors are estimated.
Kermani, Bahram G
2016-07-01
Crystal Genetics, Inc. is an early-stage genetic test company, focused on achieving the highest possible clinical-grade accuracy and comprehensiveness for detecting germline (e.g., in hereditary cancer) and somatic (e.g., in early cancer detection) mutations. Crystal's mission is to significantly improve the health status of the population, by providing high accuracy, comprehensive, flexible and affordable genetic tests, primarily in cancer. Crystal's philosophy is that when it comes to detecting mutations that are strongly correlated with life-threatening diseases, the detection accuracy of every single mutation counts: a single false-positive error could cause severe anxiety for the patient. And, more importantly, a single false-negative error could potentially cost the patient's life. Crystal's objective is to eliminate both of these error types.
Medication errors: problems and recommendations from a consensus meeting
Agrawal, Abha; Aronson, Jeffrey K; Britten, Nicky; Ferner, Robin E; de Smet, Peter A; Fialová, Daniela; Fitzgerald, Richard J; Likić, Robert; Maxwell, Simon R; Meyboom, Ronald H; Minuz, Pietro; Onder, Graziano; Schachter, Michael; Velo, Giampaolo
2009-01-01
Here we discuss 15 recommendations for reducing the risks of medication errors: Provision of sufficient undergraduate learning opportunities to make medical students safe prescribers. Provision of opportunities for students to practise skills that help to reduce errors. Education of students about common types of medication errors and how to avoid them. Education of prescribers in taking accurate drug histories. Assessment in medical schools of prescribing knowledge and skills and demonstration that newly qualified doctors are safe prescribers. European harmonization of prescribing and safety recommendations and regulatory measures, with regular feedback about rational drug use. Comprehensive assessment of elderly patients for declining function. Exploration of low-dose regimens for elderly patients and preparation of special formulations as required. Training for all health-care professionals in drug use, adverse effects, and medication errors in elderly people. More involvement of pharmacists in clinical practice. Introduction of integrated prescription forms and national implementation in individual countries. Development of better monitoring systems for detecting medication errors, based on classification and analysis of spontaneous reports of previous reactions, and for investigating the possible role of medication errors when patients die. Use of IT systems, when available, to provide methods of avoiding medication errors; standardization, proper evaluation, and certification of clinical information systems. Nonjudgmental communication with patients about their concerns and elicitation of symptoms that they perceive to be adverse drug reactions. Avoidance of defensive reactions if patients mention symptoms resulting from medication errors. PMID:19594525
Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope
NASA Astrophysics Data System (ADS)
Zheng, Yue; Zhang, Chunxi; Li, Lijing
2018-03-01
The excess relative intensity noise (RIN) generated from broadband sources degrades the angular-random-walk performance of a fiber-optic gyroscope dramatically. Many methods have been proposed and managed to suppress the excess RIN. However, the properties of the excess RIN under the influences of different optical errors in the fiber-optic gyroscope have not been systematically investigated. Therefore, it is difficult for the existing RIN-suppression methods to achieve the optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by the raised-cosine-type ripples in the optical spectrum are elaborately investigated. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.
How accurate are lexile text measures?
Stenner, A Jackson; Burdick, Hal; Sanford, Eleanor E; Burdick, Donald S
2006-01-01
The Lexile Framework for Reading models comprehension as the difference between a reader measure and a text measure. Uncertainty in comprehension rates results from unreliability in reader measures and inaccuracy in text readability measures. Whole-text processing eliminates sampling error in text measures. However, Lexile text measures are imperfect due to misspecification of the Lexile theory. The standard deviation component associated with theory misspecification is estimated at 64L for a standard-length passage (approximately 125 words). A consequence is that standard errors for longer texts (2,500 to 150,000 words) are measured on the Lexile scale with uncertainties in the single digits. Uncertainties in expected comprehension rates are largely due to imprecision in reader ability and not inaccuracies in text readabilities.
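A small worked example of how the 64L per-passage uncertainty translates into text-level standard errors, under the simplifying assumption that standard-length slices err independently (the paper's actual error model may differ in detail):

```python
# Per-slice uncertainty of ~64L, slices of ~125 words, text-level standard
# error scaling as 1/sqrt(number of slices) under an independence assumption.
import math

sd_per_slice = 64.0                      # Lexiles, per standard-length passage
for words in (2_500, 25_000, 150_000):
    n_slices = words / 125
    se = sd_per_slice / math.sqrt(n_slices)
    print(f"{words:>7} words: ~{se:.1f}L standard error")
```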
Panunzio, Michele F.; Antoniciello, Antonietta; Pisano, Alessandra; Rosa, Giovanna
2007-01-01
With respect to food safety, many works have studied the effectiveness of self-monitoring plans of food companies, designed using the Hazard Analysis and Critical Control Point (HACCP) method. On the other hand, in-depth research has not been conducted concerning the adherence of the plans to HACCP standards. During our research, we evaluated 116 self-monitoring plans adopted by food companies located in the territory of the Local Health Authority (LHA) of Foggia, Italy. The general errors (terminology, philosophy and redundancy) and the specific errors (transversal plan, critical limits, hazard specificity, and lack of procedures) were standardized. Concerning the general errors, terminological errors pertain to half the plans examined, 47% include superfluous elements and 60% have repetitive subjects. With regard to the specific errors, 77% of the plans examined contained specific errors. The evaluation has pointed out the lack of comprehension of the HACCP system by the food companies and has allowed the Servizio di Igiene degli Alimenti e della Nutrizione (Food and Nutrition Health Service), in its capacity as a control body, to intervene with the companies in order to improve the design of HACCP plans. PMID:17911662
Research Prototype: Automated Analysis of Scientific and Engineering Semantics
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Follen, Greg (Technical Monitor)
2001-01-01
Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. This problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units, physical, and mathematical quantity). Also, the procedure implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state of the art scientific codes.
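A toy sketch of the underlying idea of propagating physical units through code, here with a dict of base-dimension exponents and an addition check. This illustrates the concept only; it is not the prototype's actual implementation.

```python
# Toy dimensional-analysis propagation: multiply combines exponents, addition
# of incompatible units raises an error, mimicking what a code analyzer detects.
from dataclasses import dataclass, field

@dataclass
class Quantity:
    value: float
    units: dict = field(default_factory=dict)   # e.g. {"m": 1, "s": -1}

    def __mul__(self, other):
        u = dict(self.units)
        for dim, exp in other.units.items():
            u[dim] = u.get(dim, 0) + exp
        return Quantity(self.value * other.value, {d: e for d, e in u.items() if e})

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"unit error: {self.units} + {other.units}")
        return Quantity(self.value + other.value, dict(self.units))

velocity = Quantity(3.0, {"m": 1, "s": -1})
time = Quantity(2.0, {"s": 1})
distance = velocity * time                          # -> units {"m": 1}
print(distance.value, distance.units)
print((distance + Quantity(1.0, {"m": 1})).value)   # ok: same units
# distance + time would raise: a dimensional error has been detected
```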
Evaluation of the 3dMDface system as a tool for soft tissue analysis.
Hong, C; Choi, K; Kachroo, Y; Kwon, T; Nguyen, A; McComb, R; Moon, W
2017-06-01
To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images using 3dMDface system were made at 0, 1 and 24 hours; 1, 2, 3 and 4 weeks. Error magnitude statistics used include mean absolute difference, standard deviation of error, relative error magnitude and root mean square error. Intra-observer agreement for all measurements was attained. Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared to shorter distances. No systematic errors were observed for all performed paired t tests (P<.05). Agreement values between two observers ranged from 0.91 to 0.99. Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning and treatment evaluation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
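The error-magnitude statistics named above can be computed from paired measurements as in the sketch below. The numbers are synthetic, and the relative error magnitude is taken here as the mean absolute difference divided by the mean direct value, which may differ slightly from the study's exact definition.

```python
# Error-magnitude statistics for paired direct-anthropometry vs. 3D-image values.
import numpy as np

direct = np.array([32.1, 45.0, 51.3, 28.7, 60.2])   # caliper values, mm (synthetic)
image  = np.array([32.4, 44.6, 51.9, 28.9, 59.8])   # 3D-image values, mm (synthetic)

diff = image - direct
mad  = np.mean(np.abs(diff))                         # mean absolute difference
sde  = np.std(diff, ddof=1)                          # standard deviation of error
rem  = 100 * mad / np.mean(direct)                   # relative error magnitude, %
rmse = np.sqrt(np.mean(diff ** 2))                   # root mean square error
print(f"MAD={mad:.2f} mm  SD={sde:.2f} mm  REM={rem:.2f}%  RMSE={rmse:.2f} mm")
```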
ERIC Educational Resources Information Center
Ari, Omer
2009-01-01
Fluency instruction has had limited effects on reading comprehension relative to reading rate and prosodic reading (Dowhower, 1987; Herman, 1985; National Institute of Child Health and Human Development, 2000a). More specific components (i.e., error detection) of comprehension may yield larger effects through exposure to a wider range of materials…
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Proposed military handbook for dynamic data acquisition and analysis - An invitation to review
NASA Technical Reports Server (NTRS)
Himelblau, Harry; Wise, James H.; Piersol, Allan G.; Grundvig, Max R.
1990-01-01
A draft Military Handbook prepared under the sponsorship of the USAF Space Division is presently being distributed throughout the U.S. for review by the aerospace community. This comprehensive document provides recommended guidelines for the acquisition and analysis of structural dynamics and aeroacoustic data, and is intended to reduce the errors and variability commonly found in flight, ground and laboratory dynamic test measurements. In addition to the usual variety of measurement problems encountered in the definition of dynamic loads, the development of design and test criteria, and the analysis of failures, special emphasis is given to certain state-of-the-art topics, such as pyroshock data acquisition and nonstationary random data analysis.
Population viability analysis with species occurrence data from museum collections.
Skarpaas, Olav; Stabbetorp, Odd E
2011-06-01
The most comprehensive data on many species come from scientific collections. Thus, we developed a method of population viability analysis (PVA) in which this type of occurrence data can be used. In contrast to classical PVA, our approach accounts for the inherent observation error in occurrence data and allows the estimation of the population parameters needed for viability analysis. We tested the sensitivity of the approach to spatial resolution of the data, length of the time series, sampling effort, and detection probability with simulated data and conducted PVAs for common, rare, and threatened species. We compared the results of these PVAs with results of standard method PVAs in which observation error is ignored. Our method provided realistic estimates of population growth terms and quasi-extinction risk in cases in which the standard method without observation error could not. For low values of any of the sampling variables we tested, precision decreased, and in some cases biased estimates resulted. The results of our PVAs with the example species were consistent with information in the literature on these species. Our approach may facilitate PVA for a wide range of species of conservation concern for which demographic data are lacking but occurrence data are readily available. ©2011 Society for Conservation Biology.
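For context, the quasi-extinction step that such parameter estimates feed into can be sketched with a standard count-based simulation. This simple version assumes the growth parameters have already been estimated and ignores the observation-error modelling that is the paper's actual contribution; all parameter values are illustrative.

```python
# Count-based quasi-extinction risk by Monte Carlo, given estimated growth parameters.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = -0.02, 0.15          # mean and SD of log population growth (assumed estimates)
N0, threshold, years, reps = 200, 20, 50, 10_000

extinct = 0
for _ in range(reps):
    logN = np.log(N0) + np.cumsum(rng.normal(mu, sigma, years))
    if logN.min() <= np.log(threshold):
        extinct += 1
print(f"quasi-extinction risk over {years} years: {extinct / reps:.2%}")
```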
Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.
Robinson, P J
1997-11-01
The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery
Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impact of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
Filipino, Indonesian and Thai Listening Test Errors
ERIC Educational Resources Information Center
Castro, C. S.; And Others
1975-01-01
This article reports on a study to identify listening, and aural comprehension difficulties experienced by students of English, specifically RELC (Regional English Language Centre in Singapore) course members. The most critical errors are discussed and conclusions about foreign language learning are drawn. (CLK)
Dimensional synthesis of a 3-DOF parallel manipulator with full circle rotation
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Wu, Nan; Zhong, Xueyong; Zhang, Biao
2015-07-01
Parallel robots are widely used in the academic and industrial fields. In spite of the numerous achievements in the design and dimensional synthesis of the low-mobility parallel robots, few research efforts are directed towards the asymmetric 3-DOF parallel robots whose end-effector can realize 2 translational and 1 rotational (2T1R) motion. In order to develop a manipulator with the capability of full circle rotation to enlarge the workspace, a new 2T1R parallel mechanism is proposed. The modeling approach and kinematic analysis of this proposed mechanism are investigated. Using the method of vector analysis, the inverse kinematic equations are established. This is followed by a rigorous proof that this mechanism attains an annular workspace through its circular rotation and 2-dimensional translations. Taking the first order perturbation of the kinematic equations, the error Jacobian matrix, which represents the mapping relationship between the error sources of geometric parameters and the end-effector position errors, is derived. With consideration of the constraint conditions of pressure angles and feasible workspace, the dimensional synthesis is conducted with a goal to minimize the global comprehensive performance index. The dimension parameters making the mechanism have optimal error mapping and kinematic performance are obtained through the optimization algorithm. All these research achievements lay the foundation for the prototype building of this kind of parallel robot.
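The first-order error-mapping idea can be sketched generically by numerically differentiating a forward-kinematics model with respect to its geometric parameters. A planar 2R arm is used below as a stand-in for the 2T1R mechanism, so the model and numbers are illustrative only.

```python
# Numerical error Jacobian: sensitivity of end-effector position to small
# geometric parameter errors, evaluated by first-order perturbation.
import numpy as np

def forward(params, q):
    L1, L2 = params
    t1, t2 = q
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.array([x, y])

params = np.array([0.30, 0.25])                 # nominal link lengths, m (assumed)
q = np.array([0.4, 0.9])                        # a joint configuration, rad

eps = 1e-7
J = np.column_stack([
    (forward(params + eps * np.eye(2)[i], q) - forward(params, q)) / eps
    for i in range(2)
])                                              # error Jacobian dX/dparams
dparams = np.array([0.5e-3, -0.3e-3])           # geometric source errors, m
print("end-effector position error (m):", J @ dparams)
```

Evaluating this mapping over the whole workspace gives a global error-sensitivity index of the kind minimized in the dimensional synthesis.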
Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)
NASA Technical Reports Server (NTRS)
Geist, J.
1972-01-01
A description is given of work performed on a program to develop an electrically calibrated detector (also called absolute radiometer, absolute detector, and electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.
Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A
2008-12-01
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.
Exploring Situational Awareness in Diagnostic Errors in Primary Care
Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.
2013-01-01
Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757
Patient safety education at Japanese medical schools: results of a nationwide survey.
Maeda, Shoichi; Kamishiraki, Etsuko; Starkey, Jay
2012-05-10
Patient safety education, including error prevention strategies and management of adverse events, has become a topic of worldwide concern. The importance of patient safety is also recognized in Japan following two serious medical accidents in 1999. Furthermore, educational curriculum guideline revisions in 2008 by the relevant Ministry of Education include patient safety as part of the core medical curriculum. However, little is known about patient safety education in Japanese medical schools, partly because a comprehensive study has not yet been conducted in this field. Therefore, we have conducted a nationwide survey in order to clarify the current status of patient safety education at medical schools in Japan. The response rate was 60.0% (n = 48/80). Ninety-eight percent of respondents (n = 47/48) reported integration of patient safety education into their curricula. Thirty-nine percent reported devoting less than five hours to the topic. All schools that teach patient safety reported use of lecture-based teaching methods, while few used alternative methods, such as role-playing or in-hospital training. Topics related to medical error theory and legal ramifications of error are widely taught, while practical topics related to error analysis, such as root cause analysis, are less often covered. Based on responses to our survey, most Japanese medical schools have incorporated the topic of patient safety into their curricula. However, the number of hours devoted to patient safety education is far from sufficient, with forty percent of medical schools devoting five hours or less to it. In addition, most medical schools employ only lecture-based learning, lacking diversity in teaching methods. Although most medical schools cover basic error theory, error analysis is taught at fewer schools. We still need to make improvements to our medical safety curricula. We believe that this study has implications for the rest of the world as a model of what is possible and a sounding board for what topics might be important.
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
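The Monte Carlo fitting idea, stripped to its essentials, can be sketched on a toy problem: smear the data within their errors to build pseudo-data replicas, fit each replica, and take the spread of fitted parameters as the uncertainty. A straight line stands in here for the PDF parametrization; the data are synthetic.

```python
# Toy Monte Carlo fitting: replica generation, per-replica fits, parameter spread.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.05, 0.6, 15)
y_err = 0.03 * np.ones_like(x)
y = 0.8 * x + 0.1 + rng.normal(0, y_err)         # "measured" asymmetries (synthetic)

fits = []
for _ in range(2000):
    y_rep = y + rng.normal(0, y_err)             # pseudo-data replica
    fits.append(np.polyfit(x, y_rep, 1))         # fit each replica
fits = np.array(fits)
slope, intercept = fits.mean(axis=0)
slope_err, intercept_err = fits.std(axis=0)
print(f"slope = {slope:.3f} +/- {slope_err:.3f}, "
      f"intercept = {intercept:.3f} +/- {intercept_err:.3f}")
```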
Application of parameter estimation to aircraft stability and control: The output-error approach
NASA Technical Reports Server (NTRS)
Maine, Richard E.; Iliff, Kenneth W.
1986-01-01
The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
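A minimal sketch of the output-error approach on a toy first-order system: simulate the model response for candidate parameters, compare it with the measured response, and minimize the sum of squared output residuals. The single-state model, input, and noise level are assumptions standing in for the aircraft equations of motion.

```python
# Output-error parameter estimation for a toy first-order dynamic system.
import numpy as np
from scipy.optimize import least_squares

dt, n = 0.05, 200
t = np.arange(n) * dt
u = np.sin(0.8 * t)                                   # control input (e.g., elevator)

def simulate(params):
    a, b = params                                     # "stability" and "control" derivatives
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])  # simple Euler integration
    return x

rng = np.random.default_rng(6)
z = simulate([-1.2, 0.7]) + rng.normal(0, 0.01, n)    # "flight" measurements (synthetic)

res = least_squares(lambda p: simulate(p) - z, x0=[-0.5, 0.2])
print("estimated derivatives:", np.round(res.x, 3))
```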
Ross, Elliott D; Monnot, Marilee
2011-04-01
The Aprosodia Battery was developed to distinguish different patterns of affective-prosodic deficits in patients with left versus right brain damage by using affective utterances with incrementally reduced verbal-articulatory demands. It has also been used to assess affective-prosodic performance in various clinical groups, including patients with schizophrenia, PTSD, multiple sclerosis, alcohol abuse and Alzheimer disease and in healthy adults, as means to explore maturational-aging effects. To date, all studies using the Aprosodia Battery have yielded statistically robust results. This paper describes an extensive, quantitative error analysis using previous results from the Aprosodia Battery in patients with left and right brain damage, age-equivalent controls (old adults), and a group of young adults. This inductive analysis was performed to address three major issues in the literature: (1) sex and (2) maturational-aging effects in comprehending affective prosody and (3) differential hemispheric lateralization of emotions. We found no overall sex effects for comprehension of affective prosody. There were, however, scattered sex effects related to a particular affect, suggesting that these differences were related to cognitive appraisal rather than primary perception. Results in the brain damaged groups did not support the Valence Hypothesis of emotional lateralization but did support the Right Hemisphere Hypothesis of emotional lateralization. When comparing young versus old adults, a robust maturational-aging effect was observed in overall error rates and in the distribution of errors across affects. This effect appears to be mediated, in part, by cognitive appraisal, causing an alteration in the salience of different affective-prosodic stimuli with increasing age. In addition, the maturational-aging effects lend support for the Emotion-Type hypothesis of emotional lateralization and the "classic aging effect" that is due primarily to decline of right hemisphere cognitive functions in senescence. The results of our inductive analysis may help direct future deductive research efforts, exploring the neuropsychology of emotional communication, by taking into account the potentially confounding influence of (1) methodological differences involving construction of test stimuli and assessment procedures, (2) developmental, maturational and aging effects related to cognitive appraisal and (3) whether a stimulus has a primary or social-emotional bias. Published by Elsevier Ltd.
Shohaimi, Shamarina; Wei, Wong Yoke; Shariff, Zalilah Mohd
2014-01-01
Comprehensive feeding practices questionnaire (CFPQ) is an instrument specifically developed to evaluate parental feeding practices. It has been confirmed among children in America and applied to populations in France, Norway, and New Zealand. In order to extend the application of CFPQ, we conducted a factor structure validation of the translated version of CFPQ (CFPQ-M) using confirmatory factor analysis among mothers of primary school children (N = 397) in Malaysia. Several items were modified for cultural adaptation. Of 49 items, 39 items with loading factors >0.40 were retained in the final model. The confirmatory factor analysis revealed that the final model (twelve-factor model with 39 items and 2 error covariances) displayed the best fit for our sample (Chi-square = 1147; df = 634; P < 0.05; CFI = 0.900; RMSEA = 0.045; SRMR = 0.0058). The instrument with some modifications was confirmed among mothers of school children in Malaysia. The present study extends the usability of the CFPQ and enables researchers and parents to better understand the relationships between parental feeding practices and related problems such as childhood obesity.
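The reported RMSEA can be checked directly from the quantities quoted in the abstract, using the usual formula RMSEA = sqrt(max(chi-square minus df, 0) / (df (N - 1))); CFI would additionally require the baseline model chi-square, which the abstract does not report.

```python
# Worked check of the reported RMSEA from chi-square, df, and sample size.
import math

chi2, df, N = 1147, 634, 397
rmsea = math.sqrt(max(chi2 - df, 0) / (df * (N - 1)))
print(f"RMSEA = {rmsea:.3f}")   # ~0.045, matching the value reported above
```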
Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B
2015-09-18
There has been a recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves. Copyright © 2015 Elsevier Ltd. All rights reserved.
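The under-reporting effect can be illustrated with a small simulation: generate impacts with an assumed "true" logistic risk curve, hide a fraction of the concussions, and compare the fitted curves. All numbers below are illustrative assumptions, not the paper's data or its Bayesian procedure; the point is that a naive fit on under-reported labels overstates the acceleration needed for a given risk, which is what correcting for under-reporting undoes.

```python
# Simulation sketch: effect of concussion under-reporting on a fitted risk curve.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 20_000
rot_acc = rng.uniform(0.5, 9.0, n)                        # rotational acceleration, krad/s^2
true_risk = 1.0 / (1.0 + np.exp(-(rot_acc - 6.0) / 0.8))  # assumed "true" risk curve
concussed = rng.random(n) < true_risk
reported = concussed & (rng.random(n) > 0.5)              # 50% of concussions go unreported

for name, labels in [("all concussions labeled", concussed),
                     ("50% unreported", reported)]:
    m = LogisticRegression(C=1e6).fit(rot_acc.reshape(-1, 1), labels)
    # acceleration at which the fitted curve crosses 10% risk
    a10 = (np.log(0.1 / 0.9) - m.intercept_[0]) / m.coef_[0][0]
    print(f"{name}: 10% risk at ~{a10:.1f} krad/s^2")
```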
Mendez, M F
2001-02-01
After a right temporoparietal stroke, a left-handed man lost the ability to understand speech and environmental sounds but developed greater appreciation for music. The patient had preserved reading and writing but poor verbal comprehension. Slower speech, single syllable words, and minimal written cues greatly facilitated his verbal comprehension. On identifying environmental sounds, he made predominant acoustic errors. Although he failed to name melodies, he could match, describe, and sing them. The patient had normal hearing except for presbyacusis, right-ear dominance for phonemes, and normal discrimination of basic psychoacoustic features and rhythm. Further testing disclosed difficulty distinguishing tone sequences and discriminating two clicks and short-versus-long tones, particularly in the left ear. Together, these findings suggest impairment in a direct route for temporal analysis and auditory word forms in his right hemisphere to Wernicke's area in his left hemisphere. The findings further suggest a separate and possibly rhythm-based mechanism for music recognition.
Analysis of a Stabilized CNLF Method with Fast Slow Wave Splittings for Flow Problems
Jiang, Nan; Tran, Hoang A.
2015-04-01
In this work, we study Crank-Nicolson leap-frog (CNLF) methods with fast-slow wave splittings for Navier-Stokes equations (NSE) with a rotation/Coriolis force term, which is a simplification of geophysical flows. We propose a new stabilized CNLF method where the added stabilization completely removes the method's CFL time step condition. A comprehensive stability and error analysis is given. We also prove that for Oseen equations with the rotation term, the unstable mode (for which u^{n+1} + u^{n-1} ≡ 0) of CNLF is asymptotically stable. Numerical results are provided to verify the stability and the convergence of the methods.
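For readers unfamiliar with the scheme, a generic CNLF step for a linear split system is sketched below; this is the textbook form, not necessarily the authors' stabilized variant, and the operators A and Λ are placeholders for the slow and fast parts of the splitting.

```latex
% CNLF step for u_t + A u + \Lambda u = f, with the slow part A treated by
% Crank--Nicolson and the fast (rotation/wave) part \Lambda by leap-frog;
% k denotes the time step.
\[
  \frac{u^{n+1} - u^{n-1}}{2k}
  + A\,\frac{u^{n+1} + u^{n-1}}{2}
  + \Lambda\, u^{n}
  = f^{n}.
\]
% The spurious leap-frog mode is the component satisfying
% u^{n+1} + u^{n-1} \equiv 0, the mode whose asymptotic stability is
% established above for Oseen equations with the rotation term.
```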
ERIC Educational Resources Information Center
Roussel, F.
Progress in the teaching of oral comprehension depends partly on the isolation of factors which block comprehension. Research in conjunction with an experimental course in English as a second language led to the definition of errors due to: (1) insufficient knowledge of the language and the cultural context of its use, and (2) a failure to…
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure, and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is conducted to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
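A minimal simulation sketch of such a model is given below, assuming a power-law transformed time scale Λ(t) = t^b, a linear drift, Brownian diffusion, and additive Gaussian measurement error; the parameter values, failure threshold, and time grid are illustrative assumptions, and the first-hitting-time distribution is estimated empirically rather than from the closed-form CDF.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not taken from the paper)
eta, sigma = 2.0, 0.5          # drift and diffusion of the Wiener process
sigma_eps = 0.3                # std. dev. of the additive measurement error
b = 1.2                        # power-law transformed time scale Lambda(t) = t**b
threshold = 20.0               # failure threshold on the true degradation
t = np.linspace(0.0, 15.0, 301)
lam = t**b
dlam = np.diff(lam)

n_units = 2000
# True degradation: eta*Lambda(t) + sigma*B(Lambda(t)), simulated incrementally
increments = eta * dlam + sigma * rng.normal(0.0, np.sqrt(dlam), (n_units, dlam.size))
x_true = np.concatenate([np.zeros((n_units, 1)), np.cumsum(increments, axis=1)], axis=1)
# Observations are the true path corrupted by measurement error
y_obs = x_true + rng.normal(0.0, sigma_eps, x_true.shape)

# Empirical first hitting time of the threshold (based on the true path)
hit_idx = np.argmax(x_true >= threshold, axis=1)
failed = x_true.max(axis=1) >= threshold
fht = t[hit_idx[failed]]
print("median FHT:", np.median(fht))
```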
Fabbretti, G
2010-06-01
Because of its complex nature, surgical pathology practice is prone to error. In this report, we describe our methods for reducing error as much as possible during the pre-analytical and analytical phases. This was achieved by revising procedures, and by using computer technology and automation. Most mistakes are the result of human error in the identification and matching of patient and samples. To avoid faulty data interpretation, we employed a new comprehensive computer system that acquires all patient ID information directly from the hospital's database with a remote order entry; it also provides label and request forms via the Web, where clinical information is required before sending the sample. Both patient and sample are identified directly and immediately at the site where the surgical procedures are performed. Barcode technology is used to input information at every step, and automation is used for sample blocks and slides to avoid errors that occur when information is recorded or transferred by hand. Quality control checks occur at every step of the process to ensure that none of the steps are left to chance and that no phase is dependent on a single operator. The system also provides statistical analysis of errors so that new strategies can be implemented to avoid repetition. In addition, the staff receives frequent training on avoiding errors and new developments. The results have been promising, with a very low error rate (0.27%); none of these errors compromised patient health, and all were detected before release of the diagnostic report.
Nogueira, Débora Manzano; Cárnio, Maria Silvia
2018-01-01
Purpose Prepare a Speech-language Pathology Program for Reading Comprehension and Orthography and verify its effects on the reading comprehension and spelling of students with Developmental Dyslexia. Methods The study sample was composed of eleven individuals (eight males), diagnosed with Developmental Dyslexia, aged 9-11 years. All participants underwent a Speech-language Pathology Program in Reading Comprehension and Orthography comprising 16 individual weekly sessions. In each session, tasks of reading comprehension of texts and orthography were developed. At the beginning and end of the Program, the participants were submitted to a specific assessment (pre- and post-test). Results The individuals presented difficulty in reading comprehension, but the Cloze technique proved to be a useful remediation tool, and significant improvement in their performance was observed in the post-test evaluation. The dyslexic individuals showed poor performance for their educational level in the spelling assessment. At the end of the program, their performance improved but remained below the expected level, showing the same error pattern at the pre- and post-tests, with errors in both natural and arbitrary spelling. Conclusion The proposed Speech-language Pathology Program for Reading Comprehension and Orthography produced positive effects on the reading comprehension, spelling, and motivation to read and write of the participants. This study presents an unprecedented contribution by proposing joint stimulation of reading and writing by means of a program that is easy to apply and analyze in individuals with Developmental Dyslexia.
A method on error analysis for large-aperture optical telescope control system
NASA Astrophysics Data System (ADS)
Su, Yanrui; Wang, Qiang; Yan, Fabao; Liu, Xiang; Huang, Yongmei
2016-10-01
In large-aperture optical telescopes, arc-second-level jitters appear in the elevation axis of the control system, unlike the azimuth axis, when operating at different speeds during acquisition, tracking, and pointing, especially in low-speed mode. The jitters are closely related to the working speed of the elevation axis and reduce the accuracy and low-speed stability of the telescope. By collecting a large amount of measured elevation data, we analyzed the jitters in the time, frequency, and space domains. The relation between the jitter points, the commanded elevation speed, and the corresponding space angle shows that the jitters behave as a periodic disturbance in the space domain, with a spatial period of approximately 79.1″. Simulation and comparative analysis of candidate disturbance sources, such as PWM power-stage output disturbance, torque (acceleration) disturbance, speed-feedback disturbance, and position-feedback disturbance, showed that the spatially periodic disturbance persists in the elevation performance, which led us to infer that the problem lies in the angle measurement unit. The telescope employs a 24-bit photoelectric encoder whose grating angular pitch, the angle corresponding to one period of the subdivision signal, can be calculated as 79.1016″, approximately equal to the spatial period of the jitters. Therefore, the elevation axis is affected by subdivision errors whose period is identical to the encoder grating period. Comprehensive consideration and mathematical analysis determined that the DC component of the subdivision error causes the jitters, which was verified in practical engineering. The approach of analyzing error sources in the time, frequency, and space domains is a useful guide for locating disturbance sources in large-aperture optical telescopes.
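The arithmetic behind the 79.1″ figure can be reproduced as follows, under the assumption (inferred from the reported value, not stated in the abstract) that the 24-bit encoder combines a 2^14-line grating with 2^10 electronic subdivision per grating period.

```python
arcsec_per_rev = 360 * 3600       # 1,296,000 arcseconds in a full revolution
grating_lines = 2**14             # assumed number of grating periods per revolution
period = arcsec_per_rev / grating_lines
print(period)                     # 79.1015625 arcsec -- the ~79.1" spatial period of the jitters

# The full 24-bit resolution then comes from 2**10 subdivision of each grating period
print(arcsec_per_rev / 2**24)     # ~0.077 arcsec per least significant bit
```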
Prediction, Error, and Adaptation during Online Sentence Comprehension
ERIC Educational Resources Information Center
Fine, Alex Brabham
2013-01-01
A fundamental challenge for human cognition is perceiving and acting in a world in which the statistics that characterize available sensory data are non-stationary. This thesis focuses on this problem specifically in the domain of sentence comprehension, where linguistic variability poses computational challenges to the processes underlying…
Uncovering the requirements of cognitive work.
Roth, Emilie M
2008-06-01
In this article, the author provides an overview of cognitive analysis methods and how they can be used to inform system analysis and design. Human factors has seen a shift toward modeling and support of cognitively intensive work (e.g., military command and control, medical planning and decision making, supervisory control of automated systems). Cognitive task analysis and cognitive work analysis methods extend traditional task analysis techniques to uncover the knowledge and thought processes that underlie performance in cognitively complex settings. The author reviews the multidisciplinary roots of cognitive analysis and the variety of cognitive task analysis and cognitive work analysis methods that have emerged. Cognitive analysis methods have been used successfully to guide system design, as well as development of function allocation, team structure, and training, so as to enhance performance and reduce the potential for error. A comprehensive characterization of cognitive work requires two mutually informing analyses: (a) examination of domain characteristics and constraints that define cognitive requirements and challenges and (b) examination of practitioner knowledge and strategies that underlie both expert and error-vulnerable performance. A variety of specific methods can be adapted to achieve these aims within the pragmatic constraints of particular projects. Cognitive analysis methods can be used effectively to anticipate cognitive performance problems and specify ways to improve individual and team cognitive performance (be it through new forms of training, user interfaces, or decision aids).
MEADERS: Medication Errors and Adverse Drug Event Reporting system.
Zafar, Atif
2007-10-11
The Agency for Healthcare Research and Quality (AHRQ) recently funded the PBRN Resource Center to develop a system for reporting ambulatory medication errors. Our goal was to develop a usable system that practices could use internally to track errors. We initially performed a comprehensive literature review of what is currently available. Then, using a combination of expert panel meetings and iterative development, we designed an instrument for ambulatory medication error reporting and created a reporting system based both in MS Access 2003 and on the web using MS ASP.NET 2.0 technologies.
Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke
2011-08-01
TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. (2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.
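The inflation from repeated interim analyses mentioned above can be reproduced with a generic Monte Carlo sketch: testing null data at several interim looks against an unadjusted critical value pushes the experiment-wise type I error well above the nominal level. This illustrates the phenomenon only; it is not the trial's actual group-sequential boundaries, futility rules, or continuity corrections.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

n_total, looks, alpha = 1908, 5, 0.05
crit = norm.ppf(1 - alpha / 2)                      # unadjusted two-sided critical value
look_sizes = np.arange(1, looks + 1) * n_total // looks

reps, rejected = 10000, 0
for _ in range(reps):
    z_data = rng.normal(0.0, 1.0, n_total)          # data generated under the null
    for n in look_sizes:
        z_stat = z_data[:n].mean() * np.sqrt(n)     # z statistic at this interim look
        if abs(z_stat) > crit:
            rejected += 1
            break

print("experiment-wise type I error:", rejected / reps)   # well above the nominal 0.05
```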
Dynamic safety assessment of natural gas stations using Bayesian network.
Zarei, Esmaeil; Azadeh, Ali; Khakzad, Nima; Aliabadi, Mostafa Mirzaei; Mohammadfam, Iraj
2017-01-05
Pipelines are one of the most popular and effective ways of transporting hazardous materials, especially natural gas. However, the rapid development of gas pipelines and stations in urban areas has introduced a serious threat to public safety and assets. Although different methods have been developed for risk analysis of gas transportation systems, a comprehensive methodology for risk analysis is still lacking, especially in natural gas stations. The present work is aimed at developing a dynamic and comprehensive quantitative risk analysis (DCQRA) approach for accident scenario and risk modeling of natural gas stations. In this approach, an FMEA is used for hazard analysis, while a bow-tie diagram and a Bayesian network are employed to model the worst-case accident scenario and to assess the risks. The results indicated that failure of the regulator system was the worst-case accident scenario, with human error as the largest contributing factor. Thus, in the risk management plan of natural gas stations, priority should be given to the most probable root events and main contributing factors identified in the present study, in order to reduce the occurrence probability of the accident scenarios and thus alleviate the risks. Copyright © 2016 Elsevier B.V. All rights reserved.
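As a toy illustration of the bow-tie/fault-tree layer of such an analysis (not the study's actual model or probabilities), the sketch below combines hypothetical basic-event probabilities through OR and AND gates into a top-event probability; a Bayesian network would additionally allow these probabilities to be updated dynamically as evidence arrives.

```python
# Hypothetical basic-event annual probabilities (illustrative values only)
p_regulator_fail = 0.02     # mechanical failure of the pressure regulator
p_human_error    = 0.05     # operator error during maintenance or operation
p_safety_valve   = 0.01     # relief/safety valve fails on demand

def p_or(*ps):
    """Probability that at least one of several independent events occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

# Top event: overpressure release = (regulator fails OR human error) AND valve fails
p_initiating = p_or(p_regulator_fail, p_human_error)
p_top = p_initiating * p_safety_valve
print(p_initiating, p_top)
```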
Analyzing average and conditional effects with multigroup multilevel structural equation models
Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf
2014-01-01
Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668
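In generic notation (which may differ from the authors'), the linear effect function and the average effect it implies can be written as follows, with X the cluster-level treatment indicator and Z the vector of covariates.

```latex
% Generalized ANCOVA with a linear effect function: the conditional effect of
% treatment given Z = z is itself a linear function of z,
\[
  g_1(z) \;=\; E(Y \mid X=1, Z=z) - E(Y \mid X=0, Z=z)
          \;=\; \gamma_0 + \gamma_1^{\top} z ,
\]
% and the average treatment effect follows by averaging over the covariate
% distribution,
\[
  \mathrm{ATE} \;=\; E\!\left[g_1(Z)\right] \;=\; \gamma_0 + \gamma_1^{\top} E(Z).
\]
```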
RCT: Module 2.03, Counting Errors and Statistics, Course 8768
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillmer, Kurt T.
2017-04-01
Radiological sample analysis involves the observation of a random process that may or may not occur and an estimation of the amount of radioactive material present based on that observation. Across the country, radiological control personnel are using the activity measurements to make decisions that may affect the health and safety of workers at those facilities and their surrounding environments. This course will present an overview of measurement processes, a statistical evaluation of both measurements and equipment performance, and some actions to take to minimize the sources of error in count room operations. This course will prepare the student with the skills necessary for radiological control technician (RCT) qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and by providing in-the-field skills.
Analyzing thematic maps and mapping for accuracy
Rosenfield, G.H.
1982-01-01
Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by either the row totals or the column totals from the original classification error matrices. In hypothesis testing, when the results of tests of multiple sample cases prove to be significant, some form of statistical test must be used to separate any results that differ significantly from the others. In the past, many analyses of the data in this error matrix were made by comparing the relative magnitudes of the percentage of correct classifications, for either individual categories, the entire map, or both. More rigorous analyses have used data transformations and (or) two-way classification analysis of variance. A more sophisticated step of data analysis would be to use the entire classification error matrices with the methods of discrete multivariate analysis or of multivariate analysis of variance.
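A minimal numeric example of the error matrix described above, with made-up counts for a three-category map: rows hold the interpretation, columns the verification, and the commission and omission error rates fall out of the row and column margins.

```python
import numpy as np

# Rows: interpreted class, columns: verified (reference) class -- toy counts
cm = np.array([[50,  4,  6],
               [ 3, 40,  7],
               [ 2,  5, 33]])

total = cm.sum()
correct = np.trace(cm)
overall_accuracy = correct / total

# Errors of commission: off-diagonal counts in each row (interpretation)
commission = 1 - np.diag(cm) / cm.sum(axis=1)
# Errors of omission: off-diagonal counts in each column (verification)
omission = 1 - np.diag(cm) / cm.sum(axis=0)

print(overall_accuracy, commission, omission)
```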
Quasi-eccentricity error modeling and compensation in vision metrology
NASA Astrophysics Data System (ADS)
Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin
2018-04-01
Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error and needs to be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error becomes a quasi-eccentricity error under the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point toward the true projection of the circle center. Both simulations and real experiments confirm the effectiveness of the proposed method in several vision applications.
Partial polarization: a comprehensive student exercise
NASA Astrophysics Data System (ADS)
Topasna, Gregory A.; Topasna, Daniela M.
2015-10-01
We present a comprehensive student exercise in partial polarization. Students are first introduced to the concept of partial polarization using the Fresnel equations. Next, MATHCAD is used to compute and graph the reflectance for dielectric materials. The students then design and construct a simple, easy-to-use collimated light source for their experiment, which is performed on an optical breadboard using optical components typically found in an optics lab above the introductory level. The students obtain reflection data that are compared with their model by a nonlinear least-squares fit using EXCEL. Sources of error and uncertainty are discussed, and students present a final written report. In this one exercise students learn how an experiment is constructed "from the ground up". They gain practical experience in data modeling and analysis, working with optical equipment, machining and construction, and preparing a final presentation.
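For reference, the standard Fresnel reflectances and the resulting degree of polarization that such an exercise computes are shown below, assuming non-absorbing dielectrics with refractive indices n_1 and n_2; this is the textbook form rather than the specific expressions used in the exercise.

```latex
% Fresnel reflectances for s- and p-polarized light at a dielectric interface
% (n_1 -> n_2, with angles related by Snell's law n_1 \sin\theta_i = n_2 \sin\theta_t):
\[
  R_s = \left| \frac{n_1\cos\theta_i - n_2\cos\theta_t}
                    {n_1\cos\theta_i + n_2\cos\theta_t} \right|^2 ,
  \qquad
  R_p = \left| \frac{n_2\cos\theta_i - n_1\cos\theta_t}
                    {n_2\cos\theta_i + n_1\cos\theta_t} \right|^2 .
\]
% Degree of polarization of the partially polarized reflected beam:
\[
  P = \frac{R_s - R_p}{R_s + R_p}.
\]
```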
Modeling and characterization of multipath in global navigation satellite system ranging signals
NASA Astrophysics Data System (ADS)
Weiss, Jan Peter
The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the earth in real time, regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
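A standard single-reflector geometry (not taken from the dissertation) makes both observations quantitative: the extra path length is bounded by the reflector's distance below the antenna, and its time derivative, driven by platform dynamics, sets the frequency of the multipath error.

```latex
% Extra path length for a specular reflection from a horizontal surface a
% height h below the antenna, with satellite elevation angle E and carrier
% wavelength \lambda:
\[
  \Delta = 2h\,\sin E ,
  \qquad
  \text{multipath fringe rate} \;\approx\;
  \frac{1}{\lambda}\frac{d\Delta}{dt}
  = \frac{2\dot{h}\,\sin E + 2h\,\dot{E}\cos E}{\lambda}.
\]
```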
Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen
Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.
Addressing Misconceptions in Geometry through Written Error Analyses
ERIC Educational Resources Information Center
Kembitzky, Kimberle A.
2009-01-01
This study examined the improvement of students' comprehension of geometric concepts through analytical writing about their own misconceptions using a reflective tool called an ERNIe (acronym for ERror aNalyIsis). The purpose of this study was to determine whether the ERNIe process could be used to correct geometric misconceptions, as well as how…
Distortions in memory for visual displays
NASA Technical Reports Server (NTRS)
Tversky, Barbara
1989-01-01
Systematic errors in perception and memory present a challenge to theories of perception and memory and to applied psychologists interested in overcoming them as well. A number of systematic errors in memory for maps and graphs are reviewed, and they are accounted for by an analysis of the perceptual processing presumed to occur in comprehension of maps and graphs. Visual stimuli, like verbal stimuli, are organized in comprehension and memory. For visual stimuli, the organization is a consequence of perceptual processing, which is bottom-up or data-driven in its earlier stages, but top-down and affected by conceptual knowledge later on. Segregation of figure from ground is an early process, and figure recognition later; for both, symmetry is a rapidly detected and ecologically valid cue. Once isolated, figures are organized relative to one another and relative to a frame of reference. Both perceptual (e.g., salience) and conceptual factors (e.g., significance) seem likely to affect selection of a reference frame. Consistent with the analysis, subjects perceived and remembered curves in graphs and rivers in maps as more symmetric than they actually were. Symmetry, useful for detecting and recognizing figures, distorts map and graph figures alike. Top-down processes also seem to operate in that calling attention to the symmetry vs. asymmetry of a slightly asymmetric curve yielded memory errors in the direction of the description. Conceptual frame of reference effects were demonstrated in memory for lines embedded in graphs. In earlier work, the orientation of map figures was distorted in memory toward horizontal or vertical. In recent work, graph lines, but not map lines, were remembered as closer to an imaginary 45 deg line than they had been. Reference frames are determined by both perceptual and conceptual factors, leading to selection of the canonical axes as a reference frame in maps, but selection of the imaginary 45 deg as a reference frame in graphs.
Quantifying uncertainty in carbon and nutrient pools of coarse woody debris
NASA Astrophysics Data System (ADS)
See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.
2016-12-01
Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
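A minimal Monte Carlo propagation of the kind described, for a single log and the carbon pool only; the use of Smalian's formula, the treatment of the collapse ratio as a simple volume multiplier, and all distribution parameters are illustrative assumptions rather than values from this study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative measurement and sampling distributions for one decayed log
d1 = rng.normal(0.30, 0.01, n)          # end diameter 1 (m), measurement error
d2 = rng.normal(0.22, 0.01, n)          # end diameter 2 (m)
length = rng.normal(4.0, 0.05, n)       # log length (m)
collapse = rng.normal(0.85, 0.05, n)    # cross-sectional collapse ratio (height/width)
density = rng.normal(250.0, 40.0, n)    # wood density for the decay class (kg/m^3)
c_conc = rng.normal(0.48, 0.02, n)      # carbon concentration (g C / g wood)

# Smalian's formula: volume = length * (A1 + A2) / 2, scaled here by the collapse ratio
a1 = np.pi * (d1 / 2) ** 2
a2 = np.pi * (d2 / 2) ** 2
volume = collapse * length * (a1 + a2) / 2.0

carbon_kg = volume * density * c_conc
print(np.mean(carbon_kg), np.percentile(carbon_kg, [2.5, 97.5]))
```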
Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya
2016-01-01
The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era. PMID:27649151
ERIC Educational Resources Information Center
Savaiano, Mackenzie E.; Hatton, Deborah D.
2013-01-01
Introduction: This study evaluated whether children with visual impairments who receive repeated reading instruction exhibit an increase in their oral reading rate and comprehension and a decrease in oral reading error rates. Methods: A single-subject, changing-criterion design replicated across three participants was used to demonstrate the…
Metacognition and proofreading: the roles of aging, motivation, and interest.
Hargis, Mary B; Yue, Carole L; Kerr, Tyson; Ikeda, Kenji; Murayama, Kou; Castel, Alan D
2017-03-01
The current study examined younger and older adults' error detection accuracy, prediction calibration, and postdiction calibration on a proofreading task, to determine if age-related differences would be present in this type of common error detection task. Participants were given text passages, and were first asked to predict the percentage of errors they would detect in the passage. They then read the passage and circled errors (which varied in complexity and locality), and made postdictions regarding their performance, before repeating this with another passage and answering a comprehension test of both passages. There were no age-related differences in error detection accuracy, text comprehension, or metacognitive calibration, though participants in both age groups were overconfident overall in their metacognitive judgments. Both groups gave similar ratings of motivation to complete the task. The older adults rated the passages as more interesting than younger adults did, although this level of interest did not appear to influence error-detection performance. The age equivalence in both proofreading ability and calibration suggests that the ability to proofread text passages and the associated metacognitive monitoring used in judging one's own performance are maintained in aging. These age-related similarities persisted when younger adults completed the proofreading tasks on a computer screen, rather than with paper and pencil. The findings provide novel insights regarding the influence that cognitive aging may have on metacognitive accuracy and text processing in an everyday task.
Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng
2016-01-01
In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data. PMID:27958387
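As a concrete example of one of the better-performing methods named above, here is a generic sketch of probabilistic quotient normalization (PQN); it is a textbook implementation under the usual assumptions (median spectrum as reference, no prior integral normalization), not the code used by the MetaPre web tool.

```python
import numpy as np

def pqn(intensities, reference=None):
    """Probabilistic quotient normalization of a samples-by-features matrix.

    Each sample is divided by the median of its feature-wise quotients
    against a reference spectrum (by default the median across samples).
    """
    x = np.asarray(intensities, dtype=float)
    if reference is None:
        reference = np.median(x, axis=0)
    quotients = x / reference
    factors = np.median(quotients, axis=1)
    return x / factors[:, None]

# Toy usage: the second sample is a 2x-diluted copy of the first and is rescaled back
data = np.array([[100.0, 50.0, 10.0],
                 [ 50.0, 25.0,  5.0]])
print(pqn(data))
```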
Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C
2018-04-01
Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and the overall system instrument targeting error to be 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Simulation and experimental research of 1MWe solar tower power plant in China
NASA Astrophysics Data System (ADS)
Yu, Qiang; Wang, Zhifeng; Xu, Ershu
2016-05-01
The establishment of a reliable simulation system for a solar tower power plant can greatly increase the economic and safety performance of the whole system. In this paper, a dynamic model of the 1MWe Solar Tower Power Plant at Badaling in Beijing is developed based on the "STAR-90" simulation platform, including the heliostat field, the central receiver system (water/steam), etc. The dynamic behavior of the global CSP plant can be simulated. In order to verify the validity of the simulation system, a complete experimental process was synchronously simulated by repeating the same operating steps on the simulation platform, including the locations and number of heliostats, the mass flow of the feed water, etc. From the simulation and experimental results, several important parameters are selected for a detailed comparison. The results show good alignment between the simulations and the experimental results, with an error range that is acceptable considering the error of the models. Finally, a comprehensive analysis of the error sources is carried out based on the comparative results.
Thermally stratified squeezed flow between two vertical Riga plates with no slip conditions
NASA Astrophysics Data System (ADS)
Farooq, M.; Mansoor, Zahira; Ijaz Khan, M.; Hayat, T.; Anjum, A.; Mir, N. A.
2018-04-01
This paper examines the mixed convective squeezing flow of nanomaterials between two vertical plates, one of which is a Riga plate, embedded in a thermally stratified medium subject to convective boundary conditions. Heat transfer features are elaborated with viscous dissipation. Single-wall and multi-wall carbon nanotubes are taken as nanoparticles to form a homogeneous solution in water. A non-linear system of differential equations is obtained for the considered flow by using suitable transformations. Convergence analysis for velocity and temperature is computed and discussed explicitly through BVPh 2.0. Residual errors are also computed by BVPh 2.0 for the dimensionless governing equations. We introduce two undetermined convergence-control parameters, ℏ_θ and ℏ_f, to obtain the lowest total error. The average residual error for the k-th-order approximation is given in a table. The effects of different flow variables on the temperature and velocity distributions are sketched graphically and discussed comprehensively. Furthermore, the skin friction coefficient and the Nusselt number are also analyzed through graphical data.
Safety coaches in radiology: decreasing human error and minimizing patient harm.
Dickerson, Julie M; Koch, Bernadette L; Adams, Janet M; Goodfriend, Martha A; Donnelly, Lane F
2010-09-01
Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program.
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
ERIC Educational Resources Information Center
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
da Silva, Simone Albino; Baitelo, Tamara Cristina; Fracolli, Lislaine Aparecida
2015-01-01
To evaluate the attributes of primary health care regarding access, longitudinality, comprehensiveness, coordination, family counseling, and community counseling in the Family Health Strategy, triangulating and comparing the views of stakeholders involved in the care process. This was an evaluative study with a quantitative approach and cross-sectional design. Data were collected using the Primary Care Assessment Tool in interviews with 527 adult clients, 34 health professionals, and 330 parents of children up to two years old, related to 33 family health teams in eleven municipalities. Analysis was conducted in the Statistical Package for the Social Sciences software, with a 95% confidence interval and an error of 0.1. The three groups assessed first-contact access (accessibility) with low scores. Professionals evaluated the other attributes with high scores. Clients assigned low scores to the attributes community counseling, family counseling, comprehensiveness (services rendered), and comprehensiveness (available services). The quality of performance self-reported by the professionals of the Family Health Strategy is not perceived or valued by clients, and the actions and services may have been developed inappropriately or insufficiently to be apprehended through the clients' experience.
Beyond Readability: Investigating Coherence of Clinical Text for Consumers
Hetzel, Scott; Dalrymple, Prudence; Keselman, Alla
2011-01-01
Background A basic tenet of consumer health informatics is that understandable health resources empower the public. Text comprehension holds great promise for helping to characterize consumer problems in understanding health texts. The need for efficient ways to assess consumer-oriented health texts and the availability of computationally supported tools led us to explore the effect of various text characteristics on readers’ understanding of health texts, as well as to develop novel approaches to assessing these characteristics. Objective The goal of this study was to compare the impact of two different approaches to enhancing readability, and three interventions, on individuals’ comprehension of short, complex passages of health text. Methods Participants were 80 university staff, faculty, or students. Each participant was asked to “retell” the content of two health texts: one a clinical trial in the domain of diabetes mellitus, and the other typical Visit Notes. These texts were transformed for the intervention arms of the study. Two interventions provided terminology support via (1) standard dictionary or (2) contextualized vocabulary definitions. The third intervention provided coherence improvement. We assessed participants’ comprehension of the clinical texts through propositional analysis, an open-ended questionnaire, and analysis of the number of errors made. Results For the clinical trial text, the effect of text condition was not significant in any of the comparisons, suggesting no differences in recall, despite the varying levels of support (P = .84). For the Visit Note, however, the difference in the median total propositions recalled between the Coherent and the (Original + Dictionary) conditions was significant (P = .04). This suggests that participants in the Coherent condition recalled more of the original Visit Notes content than did participants in the Original and the Dictionary conditions combined. However, no difference was seen between (Original + Dictionary) and Vocabulary (P = .36) nor Coherent and Vocabulary (P = .62). No statistically significant effect of any document transformation was found either in the open-ended questionnaire (clinical trial: P = .86, Visit Note: P = .20) or in the error rate (clinical trial: P = .47, Visit Note: P = .25). However, post hoc power analysis suggested that increasing the sample size by approximately 6 participants per condition would result in a significant difference for the Visit Note, but not for the clinical trial text. Conclusions Statistically, the results of this study attest that improving coherence has a small effect on consumer comprehension of clinical text, but the task is extremely labor intensive and not scalable. Further research is needed using texts from more diverse clinical domains and more heterogeneous participants, including actual patients. Since comprehensibility of clinical text appears difficult to automate, informatics support tools may most productively support the health care professionals tasked with making clinical information understandable to patients. PMID:22138127
Halting in Single Word Production: A Test of the Perceptual Loop Theory of Speech Monitoring
ERIC Educational Resources Information Center
Slevc, L. Robert; Ferreira, Victor S.
2006-01-01
The "perceptual loop theory" of speech monitoring (Levelt, 1983) claims that inner and overt speech are monitored by the comprehension system, which detects errors by comparing the comprehension of formulated utterances to originally intended utterances. To test the perceptual loop monitor, speakers named pictures and sometimes attempted to halt…
Sensitivity analysis for high-contrast missions with segmented telescopes
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn
2017-09-01
Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will enable us to establish a comprehensive error budget to evaluate the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic conditions, both in terms of segment alignment and telescope stability. Although space telescopes evolve in a more benign environment than ground-based telescopes, residual vibrations and resonant modes of the segments can still degrade the contrast. In this communication, we develop and validate the analytical model, and compare its outputs to images from end-to-end simulations.
Yingying, Zhang; Jiancheng, Lai; Cheng, Yin; Zhenhua, Li
2009-03-01
The dependence of the surface plasmon resonance (SPR) phase difference curve on the complex refractive index of a sample in Kretschmann configuration is discussed comprehensively, based on which a new method is proposed to measure the complex refractive index of turbid liquid. A corresponding experiment setup was constructed to measure the SPR phase difference curve, and the complex refractive index of turbid liquid was determined. By using the setup, the complex refractive indices of Intralipid solutions with concentrations of 5%, 10%, 15%, and 20% are obtained to be 1.3377+0.0005i, 1.3427+0.0028i, 1.3476+0.0034i, and 1.3496+0.0038i, respectively. Furthermore, the error analysis indicates that the root-mean-square errors of both the real and the imaginary parts of the measured complex refractive index are less than 5×10^(-5).
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of Painlevé equation I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed as a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using the particle swarm optimization algorithm as a viable global search method, hybridized with the active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed on the basis of a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results are made against MATHEMATICA solutions, as well as against the variational iteration method and the homotopy perturbation method. PMID:22919371
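As a rough illustration of the "unsupervised error" described above, the sketch below forms a one-hidden-layer feed-forward trial solution for Painlevé I, y'' = 6y² + x, and returns the mean squared residual that a global optimizer such as PSO would minimize. The network size, collocation points, and the finite-difference second derivative are illustrative assumptions (the paper differentiates the network form directly, and also enforces initial conditions, which are omitted here).

```python
# Minimal sketch (assumed details): a one-hidden-layer feed-forward trial
# solution for Painleve I, y'' = 6*y**2 + x, and its unsupervised error
# (mean squared residual over collocation points). A PSO-style optimizer
# would minimize this error over the flattened weight vector theta.
import numpy as np

N_HIDDEN = 10
X = np.linspace(0.0, 1.0, 50)          # collocation points (illustrative)

def unpack(theta):
    w = theta[:N_HIDDEN]               # input weights
    b = theta[N_HIDDEN:2 * N_HIDDEN]   # biases
    v = theta[2 * N_HIDDEN:]           # output weights
    return w, b, v

def trial(x, theta):
    w, b, v = unpack(theta)
    return np.tanh(np.outer(x, w) + b) @ v

def unsupervised_error(theta, x=X, h=1e-3):
    # second derivative approximated by central differences for brevity
    y = trial(x, theta)
    y_pp = (trial(x + h, theta) - 2.0 * y + trial(x - h, theta)) / h**2
    residual = y_pp - 6.0 * y**2 - x
    return np.mean(residual**2)

theta0 = 0.1 * np.random.randn(3 * N_HIDDEN)
print(unsupervised_error(theta0))
```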
Gesture production and comprehension in children with specific language impairment.
Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary
2010-03-01
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups.
Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann
2008-01-01
Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151
NASA Astrophysics Data System (ADS)
Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang
2018-01-01
With the growing capacity of the power system and the trend toward larger units and higher voltages, dispatching operations are becoming more frequent and complicated, and the probability of operation errors increases. To address the lack of anti-error functions, the limited scheduling functionality, and the low working efficiency of the technical support systems used in regional regulation and integration, this paper proposes an integrated architecture for power-network dispatching error prevention based on cloud computing. An integrated error-prevention system covering the Energy Management System (EMS) and the Operation Management System (OMS) has also been constructed. The system architecture has good scalability and adaptability; it can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the capability for regional regulation and anti-error checking, with broad development prospects.
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
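The peak-merging step relies on mass-spectral similarity between candidate peaks within a peak group. A common choice for such a similarity, and only an assumption here rather than the authors' exact metric, is the cosine similarity of the two spectra over a shared m/z grid, as sketched below.

```python
# Hedged sketch: merge two co-eluting candidate peaks when the cosine
# similarity of their mass spectra (intensity vectors on a shared m/z grid)
# exceeds a threshold. The 0.8 cut-off is illustrative, not from the paper.
import numpy as np

def spectral_similarity(spec_a, spec_b):
    a, b = np.asarray(spec_a, float), np.asarray(spec_b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b / denom)

def should_merge(spec_a, spec_b, threshold=0.8):
    return spectral_similarity(spec_a, spec_b) >= threshold
```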
Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.
Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan
2018-01-12
Missing values exist widely in mass-spectrometry (MS) based metabolomics data. Various methods have been applied for handling missing values, but the selection can significantly affect following data analyses. Typically, there are three types of missing values, missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis were used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed the best for MCAR/MAR and QRILC was the favored one for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a public-accessible web-tool for the application of missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).
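For readers unfamiliar with the metrics and methods named above, the sketch below shows one common way to compute NRMSE against withheld true values, together with a simple half-minimum (HM) imputation of left-censored entries. The normalization of RMSE by the standard deviation of the true values is an assumption about the exact NRMSE variant used.

```python
# Sketch (assumed NRMSE variant): normalized root mean squared error between
# imputed and true values, plus a half-minimum (HM) imputation in which the
# missing entries of each metabolite are replaced by half its observed
# minimum, a common choice for left-censored (MNAR) values.
import numpy as np

def nrmse(imputed, truth):
    imputed, truth = np.asarray(imputed, float), np.asarray(truth, float)
    rmse = np.sqrt(np.mean((imputed - truth) ** 2))
    return rmse / np.std(truth)

def half_min_impute(matrix):
    x = np.array(matrix, float, copy=True)   # samples x metabolites, NaN = missing
    for j in range(x.shape[1]):
        col = x[:, j]
        observed = col[~np.isnan(col)]
        if observed.size:
            col[np.isnan(col)] = observed.min() / 2.0
    return x
```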
Shohaimi, Shamarina; Yoke Wei, Wong; Mohd Shariff, Zalilah
2014-01-01
Comprehensive feeding practices questionnaire (CFPQ) is an instrument specifically developed to evaluate parental feeding practices. It has been confirmed among children in America and applied to populations in France, Norway, and New Zealand. In order to extend the application of CFPQ, we conducted a factor structure validation of the translated version of CFPQ (CFPQ-M) using confirmatory factor analysis among mothers of primary school children (N = 397) in Malaysia. Several items were modified for cultural adaptation. Of 49 items, 39 items with loading factors >0.40 were retained in the final model. The confirmatory factor analysis revealed that the final model (twelve-factor model with 39 items and 2 error covariances) displayed the best fit for our sample (Chi-square = 1147; df = 634; P < 0.05; CFI = 0.900; RMSEA = 0.045; SRMR = 0.0058). The instrument with some modifications was confirmed among mothers of school children in Malaysia. The present study extends the usability of the CFPQ and enables researchers and parents to better understand the relationships between parental feeding practices and related problems such as childhood obesity. PMID:25538958
Shi, Lu-Feng; Morozova, Natalia
2012-08-01
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
Detection and clustering of features in aerial images by neuron network-based algorithm
NASA Astrophysics Data System (ADS)
Vozenilek, Vit
2015-12-01
The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and backward projection of clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was compiled (ASP.NET on the Microsoft .NET platform). The main achievements include the finding that man-made objects in aerial images can be successfully identified by detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
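A minimal sketch of the learning rule described above, assuming a single hidden layer and a squared-error loss (the actual network topology and learning rate in the paper are not specified here): a bipolar sigmoid activation and one plain backpropagation weight update.

```python
# Hedged sketch: one backpropagation step for a single-hidden-layer network
# with a bipolar sigmoid activation f(x) = 2/(1+exp(-x)) - 1 and squared
# error. Layer sizes and learning rate are illustrative assumptions.
import numpy as np

def bipolar_sigmoid(x):
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def bipolar_sigmoid_deriv(fx):
    return 0.5 * (1.0 - fx ** 2)        # derivative expressed via the output fx

def backprop_step(x, target, W1, W2, lr=0.1):
    # forward pass
    h = bipolar_sigmoid(W1 @ x)
    y = bipolar_sigmoid(W2 @ h)
    # total error of the network (the quantity being minimized)
    err = y - target
    # backward pass and weight change
    delta_out = err * bipolar_sigmoid_deriv(y)
    delta_hid = (W2.T @ delta_out) * bipolar_sigmoid_deriv(h)
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return 0.5 * float(err @ err), W1, W2
```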
Patient safety education at Japanese medical schools: results of a nationwide survey
2012-01-01
Background Patient safety education, including error prevention strategies and management of adverse events, has become a topic of worldwide concern. The importance of patient safety is also recognized in Japan following two serious medical accidents in 1999. Furthermore, the 2008 revisions to the educational curriculum guidelines by the relevant Ministry of Education include patient safety as part of the core medical curriculum. However, little is known about patient safety education in Japanese medical schools, partly because a comprehensive study has not yet been conducted in this field. Therefore, we conducted a nationwide survey in order to clarify the current status of patient safety education at medical schools in Japan. Results The response rate was 60.0% (n = 48/80). Ninety-eight percent of respondents (n = 47/48) reported integration of patient safety education into their curricula. Thirty-nine percent reported devoting less than five hours to the topic. All schools that teach patient safety reported use of lecture-based teaching methods, while few used alternative methods such as role-playing or in-hospital training. Topics related to medical error theory and legal ramifications of error are widely taught, while practical topics related to error analysis, such as root cause analysis, are less often covered. Conclusions Based on responses to our survey, most Japanese medical schools have incorporated the topic of patient safety into their curricula. However, the number of hours devoted to patient safety education falls short of a sufficient level, with forty percent of medical schools devoting five hours or less to it. In addition, most medical schools employ only lecture-based learning, lacking diversity in teaching methods. Although most medical schools cover basic error theory, error analysis is taught at fewer schools. We still need to make improvements to our medical safety curricula. We believe that this study has implications for the rest of the world as a model of what is possible and a sounding board for what topics might be important. PMID:22574712
Inhibitory Control during Sentence Comprehension in Individuals with Dementia of the Alzheimer Type
Faust, Mark E.; Balota, David A.; Duchek, Janet M.; Gernsbacher, Morton Ann; Smith, Stan
2015-01-01
In two experiments we investigated the extent to which individuals with dementia of the Alzheimer type (DAT) manage the activation of contextually appropriate and inappropriate meanings of ambiguous words during sentence comprehension. DAT individuals and healthy older individuals read sentences that ended in ambiguous words and then determined if a test word fit the overall meaning of the sentence. Analysis of response latencies indicated that DAT individuals were less efficient than healthy older individuals at suppressing inappropriate meanings of ambiguous words not implied by sentence context, but enhanced appropriate meanings to the same extent, if not more, than healthy older adults. DAT individuals were also more likely to allow inappropriate information to actually drive responses (i.e., increased intrusion errors). Overall, the results are consistent with a growing number of studies demonstrating impairments in inhibitory control, with relative preservation of facilitatory processes, in DAT. PMID:9126415
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
ERIC Educational Resources Information Center
Westfall, John M.; Fernald, Douglas H.; Staton, Elizabeth W.; VanVorst, Rebecca; West, David; Pace, Wilson D.
2004-01-01
Medical errors and patient safety have gained increasing attention throughout all areas of medical care. Understanding patient safety in rural settings is crucial for improving care in rural communities. To describe a system to decrease medical errors and improve care in rural and frontier primary care offices. Applied Strategies for Improving…
Stop! Look & Lesson: A Guide to Identifying and Correcting Common Mathematical Errors Strategies.
ERIC Educational Resources Information Center
Palmer, Don; And Others
This book provides a comprehensive collection of 66 teaching strategies and ideas to help overcome problems with number, each linked to a specific kind of error described in the related manual. Most of these strategies are classroom-ready and easily implemented. Some are notes for the teacher to read and then plan activities accordingly, and many…
Altman, Carmit; Goldstein, Tamara; Armon-Lotem, Sharon
2017-01-01
While bilingual children follow the same milestones of language acquisition as monolingual children do in learning the syntactic patterns of their second language (L2), their vocabulary size in L2 often lags behind compared to monolinguals. The present study explores the comprehension and production of nouns and verbs in Hebrew, by two groups of 5- to 6-year olds with typical language development: monolingual Hebrew speakers (N = 26), and Russian-Hebrew bilinguals (N = 27). Analyses not only show quantitative gaps between comprehension and production and between nouns and verbs, with a bilingual effect in both, but also a qualitative difference between monolinguals and bilinguals in their production errors: monolinguals' errors reveal knowledge of the language rules despite temporary access difficulties, while bilinguals' errors reflect gaps in their knowledge of Hebrew (L2). The nature of Hebrew as a Semitic language allows one to explore this qualitative difference in the semantic and morphological level.
A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.
Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B
2013-09-01
To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
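The cumulative-signal check can be pictured roughly as in the sketch below: running totals of measured and predicted frame intensities are compared against a relative tolerance, and delivery is flagged when the discrepancy grows too large. The 10% tolerance and the frame handling are assumptions for illustration only, not the published system's parameters.

```python
# Hedged sketch: a cumulative-signal comparison between measured and
# predicted transit EPID frames. An alarm is raised when the running totals
# diverge by more than a relative tolerance (10% here, purely illustrative).
import numpy as np

def cumulative_signal_check(measured_frames, predicted_frames, tolerance=0.10):
    cum_meas, cum_pred = 0.0, 0.0
    for meas, pred in zip(measured_frames, predicted_frames):
        cum_meas += float(np.sum(meas))
        cum_pred += float(np.sum(pred))
        if cum_pred > 0 and abs(cum_meas - cum_pred) / cum_pred > tolerance:
            return False                 # gross delivery error suspected
    return True
```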
Performance analysis of a new positron camera geometry for high speed, fine particle tracking
NASA Astrophysics Data System (ADS)
Sovechles, J. M.; Boucher, D.; Pax, R.; Leadbeater, T.; Sasmito, A. P.; Waters, K. E.
2017-09-01
A new positron camera arrangement was assembled using 16 ECAT951 modular detector blocks. A closely packed, cross pattern arrangement was selected to produce a highly sensitive cylindrical region for tracking particles with low activities and high speeds. To determine the capabilities of this system a comprehensive analysis of the tracking performance was conducted to determine the 3D location error and location frequency as a function of tracer activity and speed. The 3D error was found to range from 0.54 mm for a stationary particle, consistent for all tracer activities, up to 4.33 mm for a tracer with an activity of 3 MBq and a speed of 4 m · s-1. For lower activity tracers (<10-2 MBq), the error was more sensitive to increases in speed, increasing to 28 mm (at 4 m · s-1), indicating that at these conditions a reliable trajectory is not possible. These results expanded on, but correlated well with, previous literature that only contained location errors for tracer speeds up to 1.5 m · s-1. The camera was also used to track directly activated mineral particles inside a two-inch hydrocyclone and a 142 mm diameter flotation cell. A detailed trajectory, inside the hydrocyclone, of a -212 + 106 µm (10-1 MBq) quartz particle displayed the expected spiralling motion towards the apex. This was the first time a mineral particle of this size had been successfully traced within a hydrocyclone, however more work is required to develop detailed velocity fields.
Acquired dyslexia in Serbian speakers with Broca's and Wernicke's aphasia.
Vuković, Mile; Vuković, Irena; Miller, Nick
2016-01-01
This study examined patterns of acquired dyslexia in Serbian aphasic speakers, comparing profiles of groups with Broca's versus Wernicke's aphasia. The study also looked at the relationship of reading and auditory comprehension and between reading comprehension and reading aloud in these groups. Participants were 20 people with Broca's and 20 with Wernicke's aphasia. They were asked to read aloud and to understand written material from the Serbian adaptation of the Boston Diagnostic Aphasia Examination. A Serbian Word Reading Aloud Test was also used. The people with Broca's aphasia achieved better results in reading aloud and in reading comprehension than those with Wernicke's aphasia. Those with Wernicke's aphasia showed significantly more semantic errors than those with Broca's aphasia who had significantly more morphological and phonological errors. From the data we inferred that lesion sites accorded with previous work on networks associated with Broca's and Wernicke's aphasia and with a posterior-anterior axis for reading processes centred on (left) parietal-temporal-frontal lobes. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Jensen De Lopez, Kristine; Olsen, Lone Sundahl; Chondrogianni, Vasiliki
2014-01-01
This study examines the comprehension and production of subject and object relative clauses (SRCs, ORCs) by children with Specific Language Impairment (SLI) and their typically developing (TD) peers. The purpose is to investigate whether relative clauses are problematic for Danish children with SLI and to compare errors with those produced by TD…
Evidence on the Effectiveness of Comprehensive Error Correction in Second Language Writing
ERIC Educational Resources Information Center
Van Beuningen, Catherine G.; De Jong, Nivja H.; Kuiken, Folkert
2012-01-01
This study investigated the effect of direct and indirect comprehensive corrective feedback (CF) on second language (L2) learners' written accuracy (N = 268). The study set out to explore the value of CF as a revising tool as well as its capacity to support long-term accuracy development. In addition, we tested Truscott's (e.g., 2001, 2007) claims…
NASA Astrophysics Data System (ADS)
Ma, Yingzhao; Yang, Yuan; Han, Zhongying; Tang, Guoqiang; Maguire, Lane; Chu, Zhigang; Hong, Yang
2018-01-01
The objective of this study is to comprehensively evaluate the new Ensemble Multi-Satellite Precipitation Dataset using the Dynamic Bayesian Model Averaging scheme (EMSPD-DBMA) at daily and 0.25° scales from 2001 to 2015 over the Tibetan Plateau (TP). Error analysis against gauge observations revealed that EMSPD-DBMA captured the spatiotemporal pattern of daily precipitation with an acceptable Correlation Coefficient (CC) of 0.53 and a Relative Bias (RB) of -8.28%. Moreover, EMSPD-DBMA outperformed IMERG and GSMaP-MVK in almost all metrics in the summers of 2014 and 2015, with the lowest RB and Root Mean Square Error (RMSE) values of -2.88% and 8.01 mm/d, respectively. It also better reproduced the Probability Density Function (PDF) of daily rainfall amount and estimated moderate and heavy rainfall better than both IMERG and GSMaP-MVK. Further, hydrological evaluation with the Coupled Routing and Excess STorage (CREST) model in the Upper Yangtze River region indicated that the EMSPD-DBMA-forced simulation showed satisfactory hydrological performance in terms of streamflow prediction, with Nash-Sutcliffe coefficient of Efficiency (NSE) values of 0.82 and 0.58, compared to the gauge-forced simulation (0.88 and 0.60), for the calibration and validation periods, respectively. EMSPD-DBMA also reproduced peak flows better than the new Multi-Source Weighted-Ensemble Precipitation Version 2 (MSWEP V2) product, indicating a promising prospect for the hydrological utility of the ensemble satellite precipitation data. This study is among the early comprehensive evaluations of blended multi-satellite precipitation data across the TP and should be valuable for improving the DBMA algorithm in regions with complex terrain.
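For reference, the error metrics quoted above can be computed as in the sketch below. These are the standard definitions; the exact conventions used by the study (for example, percent versus fraction for RB) are assumptions here.

```python
# Sketch of standard skill metrics used above: correlation coefficient (CC),
# relative bias (RB, in percent), root mean square error (RMSE), and
# Nash-Sutcliffe efficiency (NSE). Conventions are assumed, not verified
# against the study's exact formulas.
import numpy as np

def skill_metrics(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    cc = np.corrcoef(sim, obs)[0, 1]
    rb = 100.0 * (sim.sum() - obs.sum()) / obs.sum()
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"CC": cc, "RB_percent": rb, "RMSE": rmse, "NSE": nse}
```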
Methods for estimating streamflow at mountain fronts in southern New Mexico
Waltemeyer, S.D.
1994-01-01
The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage areas and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.
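Regional regression equations of this kind are typically fit in log space. The sketch below shows a least-squares fit of log mean annual streamflow on log drainage area and log mean annual precipitation; the log-linear form and variable names are assumptions for illustration, since the report's actual equation is not reproduced here.

```python
# Hedged sketch: fit log10(Q) = b0 + b1*log10(area) + b2*log10(precip) by
# ordinary least squares, a common form for regional streamflow regressions.
# The functional form is an assumption; the report's equation is not shown.
import numpy as np

def fit_streamflow_regression(area, precip, q_mean):
    X = np.column_stack([np.ones(len(area)),
                         np.log10(area),
                         np.log10(precip)])
    y = np.log10(q_mean)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                          # b0, b1, b2

def predict_streamflow(coef, area, precip):
    log_q = coef[0] + coef[1] * np.log10(area) + coef[2] * np.log10(precip)
    return 10.0 ** log_q
```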
Speech errors of amnesic H.M.: unlike everyday slips-of-the-tongue.
MacKay, Donald G; James, Lori E; Hadley, Christopher B; Fogler, Kethera A
2011-03-01
Three language production studies indicate that amnesic H.M. produces speech errors unlike everyday slips-of-the-tongue. Study 1 was a naturalistic task: H.M. and six controls closely matched for age, education, background and IQ described what makes captioned cartoons funny. Nine judges rated the descriptions blind to speaker identity and gave reliably more negative ratings for coherence, vagueness, comprehensibility, grammaticality, and adequacy of humor-description for H.M. than the controls. Study 2 examined "major errors", a novel type of speech error that is uncorrected and reduces the coherence, grammaticality, accuracy and/or comprehensibility of an utterance. The results indicated that H.M. produced seven types of major errors reliably more often than controls: substitutions, omissions, additions, transpositions, reading errors, free associations, and accuracy errors. These results contradict recent claims that H.M. retains unconscious or implicit language abilities and produces spoken discourse that is "sophisticated," "intact" and "without major errors." Study 3 examined whether three classical types of errors (omissions, additions, and substitutions of words and phrases) differed for H.M. versus controls in basic nature and relative frequency by error type. The results indicated that omissions, and especially multi-word omissions, were relatively more common for H.M. than the controls; and substitutions violated the syntactic class regularity (whereby, e.g., nouns substitute with nouns but not verbs) relatively more often for H.M. than the controls. These results suggest that H.M.'s medial temporal lobe damage impaired his ability to rapidly form new connections between units in the cortex, a process necessary to form complete and coherent internal representations for novel sentence-level plans. In short, different brain mechanisms underlie H.M.'s major errors (which reflect incomplete and incoherent sentence-level plans) versus everyday slips-of-the tongue (which reflect errors in activating pre-planned units in fully intact sentence-level plans). Implications of the results of Studies 1-3 are discussed for systems theory, binding theory and relational memory theories. Copyright © 2010 Elsevier Srl. All rights reserved.
CO2 laser ranging systems study
NASA Technical Reports Server (NTRS)
Filippi, C. A.
1975-01-01
The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
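The Gaussian-fitting interpolation examined above can be illustrated roughly as follows: fit a Gaussian to the few samples of a first-dimension peak, then evaluate it on a finer retention-time grid before alignment. The use of scipy's curve_fit and the initial guesses are assumptions for the sketch, not the authors' implementation.

```python
# Hedged sketch: Gaussian-fitting interpolation of an undersampled
# first-dimension chromatographic peak. A Gaussian is fitted to the sampled
# points and evaluated on a finer grid prior to retention-time alignment.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, center, sigma):
    return amplitude * np.exp(-0.5 * ((t - center) / sigma) ** 2)

def interpolate_peak(t_sampled, intensity, n_fine=200):
    t_sampled = np.asarray(t_sampled, float)
    intensity = np.asarray(intensity, float)
    p0 = [intensity.max(), t_sampled[np.argmax(intensity)],
          (t_sampled[-1] - t_sampled[0]) / 4.0]      # crude initial guesses
    params, _ = curve_fit(gaussian, t_sampled, intensity, p0=p0)
    t_fine = np.linspace(t_sampled[0], t_sampled[-1], n_fine)
    return t_fine, gaussian(t_fine, *params)
```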
Apel, Kenn; Masterson, Julie J
2015-04-01
The purpose of this study was to determine whether students with and without hearing loss (HL) differed in their spelling abilities and, specifically, in the underlying linguistic awareness skills that support spelling ability. Furthermore, we examined whether there were differences between the two groups in the relationship between reading and spelling. We assessed the spelling, word-level reading, and reading comprehension skills of nine students with cochlear implants and nine students with typical hearing who were matched for reading age. The students' spellings were analyzed to determine whether the misspellings were due to errors with phonemic awareness, orthographic pattern or morphological awareness, or poor mental graphemic representations. The students with HL demonstrated markedly less advanced spelling abilities than the students with typical hearing. For the students with HL, the misspellings were primarily due to deficiencies in orthographic pattern and morphological awareness. Correlations between measures of spelling and both real word reading and reading comprehension were lower for the students with HL. With additional investigations using a similar approach to spelling analysis that captures the underlying causes for spelling errors, researchers will better understand the linguistic awareness abilities that students with HL bring to the task of reading and spelling. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.
Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep
2016-04-01
Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system related (n=192) and non-remedial (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%) whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Self-organizing neural networks--an alternative way of cluster analysis in clinical chemistry.
Reibnegger, G; Wachter, H
1996-04-15
Supervised learning schemes have been employed by several workers for training neural networks designed to solve clinical problems. We demonstrate that unsupervised techniques can also produce interesting and meaningful results. Using a data set on the chemical composition of milk from 22 different mammals, we demonstrate that self-organizing feature maps (Kohonen networks) as well as a modified version of error backpropagation technique yield results mimicking conventional cluster analysis. Both techniques are able to project a potentially multi-dimensional input vector onto a two-dimensional space whereby neighborhood relationships remain conserved. Thus, these techniques can be used for reducing dimensionality of complicated data sets and for enhancing comprehensibility of features hidden in the data matrix.
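A minimal sketch of the self-organizing (Kohonen) map training referred to above, with a 2-D grid of units, a Gaussian neighborhood, and a shrinking learning rate. Grid size and schedules are illustrative assumptions; the clinical data handling of the paper is not reproduced.

```python
# Hedged sketch of Kohonen self-organizing map training: each input vector
# is mapped to a 2-D grid of units, and the best-matching unit and its
# neighbours are pulled toward the input, preserving neighbourhood relations.
import numpy as np

def train_som(data, grid=(6, 6), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    data = np.asarray(data, float)
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)],
                      float).reshape(rows, cols, 2)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in data:
            dist = np.linalg.norm(weights - x, axis=2)      # distance to each unit
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            grid_dist = np.linalg.norm(coords - coords[bmu], axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights
```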
Production and Comprehension of Time Reference in Korean Nonfluent Aphasia
Lee, Jiyeon; Kwon, Miseon; Na, Hae Ri; Bastiaanse, Roelien; Thompson, Cynthia K.
2015-01-01
Objectives Individuals with nonfluent agrammatic aphasia show impaired production and comprehension of time reference via verbal morphology. However, cross-linguistic findings to date suggest inconsistent evidence as to whether tense processing in general is impaired or time reference to the past is selectively difficult in this population. This study examined production and comprehension of time reference via verb morphology in Korean-speaking individuals with nonfluent aphasia. Methods A group of 9 healthy controls and 8 individuals with nonfluent aphasia (5 for the production task) participated in the study. Sentence priming production and auditory sentence to picture matching tasks were used, parallel with the previous cross-linguistic experiments in English, Chinese, Turkish, and others. Results The participants with nonfluent aphasia showed different patterns of impairment in production and comprehension. In production, they were impaired in all time references with errors being dominated by substitution of incorrect time references and other morpho-phonologically well-formed errors, indicating a largely intact morphological affixation process. In comprehension, they showed selective impairment of the past, consistent with the cross-linguistic evidence from English, Chinese, Turkish, and others. Conclusion The findings suggest that interpretation of past time reference poses particular difficulty in nonfluent aphasia irrespective of typological characteristics of languages; however, in production, language-specific morpho-semantic functions of verbal morphology may play a significant role in selective breakdowns of time reference. PMID:26290861
Hunt, Randall J.
2012-01-01
Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.
Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.
Gibson, Edward; Bergen, Leon; Piantadosi, Steven T
2013-05-14
Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel.
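A toy numerical illustration of the noisy-channel idea (all numbers and the simple edit-based noise model are assumptions, not the authors' model): the comprehender weighs the prior plausibility of a candidate meaning against the cost of the corruption needed to turn that meaning's wording into the observed sentence.

```python
# Toy sketch of noisy-channel comprehension (illustrative numbers only):
# P(intended | observed) is proportional to P(intended) * P(observed | intended),
# with the likelihood decaying in the number of edits separating the intended
# wording from the observed one.
def posterior(candidates, per_edit_prob=0.05):
    # candidates: list of (meaning, prior_plausibility, n_edits_to_observed)
    scores = {m: prior * (per_edit_prob ** edits)
              for m, prior, edits in candidates}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

# Hypothetical observed sentence "The mother gave the candle the daughter":
# the literal (implausible) parse needs no edits; the plausible parse needs
# one inserted word ("to").
print(posterior([("candle is recipient (literal)", 0.01, 0),
                 ("daughter is recipient (plausible)", 0.99, 1)]))
```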
Comprehension deficits among older patients in a quick diagnostic unit.
Hvidt, Lisa Nebelin; Hvidt, Kristian Nebelin; Madsen, Kim; Schmidt, Thomas A
2014-01-01
Higher prevalence of multiple illnesses and cognitive impairment among older patients pose a risk of comprehension difficulties, potentially leading to medication errors. Therefore, the objective of this study was to investigate comprehension of discharge instructions among older patients admitted to a Quick Diagnostic Unit (QDU). One hundred and two patients discharged from the QDU answered a questionnaire covering understanding of their hospitalization and discharge plan. Patients' ability to recall discharge instructions and awareness of comprehension deficits, ie, ability to identify the misconceived information, were evaluated by comparing the questionnaires with the discharge letters. The population was divided into an older group (age ≥65 years) and a younger group. The older group (n=40) was less able to recall correct medication instructions when compared to the younger group (54% versus 78%, respectively; P=0.02). In multiple logistic regression analysis, correct recall of medication instructions was 4.2 times higher for the younger group compared to the older group (odds ratio 4.2, 95% confidence interval 1.5-11.9, P=0.007) when adjusted for sex and education. The older patients were less aware of their own comprehension deficits, and in respect to medication instructions awareness decreased 6.1% for each additional year of age (odds ratio 0.939, 95% confidence interval 0.904-0.98, P=0.001) when adjusted for sex and education. Older patients were less able to recall correct medication instructions and less aware of their comprehension deficits after discharge from a QDU. The findings of the present study emphasize the importance of thorough communication and follow-up when treating older patients.
Adaptive Estimation of Multiple Fading Factors for GPS/INS Integrated Navigation Systems.
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2017-06-01
The Kalman filter has been widely applied in the field of dynamic navigation and positioning. However, its performance will be degraded in the presence of significant model errors and uncertain interferences. In the literature, the fading filter was proposed to control the influences of the model errors, and the H-infinity filter can be adopted to address the uncertainties by minimizing the estimation error in the worst case. In this paper, a new multiple fading factor, suitable for the Global Positioning System (GPS) and the Inertial Navigation System (INS) integrated navigation system, is proposed based on the optimization of the filter, and a comprehensive filtering algorithm is constructed by integrating the advantages of the H-infinity filter and the proposed multiple fading filter. Measurement data of the GPS/INS integrated navigation system are collected under actual conditions. Stability and robustness of the proposed filtering algorithm are tested with various experiments and contrastive analysis are performed with the measurement data. Results demonstrate that both the filter divergence and the influences of outliers are restrained effectively with the proposed filtering algorithm, and precision of the filtering results are improved simultaneously.
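The fading idea can be sketched as follows: a factor λ ≥ 1 inflates the predicted covariance so that older data are down-weighted when the innovations become inconsistent with the model. The single-factor form below and the way λ is computed are simplifications and assumptions for illustration; the paper derives multiple fading factors and combines them with an H-infinity filter.

```python
# Hedged sketch: one Kalman filter step with a single fading factor that
# inflates the predicted covariance when innovations grow too large. This is
# a simplified illustration, not the paper's multiple-fading-factor scheme.
import numpy as np

def fading_kalman_step(x, P, z, F, H, Q, R):
    # prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # innovation and a simple fading factor lambda >= 1 (assumed heuristic)
    innov = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    ratio = float(innov @ np.linalg.solve(S, innov)) / len(innov)
    lam = max(1.0, ratio)                # inflate covariance if innovations too large
    P_pred = lam * (F @ P @ F.T) + Q
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, lam
```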
Tadpole-improved SU(2) lattice gauge theory
NASA Astrophysics Data System (ADS)
Shakespeare, Norman H.; Trottier, Howard D.
1999-01-01
A comprehensive analysis of tadpole-improved SU(2) lattice gauge theory is made. Simulations are done on isotropic and anisotropic lattices, with and without improvement. Two tadpole renormalization schemes are employed, one using average plaquettes, the other using mean links in the Landau gauge. Simulations are done with spatial lattice spacings a_s in the range of about 0.1-0.4 fm. Results are presented for the static quark potential, the renormalized lattice anisotropy a_t/a_s (where a_t is the "temporal" lattice spacing), and for the scalar and tensor glueball masses. Tadpole improvement significantly reduces discretization errors in the static quark potential and in the scalar glueball mass, and results in very little renormalization of the bare anisotropy that is input to the action. We also find that tadpole improvement using mean links in the Landau gauge results in smaller discretization errors in the scalar glueball mass (as well as in the static quark potential), compared to when average plaquettes are used. The possibility is also raised that further improvement in the scalar glueball mass may result when the coefficients of the operators which correct for discretization errors in the action are computed beyond the tree level.
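For context, the two renormalization schemes mentioned above define the tadpole (mean-field) factor either from the average plaquette or from the mean link in Landau gauge. In standard notation, given here as background rather than quoted from the paper,

```latex
% Standard mean-field (tadpole) factors for SU(N) (here N = 2), given as
% background; the paper's precise conventions are not reproduced here.
u_0^{\text{plaq}} \;=\; \left\langle \tfrac{1}{N}\,\operatorname{Re}\operatorname{Tr}\,U_{\text{plaq}} \right\rangle^{1/4},
\qquad
u_0^{\text{Landau}} \;=\; \left\langle \tfrac{1}{N}\,\operatorname{Re}\operatorname{Tr}\,U_{\mu}(x) \right\rangle_{\text{Landau gauge}} .
```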
Differences among Job Positions Related to Communication Errors at Construction Sites
NASA Astrophysics Data System (ADS)
Takahashi, Akiko; Ishida, Toshiro
In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.
New method for designing serial resonant power converters
NASA Astrophysics Data System (ADS)
Hinov, Nikolay
2017-12-01
The current work presents a comprehensive method for the design of serial resonant energy converters. The method is based on a new simplified approach to the analysis of this kind of power electronic device. It rests on assuming resonant operation when deriving the relation between input and output voltage, regardless of the actual operating mode (i.e., whether the control frequency is below or above the resonant frequency). This approach is named the 'quasiresonant method of analysis', because it treats all operating modes as approximately resonant. The error introduced by this assumption is estimated and compared to the classic analysis. The quasiresonant method of analysis offers two main advantages: speed and ease in designing the presented power circuits. Hence it is very useful in practice and in teaching power electronics. Its applicability is demonstrated with mathematical modelling and computer simulation.
Zdeněk Kopal: Numerical Analyst
NASA Astrophysics Data System (ADS)
Křížek, M.
2015-07-01
We give a brief overview of Zdeněk Kopal's life, his activities in the Czech Astronomical Society, his collaboration with Vladimír Vand, and his studies at Charles University, Cambridge, Harvard, and MIT. Then we survey Kopal's professional life. He published 26 monographs and 20 conference proceedings. We will concentrate on Kopal's extensive monograph Numerical Analysis (1955, 1961) that is widely accepted to be the first comprehensive textbook on numerical methods. It describes, for instance, methods for polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations with initial or boundary conditions, and numerical solution of integral and integro-differential equations. Special emphasis will be laid on error analysis. Kopal himself applied numerical methods to celestial mechanics, in particular to the N-body problem. He also used Fourier analysis to investigate light curves of close binaries to discover their properties. This is, in fact, a problem from mathematical analysis.
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
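One way to picture the numerical modeling of per-pulse uncertainty is a Monte Carlo propagation of sub-system error sources through a simplified, flat-terrain, 2-D geolocation relation, as sketched below. The error magnitudes and the reduced geometry are assumptions for illustration only, not the study's full sensor model.

```python
# Hedged sketch: Monte Carlo propagation of ranging and scan/attitude angle
# uncertainties to vertical error for a single pulse, using a simplified 2-D
# geometry z = H - r*cos(theta). The 1-sigma error magnitudes are
# illustrative assumptions, not the values used in the study.
import numpy as np

def vertical_error_sigma(altitude_m, scan_angle_deg,
                         range_sigma_m=0.03, angle_sigma_deg=0.01,
                         n_draws=100_000, seed=1):
    rng = np.random.default_rng(seed)
    theta = np.deg2rad(scan_angle_deg)
    r_true = altitude_m / np.cos(theta)                  # slant range to flat ground
    r = r_true + rng.normal(0.0, range_sigma_m, n_draws)
    th = theta + rng.normal(0.0, np.deg2rad(angle_sigma_deg), n_draws)
    z = altitude_m - r * np.cos(th)                      # ground elevation estimate
    return z.std()

print(vertical_error_sigma(1200.0, 15.0))    # grows with scan angle and altitude
```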
NASA Astrophysics Data System (ADS)
Gao, H.; Zhang, S.; Nijssen, B.; Zhou, T.; Voisin, N.; Sheffield, J.; Lee, K.; Shukla, S.; Lettenmaier, D. P.
2017-12-01
Despite its errors and uncertainties, the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis real-time product (TMPA-RT) has been widely used for hydrological monitoring and forecasting due to its timely availability for real-time applications. To evaluate the utility of TMPA-RT in hydrologic predictions, many studies have compared modeled streamflows driven by TMPA-RT against gauge data. However, because of the limited availability of streamflow observations in data sparse regions, there is still a lack of comprehensive comparisons for TMPA-RT based hydrologic predictions at the global scale. Furthermore, it is expected that its skill is less optimal at the subbasin scale than the basin scale. In this study, we evaluate and characterize the utility of the TMPA-RT product over selected global river basins during the period of 1998 to 2015 using the TMPA research product (TMPA-RP) as a reference. The Variable Infiltration Capacity (VIC) model, which was calibrated and validated previously, is adopted to simulate streamflows driven by TMPA-RT and TMPA-RP, respectively. The objective of this study is to analyze the spatial and temporal characteristics of the hydrologic predictions by answering the following questions: (1) How do the precipitation errors associated with the TMPA-RT product transform into streamflow errors with respect to geographical and climatological characteristics? (2) How do streamflow errors vary across scales within a basin?
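One simple way to quantify how precipitation errors propagate to streamflow in such a comparison is to score the TMPA-RT-driven simulation against the TMPA-RP-driven reference. The sketch below computes relative bias and Nash-Sutcliffe efficiency for two synthetic daily series, as an illustration of the kind of metrics typically used; the study's actual evaluation design may differ.

```python
import numpy as np

def relative_bias(sim, ref):
    """Total-volume bias of the RT-driven flow relative to the RP-driven flow."""
    return (sim.sum() - ref.sum()) / ref.sum()

def nse(sim, ref):
    """Nash-Sutcliffe efficiency: 1 is perfect, <=0 is no better than the mean."""
    return 1.0 - np.sum((sim - ref) ** 2) / np.sum((ref - ref.mean()) ** 2)

rng = np.random.default_rng(0)
q_rp = 100.0 + 30.0 * np.sin(np.linspace(0, 12 * np.pi, 365))   # synthetic reference flow
q_rt = q_rp * 1.05 + rng.normal(0.0, 8.0, q_rp.size)            # biased, noisy RT-driven flow

print(f"relative bias = {relative_bias(q_rt, q_rp):+.2%}")
print(f"NSE           = {nse(q_rt, q_rp):.3f}")
```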
Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)
1999-01-01
A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimates then compensates for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in completely novel GPS (Global Positioning System) receiver architecture.
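As a generic illustration of the "adaptive identification and estimation" idea (not the patented receiver architecture or its actual deconvolution algorithm), the sketch below runs a least-mean-squares adaptive filter that identifies a short multipath channel from a known reference signal and then subtracts the estimated echoes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, taps, mu = 4000, 4, 0.01
s = rng.choice([-1.0, 1.0], size=n)                  # known reference (e.g., a spreading code)
h_true = np.array([1.0, 0.35, 0.15, 0.05])           # direct path + multipath echoes (illustrative)
x = np.convolve(s, h_true)[:n] + 0.01 * rng.normal(size=n)

w = np.zeros(taps)                                   # adaptive channel estimate
for k in range(taps, n):
    u = s[k - taps + 1:k + 1][::-1]                  # most recent reference samples
    e = x[k] - w @ u                                 # prediction error
    w += mu * e * u                                  # LMS update

print("estimated channel:", np.round(w, 3))          # should approach h_true
direct_only = x - np.convolve(s, np.r_[0.0, w[1:]])[:n]   # subtract estimated echoes
echo_reduction = np.mean((x - s) ** 2) / np.mean((direct_only - s) ** 2)
print(f"multipath power reduced by a factor of ~{echo_reduction:.0f}")
```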
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-01-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474
Caetano, J V; Percin, M; van Oudheusden, B W; Remes, B; de Wagter, C; de Croon, G C H E; de Visser, C C
2015-08-20
An accurate knowledge of the unsteady aerodynamic forces acting on a bio-inspired, flapping-wing micro air vehicle (FWMAV) is crucial in the design development and optimization cycle. Two different types of experimental approaches are often used: determination of forces from position data obtained from external optical tracking during free flight, or direct measurements of forces by attaching the FWMAV to a force transducer in a wind-tunnel. This study compares the quality of the forces obtained from both methods as applied to a 17.4 gram FWMAV capable of controlled flight. A comprehensive analysis of various error sources is performed. The effects of different factors, e.g., measurement errors, error propagation, numerical differentiation, filtering frequency selection, and structural eigenmode interference, are assessed. For the forces obtained from free flight experiments it is shown that a data acquisition frequency below 200 Hz and an accuracy in the position measurements lower than ± 0.2 mm may considerably hinder determination of the unsteady forces. In general, the force component parallel to the fuselage determined by the two methods compares well for identical flight conditions; however, a significant difference was observed for the forces along the stroke plane of the wings. This was found to originate from the restrictions applied by the clamp to the dynamic oscillations observed in free flight and from the structural resonance of the clamped FWMAV structure, which generates loads that cannot be distinguished from the external forces. Furthermore, the clamping position was found to have a pronounced influence on the eigenmodes of the structure, and this effect should be taken into account for accurate force measurements.
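For the free-flight approach described above, forces are recovered by differentiating tracked positions twice, which is exactly where measurement noise, filtering frequency, and sampling rate enter the error budget. The fragment below is a minimal, illustrative version using a Savitzky-Golay smoothing differentiator (one reasonable choice; the study's actual filtering pipeline may differ), with made-up numbers except for the vehicle mass and the sampling/noise bounds quoted above.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 200.0                      # sampling rate in Hz (the lower bound discussed above)
dt = 1.0 / fs
m = 0.0174                      # vehicle mass in kg (17.4 g FWMAV)

t = np.arange(0.0, 2.0, dt)
x_true = 0.05 * np.sin(2 * np.pi * 1.5 * t)          # synthetic position along one axis (m)
x_meas = x_true + np.random.default_rng(2).normal(0.0, 2e-4, t.size)  # ~0.2 mm tracking noise

# Smoothed second derivative (acceleration) via a Savitzky-Golay differentiator.
acc = savgol_filter(x_meas, window_length=31, polyorder=3, deriv=2, delta=dt)
force = m * acc                                       # external force estimate along this axis (N)

print(f"peak estimated force: {np.abs(force).max()*1000:.2f} mN")
```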
High-quality two-nucleon potentials up to fifth order of the chiral expansion
NASA Astrophysics Data System (ADS)
Entem, D. R.; Machleidt, R.; Nosyk, Y.
2017-08-01
We present NN potentials through five orders of chiral effective field theory ranging from leading order (LO) to next-to-next-to-next-to-next-to-leading order (N4LO ). The construction may be perceived as consistent in the sense that the same power counting scheme as well as the same cutoff procedures are applied in all orders. Moreover, the long-range parts of these potentials are fixed by the very accurate π N low-energy constants (LECs) as determined in the Roy-Steiner equations analysis by Hoferichter, Ruiz de Elvira, and coworkers. In fact, the uncertainties of these LECs are so small that a variation within the errors leads to effects that are essentially negligible, reducing the error budget of predictions considerably. The NN potentials are fit to the world NN data below the pion-production threshold of the year 2016. The potential of the highest order (N4LO ) reproduces the world NN data with the outstanding χ2/datum of 1.15, which is the highest precision ever accomplished for any chiral NN potential to date. The NN potentials presented may serve as a solid basis for systematic ab initio calculations of nuclear structure and reactions that allow for a comprehensive error analysis. In particular, the consistent order by order development of the potentials will make possible a reliable determination of the truncation error at each order. Our family of potentials is nonlocal and, generally, of soft character. This feature is reflected in the fact that the predictions for the triton binding energy (from two-body forces only) converges to about 8.1 MeV at the highest orders. This leaves room for three-nucleon-force contributions of moderate size.
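The "reliable determination of the truncation error at each order" mentioned above is commonly done with an order-by-order prescription of the Epelbaum-Krebs-Meißner type. The sketch below implements one such rule for a single observable, purely as an illustration: the expansion parameter, breakdown scale, and order-by-order values are made up, and the authors' actual uncertainty analysis may use a different prescription.

```python
# Illustrative chiral-EFT truncation error estimate for one observable X,
# given its values at successive orders LO, NLO, N2LO, ... (hypothetical numbers).
X = {0: 10.0, 2: 8.9, 3: 8.35, 4: 8.15, 5: 8.10}   # chiral order nu -> prediction (e.g., MeV)
Q = 0.31                                            # expansion parameter max(p, m_pi)/Lambda_b (illustrative)

def truncation_error(X, Q, nu):
    """Error at order nu: max over Q^(nu+1)|X_LO| and Q^(nu+1-j)|X_j - X_(j-1)| for j <= nu."""
    orders = sorted(k for k in X if k <= nu)
    terms = [Q ** (nu + 1) * abs(X[orders[0]])]
    for prev, cur in zip(orders, orders[1:]):
        terms.append(Q ** (nu + 1 - cur) * abs(X[cur] - X[prev]))
    return max(terms)

for nu in (2, 3, 4, 5):
    print(f"order {nu}: X = {X[nu]:.2f} +/- {truncation_error(X, Q, nu):.3f}")
```

The uncertainty band shrinks order by order as the consecutive predictions converge, which is what makes a consistent order-by-order family of potentials useful for error analysis.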
Consistent, high-quality two-nucleon potentials up to fifth order of the chiral expansion
NASA Astrophysics Data System (ADS)
Machleidt, R.
2018-02-01
We present N N potentials through five orders of chiral effective field theory ranging from leading order (LO) to next-to-next-to-next-to-next-to-leading order (N4LO). The construction may be perceived as consistent in the sense that the same power counting scheme as well as the same cutoff procedures are applied in all orders. Moreover, the long-range parts of these potentials are fixed by the very accurate πN low-energy constants (LECs) as determined in the Roy-Steiner equations analysis by Hoferichter, Ruiz de Elvira and coworkers. In fact, the uncertainties of these LECs are so small that a variation within the errors leads to effects that are essentially negligible, reducing the error budget of predictions considerably. The N N potentials are fit to the world N N data below pion-production threshold of the year of 2016. The potential of the highest order (N4LO) reproduces the world N N data with the outstanding χ 2/datum of 1.15, which is the highest precision ever accomplished for any chiral N N potential to date. The N N potentials presented may serve as a solid basis for systematic ab initio calculations of nuclear structure and reactions that allow for a comprehensive error analysis. In particular, the consistent order by order development of the potentials will make possible a reliable determination of the truncation error at each order. Our family of potentials is non-local and, generally, of soft character. This feature is reflected in the fact that the predictions for the triton binding energy (from two-body forces only) converges to about 8.1 MeV at the highest orders. This leaves room for three-nucleon-force contributions of moderate size.
Developing and Implementing the Data Mining Algorithms in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
Visual saliency detection based on in-depth analysis of sparse representation
NASA Astrophysics Data System (ADS)
Wang, Xin; Shen, Siqiu; Ning, Chen
2018-03-01
Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, among which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize only partial characteristics of sparse representation and lack in-depth analysis. Thus, they may have limited detection performance. Motivated by this, the present paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned with randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on the in-depth analysis of sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to be from the most salient region, are screened out and taken away from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
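The core quantity in this kind of approach is the sparse reconstruction error of an image patch under a learned dictionary. The following sketch shows one way to compute it with a small greedy orthogonal matching pursuit; the dictionary, patch size, and sparsity level are illustrative stand-ins rather than the paper's learned discriminative dictionaries.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with k atoms of D."""
    residual, idx, coef = x.copy(), [], np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))    # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)  # refit on selected atoms
        residual = x - D[:, idx] @ coef
    return idx, coef, float(np.linalg.norm(residual))          # reconstruction error

rng = np.random.default_rng(3)
D = rng.normal(size=(64, 256))                 # stand-in dictionary: 8x8 patches, 256 atoms
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
patch = rng.normal(size=64)                    # stand-in image patch (flattened 8x8)

atoms, coefs, err = omp(D, patch, k=5)
print("selected atoms:", atoms, " reconstruction error:", round(err, 3))
```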
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Yu, Chengxin; Ding, Xinhua
2018-01-01
In this study, digital photography is used to monitor the instantaneous deformation of a masonry wall under seismic oscillation. To obtain higher measurement accuracy, the image matching-time baseline parallax method (IM-TBPM) is used to correct errors caused by changes in the intrinsic and extrinsic parameters of the digital cameras. Results show that the average errors of control point C5 are 0.79 mm, 0.44 mm and 0.96 mm in the X, Z and combined directions, respectively, and those of control point C6 are 0.49 mm, 0.44 mm and 0.71 mm. These results suggest that IM-TBPM can meet the accuracy requirements of instantaneous deformation monitoring. Under seismic oscillation, cracks first develop in the middle-to-lower portion of the masonry wall, and shear failure then occurs in the middle of the wall. This study provides a technical basis for analyzing the crack development pattern of masonry structures under seismic oscillation and has significant implications for improved construction of masonry structures in earthquake-prone areas.
Kehimkar, Benjamin; Parsons, Brendon A; Hoggard, Jamin C; Billingsley, Matthew C; Bruno, Thomas J; Synovec, Robert E
2015-01-01
Recent efforts in predicting rocket propulsion (RP-1) fuel performance through modeling put greater emphasis on obtaining detailed and accurate fuel properties, as well as elucidating the relationships between fuel compositions and their properties. Herein, we study multidimensional chromatographic data obtained by comprehensive two-dimensional gas chromatography combined with time-of-flight mass spectrometry (GC × GC-TOFMS) to analyze RP-1 fuels. For GC × GC separations, RTX-Wax (polar stationary phase) and RTX-1 (non-polar stationary phase) columns were implemented for the primary and secondary dimensions, respectively, to separate the chemical compound classes (alkanes, cycloalkanes, aromatics, etc.), providing a significant level of chemical compositional information. The GC × GC-TOFMS data were analyzed using partial least squares regression (PLS) chemometric analysis to model and predict advanced distillation curve (ADC) data for ten RP-1 fuels that were previously analyzed using the ADC method. The PLS modeling provides insight into the chemical species that impact the ADC data. The PLS modeling correlates compositional information found in the GC × GC-TOFMS chromatograms of each RP-1 fuel, and their respective ADC, and allows prediction of the ADC for each RP-1 fuel with good precision and accuracy. The root-mean-square error of calibration (RMSEC) ranged from 0.1 to 0.5 °C, and was typically below ∼0.2 °C, for the PLS calibration of the ADC modeling with GC × GC-TOFMS data, indicating a good fit of the model to the calibration data. Likewise, the predictive power of the overall method via PLS modeling was assessed using leave-one-out cross-validation (LOOCV) yielding root-mean-square error of cross-validation (RMSECV) ranging from 1.4 to 2.6 °C, and was typically below ∼2.0 °C, at each % distilled measurement point during the ADC analysis.
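A hedged sketch of the chemometric step described above: partial least squares regression mapping chromatogram features to a distillation-curve temperature, with calibration RMSEC and leave-one-out RMSECV. Everything here is synthetic stand-in data (ten samples, a handful of features), so it only illustrates the workflow, not the paper's actual GC × GC-TOFMS feature matrix or results.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(10, 50))                    # stand-in: 10 fuels x 50 chromatographic features
true_w = rng.normal(size=50)
y = X @ true_w * 0.3 + 190.0 + rng.normal(0.0, 0.2, 10)   # stand-in: one ADC temperature (deg C)

pls = PLSRegression(n_components=3)
pls.fit(X, y)
rmsec = np.sqrt(np.mean((pls.predict(X).ravel() - y) ** 2))          # calibration error

y_cv = cross_val_predict(PLSRegression(n_components=3), X, y, cv=LeaveOneOut()).ravel()
rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))                            # leave-one-out error

print(f"RMSEC  = {rmsec:.2f} deg C")
print(f"RMSECV = {rmsecv:.2f} deg C")
```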
Quaid, Patrick; Simpson, Trefford
2013-01-01
Approximately one in ten students aged 6 to 16 in Ontario (Canada) school boards has an individual education plan (IEP) in place due to various learning disabilities, many of which are specific to reading difficulties. The relationship between reading (specifically objectively determined reading speed and eye movement data), refractive error, and binocular vision related clinical measurements remains elusive. One hundred patients were examined in this study (50 IEP and 50 controls, age range 6 to 16 years). IEP patients were referred by three local school boards, with controls being recruited from the routine clinic population (non-IEP patients in the same age group). A comprehensive eye examination was performed on all subjects, in addition to a full binocular vision work-up and cycloplegic refraction. In addition to the cycloplegic refractive error, the following binocular vision related data were also acquired: vergence facility, vergence amplitudes, accommodative facility, accommodative amplitudes, near point of convergence, stereopsis, and a standardized symptom scoring scale. Both the IEP and control groups were also examined using the Visagraph III system, which permits recording of the following reading parameters objectively: (i) reading speed, both raw values and values compared to grade normative data, and (ii) the number of eye movements made per 100 words read. Comprehension was assessed via a questionnaire administered at the end of the reading task, with each subject requiring 80% or greater comprehension. The IEP group had significantly greater hyperopia compared to the control group on cycloplegic examination. Vergence facility was significantly correlated to (i) reading speed, (ii) number of eye movements made when reading, and (iii) a standardized symptom scoring system. Vergence facility was also significantly reduced in the IEP group versus controls. Significant differences in several other binocular vision related scores were also found. This research indicates there are significant associations between reading speed, refractive error, and in particular vergence facility. It appears sensible that students being considered for reading specific IEP status should have a full eye examination (including cycloplegia), in addition to a comprehensive binocular vision evaluation.
Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
Impact of specific language impairment and type of school on different language subsystems.
Puglisi, Marina Leite; Befi-Lopes, Debora Maria
2016-01-01
This study aimed to explore quantitative and qualitative effects of type of school and specific language impairment (SLI) on different language abilities. A total of 204 Brazilian children aged 4 to 6 years participated in the study. Children were selected to form three groups: 1) 63 typically developing children studying in private schools (TDPri); 2) 102 typically developing children studying in state schools (TDSta); and 3) 39 children with SLI studying in state schools (SLISta). All individuals were assessed regarding expressive vocabulary, number morphology and morphosyntactic comprehension. All language subsystems were vulnerable to both environmental (type of school) and biological (SLI) effects. The relationship between the three language measures was exactly the same for all groups: vocabulary growth correlated with age and with the development of morphological abilities and morphosyntactic comprehension. Children with SLI showed atypical errors in the comprehension test at the age of 4, but presented a pattern of errors that gradually resembled typical development. The effect of type of school was marked by quantitative differences, while the effect of SLI was characterised by both quantitative and qualitative differences.
Sign language ability in young deaf signers predicts comprehension of written sentences in English.
Andrew, Kathy N; Hoshooley, Jennifer; Joanisse, Marc F
2014-01-01
We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into 'skilled' and 'less-skilled' signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the levels of individual words and syntax. This is consistent with the theory that first language learning promotes second language through transference of linguistic elements irrespective of the transparency of mapping of grammatical structures between the two languages.
Sign Language Ability in Young Deaf Signers Predicts Comprehension of Written Sentences in English
Andrew, Kathy N.; Hoshooley, Jennifer; Joanisse, Marc F.
2014-01-01
We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into ‘skilled’ and ‘less-skilled’ signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the levels of individual words and syntax. This is consistent with the theory that first language learning promotes second language through transference of linguistic elements irrespective of the transparency of mapping of grammatical structures between the two languages. PMID:24587174
Comprehension of metaphors and idioms in patients with Alzheimer's disease: a longitudinal study.
Papagno, C
2001-07-01
Language in patients with Alzheimer's disease has been extensively studied, with the exception of non-literal language comprehension. However, everyday speech often makes use of expressions that are not necessarily interpreted literally. Comprehension of metaphors and idioms was examined in 39 patients with probable early Alzheimer's disease. The results showed that the decline of figurative language is not an early symptom of dementia and can occur independently of the impairment of propositional language. It was also found that metaphors and idioms differ in the predominant kind of error made.
Primary care physicians' use of an electronic medical record system: a cognitive task analysis.
Shachak, Aviv; Hadas-Dayagi, Michal; Ziv, Amitai; Reis, Shmuel
2009-03-01
To describe physicians' patterns of using an Electronic Medical Record (EMR) system; to reveal the underlying cognitive elements involved in EMR use, possible resulting errors, and influences on patient-doctor communication; to gain insight into the role of expertise in incorporating EMRs into clinical practice in general and communicative behavior in particular. Cognitive task analysis using semi-structured interviews and field observations. Twenty-five primary care physicians from the northern district of the largest health maintenance organization (HMO) in Israel. The comprehensiveness, organization, and readability of data in the EMR system reduced physicians' need to recall information from memory and the difficulty of reading handwriting. Physicians perceived EMR use as reducing the cognitive load associated with clinical tasks. Automaticity of EMR use contributed to efficiency, but sometimes resulted in errors, such as the selection of incorrect medication or the input of data into the wrong patient's chart. EMR use interfered with patient-doctor communication. The main strategy for overcoming this problem involved separating EMR use from time spent communicating with patients. Computer mastery and enhanced physicians' communication skills also helped. There is a fine balance between the benefits and risks of EMR use. Automaticity, especially in combination with interruptions, emerged as the main cognitive factor contributing to errors. EMR use had a negative influence on communication, a problem that can be partially addressed by improving the spatial organization of physicians' offices and by enhancing physicians' computer and communication skills.
Primary Care Physicians’ Use of an Electronic Medical Record System: A Cognitive Task Analysis
Hadas-Dayagi, Michal; Ziv, Amitai; Reis, Shmuel
2009-01-01
OBJECTIVE To describe physicians’ patterns of using an Electronic Medical Record (EMR) system; to reveal the underlying cognitive elements involved in EMR use, possible resulting errors, and influences on patient–doctor communication; to gain insight into the role of expertise in incorporating EMRs into clinical practice in general and communicative behavior in particular. DESIGN Cognitive task analysis using semi-structured interviews and field observations. PARTICIPANTS Twenty-five primary care physicians from the northern district of the largest health maintenance organization (HMO) in Israel. RESULTS The comprehensiveness, organization, and readability of data in the EMR system reduced physicians’ need to recall information from memory and the difficulty of reading handwriting. Physicians perceived EMR use as reducing the cognitive load associated with clinical tasks. Automaticity of EMR use contributed to efficiency, but sometimes resulted in errors, such as the selection of incorrect medication or the input of data into the wrong patient’s chart. EMR use interfered with patient–doctor communication. The main strategy for overcoming this problem involved separating EMR use from time spent communicating with patients. Computer mastery and enhanced physicians’ communication skills also helped. CONCLUSIONS There is a fine balance between the benefits and risks of EMR use. Automaticity, especially in combination with interruptions, emerged as the main cognitive factor contributing to errors. EMR use had a negative influence on communication, a problem that can be partially addressed by improving the spatial organization of physicians’ offices and by enhancing physicians’ computer and communication skills. PMID:19130148
Abdel Massih, M; Planchon, V; Polet, M; Dierick, K; Mahillon, J
2016-02-01
Based on the results of 19 food microbiology proficiency testing (PT) schemes, this study aimed to assess laboratory performance, to highlight the main sources of unsatisfactory analytical results and to suggest areas of improvement. The 2009-2015 results of REQUASUD and IPH PT, involving a total of 48 laboratories, were analysed. On average, the laboratories failed to detect or enumerate foodborne pathogens in 3·0% of the tests. Thanks to a close collaboration with the PT participants, the causes of outliers could be identified in 74% of the cases. The main causes of erroneous PT results were either pre-analytical (handling of the samples, timing of analysis), analytical (unsuitable methods, confusion of samples, errors in colony counting or confirmation) or postanalytical mistakes (calculation and encoding of results). PT schemes are a privileged observation post to highlight analytical problems, which would otherwise remain unnoticed. From this perspective, this comprehensive study of PT results provides insight into the sources of systematic errors encountered during the analyses. This study draws the attention of the laboratories to the main causes of analytical errors and suggests practical solutions to avoid them, for educational purposes. The observations support the hypothesis that regular participation in PT, when followed by feedback and appropriate corrective actions, can play a key role in quality improvement and provide more confidence in laboratory testing results. © 2015 The Society for Applied Microbiology.
Steady-state low thermal resistance characterization apparatus: The bulk thermal tester
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burg, Brian R.; Kolly, Manuel; Blasakis, Nicolas
The reliability of microelectronic devices is largely dependent on electronic packaging, which includes heat removal. The appropriate packaging design therefore necessitates precise knowledge of the relevant material properties, including thermal resistance and thermal conductivity. Thin materials and high conductivity layers make their thermal characterization challenging. A steady state measurement technique is presented and evaluated with the purpose of characterizing samples with a thermal resistance below 100 mm² K/W. It is based on the heat flow meter bar approach made up of two copper blocks and relies exclusively on temperature measurements from thermocouples. The importance of thermocouple calibration is emphasized in order to obtain accurate temperature readings. An in-depth error analysis, based on Gaussian error propagation, is carried out. An error sensitivity analysis highlights the importance of precise knowledge of the thermal interface materials required for the measurements. Reference measurements on Mo samples reveal a measurement uncertainty in the range of 5%, and the most accurate measurements are obtained at high heat fluxes. Measurement techniques for homogeneous bulk samples, layered materials, and protruding cavity samples are discussed. Ultimately, a comprehensive overview of a steady state thermal characterization technique is provided, evaluating the accuracy of sample measurements with thermal resistances well below state-of-the-art setups. Accurate characterization of materials used in heat removal applications, such as electronic packaging, will enable more efficient designs and ultimately contribute to energy savings.
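The Gaussian error propagation mentioned above can be illustrated with a minimal sketch for a heat-flow-meter-style measurement, where the sample's areal thermal resistance is inferred from the sample temperature drop and a heat flux deduced from the gradient in the copper bars. The relation, the uncertainty values, and the operating point below are illustrative assumptions, not the apparatus' actual calibration data.

```python
import numpy as np

# Illustrative heat-flow-meter relations: q = k_cu * dT_bar / L  (heat flux, W/m^2)
#                                         R = dT_sample / q      (areal resistance, m^2*K/W)
k_cu, L = 390.0, 0.02            # copper conductivity (W/m/K), gradient length (m)
dT_bar, s_dT_bar = 4.0, 0.05     # bar temperature drop and its uncertainty (K)
dT_s, s_dT_s = 0.60, 0.05        # sample temperature drop and its uncertainty (K)
s_k_rel, s_L_rel = 0.01, 0.005   # relative uncertainties of k_cu and L

q = k_cu * dT_bar / L
R = dT_s / q                     # m^2*K/W

# Gaussian propagation for a pure product/quotient: relative errors add in quadrature.
rel = np.sqrt((s_dT_s / dT_s) ** 2 + (s_dT_bar / dT_bar) ** 2 + s_k_rel ** 2 + s_L_rel ** 2)
print(f"R = {R*1e6:.1f} +/- {R*rel*1e6:.1f} mm^2*K/W  ({rel:.1%})")
```

With these numbers the sample temperature drop dominates the error budget, which is why the result improves at high heat fluxes, where that drop (and hence its relative uncertainty) is larger relative to the thermocouple noise.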
Dynamics of early planetary gear trains
NASA Technical Reports Server (NTRS)
August, R.; Kasuba, R.; Frater, J. L.; Pintz, A.
1984-01-01
A method to analyze the static and dynamic loads in a planetary gear train was developed. A variable-variable mesh stiffness (VVMS) model was used to simulate the external and internal spur gear mesh behavior, and an equivalent conventional gear train concept was adapted for the dynamic studies. The analysis can be applied to either involute or noninvolute spur gearing. By utilizing the equivalent gear train concept, the developed method may be extended for use with all types of epicyclic gearing. The method is incorporated into a computer program so that the static and dynamic behavior of individual components can be examined. Items considered in the analysis are: (1) static and dynamic load sharing among the planets; (2) floating or fixed Sun gear; (3) actual tooth geometry, including errors and modifications; (4) positioning errors of the planet gears; (5) torque variations due to noninvolute gear action. A mathematical model comprising a power source, load, and planetary transmission is used to determine the instantaneous loads to which the components are subjected. It considers fluctuating output torque, elastic behavior in the system, and loss of contact between gear teeth. The dynamic model has nine degrees of freedom, resulting in a set of simultaneous second-order differential equations with time-varying coefficients, which are solved numerically. The computer program was used to determine the effect of manufacturing errors, damping and component stiffness, and transmitted load on dynamic behavior. It is indicated that this methodology offers the designer/analyst a comprehensive tool with which planetary drives may be quickly and effectively evaluated.
Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter
2010-12-01
Machine learning and statistical model based classifiers have increasingly been used with more complex and high dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive, under investigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray based data characteristics on the predictive performance for various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change and correlation between biomarkers. Optimal number of biomarkers for a classification problem should therefore be estimated taking account of the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble that of simulated data with corresponding levels of data characteristics. An R package optBiomarker implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).
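The kind of simulation study described above can be sketched in a few lines: generate two-class data with a given between-class effect, vary the training set size, and record the average generalization error of a classifier over replicates. This is only a schematic Python analogue of the workflow; the optBiomarker R package itself implements a much richer factorial design over variability, replication, fold change and correlation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
n_features, effect, n_test, n_rep = 20, 0.8, 400, 30

def simulate(n_per_class):
    """One replicate: train k-NN on n_per_class samples/class, return test error."""
    def draw(n):
        x0 = rng.normal(0.0, 1.0, (n, n_features))        # class 0
        x1 = rng.normal(effect, 1.0, (n, n_features))      # class 1, shifted by 'effect'
        return np.vstack([x0, x1]), np.r_[np.zeros(n), np.ones(n)]
    Xtr, ytr = draw(n_per_class)
    Xte, yte = draw(n_test)
    clf = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
    return 1.0 - clf.score(Xte, yte)

for n in (5, 10, 20, 50):
    err = np.mean([simulate(n) for _ in range(n_rep)])
    print(f"training size {2*n:3d}: mean generalization error = {err:.3f}")
```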
NASA Astrophysics Data System (ADS)
Zhang, Yu; Zhao, Jiyun; Wang, Peng; Skyllas-Kazacos, Maria; Xiong, Binyu; Badrinarayanan, Rajagopalan
2015-09-01
Electrical equivalent circuit models demonstrate excellent adaptability and simplicity in predicting the electrical dynamic response of the all-vanadium redox flow battery (VRB) system. However, only a few publications that focus on this topic are available. The paper presents a comprehensive equivalent circuit model of the VRB for system-level analysis. The least-squares method is used to identify both steady-state and dynamic characteristics of the VRB. The inherent features of the flow battery, such as shunt current, ion diffusion and pumping energy consumption, are also considered. The proposed model consists of an open-circuit voltage source, two parasitic shunt bypass circuits, a 1st-order resistor-capacitor network and a hydraulic circuit model. Validated with experimental data, the proposed model demonstrates excellent accuracy. The mean errors of terminal voltage and pump consumption are 0.09 V and 0.49 W, respectively. Based on the proposed model, self-discharge and system efficiency are studied. An optimal flow rate that maximizes the system efficiency is identified. Finally, the dynamic responses of the proposed VRB model under step current profiles are presented. Variables such as SOC and stack terminal voltage can be provided.
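A stripped-down version of such an equivalent circuit (an open-circuit voltage source in series with an ohmic resistance and one RC branch, plus coulomb counting for state of charge) is sketched below. The parameter values and the Nernst-style open-circuit-voltage expression are illustrative assumptions, and the paper's shunt-current and hydraulic sub-models are omitted.

```python
import numpy as np

# Illustrative single-cell VRB equivalent circuit: V = OCV(SOC) - I*R0 - V_rc
R0, R1, C1 = 0.002, 0.001, 500.0     # ohms, ohms, farads (illustrative)
capacity_As = 3600.0 * 10.0          # usable charge in ampere-seconds (illustrative)

def ocv(soc, E0=1.40, T=298.15):
    """Nernst-style open-circuit voltage per cell (illustrative form)."""
    soc = np.clip(soc, 1e-3, 1 - 1e-3)
    return E0 + 2 * 8.314 * T / 96485.0 * np.log(soc / (1.0 - soc))

def step(soc, v_rc, current, dt):
    """Advance SOC and the RC branch one time step; positive current = discharge."""
    soc -= current * dt / capacity_As
    v_rc += dt * (current / C1 - v_rc / (R1 * C1))
    v_term = ocv(soc) - current * R0 - v_rc
    return soc, v_rc, v_term

soc, v_rc = 0.8, 0.0
for k in range(600):                              # 10 minutes of 40 A discharge, 1 s steps
    soc, v_rc, v = step(soc, v_rc, 40.0, 1.0)
print(f"SOC = {soc:.3f}, terminal voltage = {v:.3f} V")
```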
Gender Agreement Attraction in Russian: Production and Comprehension Evidence
Slioussar, Natalia; Malko, Anton
2016-01-01
Agreement attraction errors (such as the number error in the example “The key to the cabinets are rusty”) have been the object of many studies in the last 20 years. So far, almost all production experiments and all comprehension experiments looked at binary features (primarily at number in Germanic, Romance, and some other languages, in several cases at gender in Romance languages). Among other things, it was noted that both in production and in comprehension, attraction effects are much stronger for some feature combinations than for the others: they can be observed in the sentences with singular heads and plural dependent nouns (e.g.,“The key to the cabinets…”), but not in the sentences with plural heads and singular dependent nouns (e.g., “The keys to the cabinet…”). Almost all proposed explanations of this asymmetry appeal to feature markedness, but existing findings do not allow teasing different approaches to markedness apart. We report the results of four experiments (one on production and three on comprehension) studying subject-verb gender agreement in Russian, a language with three genders. Firstly, we found attraction effects both in production and in comprehension, but, unlike in the case of number agreement, they were not parallel (in production, feminine gender triggered strongest effects, while neuter triggered weakest effects, while in comprehension, masculine triggered weakest effects). Secondly, in the comprehension experiments attraction was observed for all dependent noun genders, but only for a subset of head noun genders. This goes against the traditional assumption that the features of the dependent noun are crucial for attraction, showing the features of the head are more important. We demonstrate that this approach can be extended to previous findings on attraction and that there exists other evidence for it. In total, these findings let us reconsider the question which properties of features are crucial for agreement attraction in production and in comprehension. PMID:27867365
Gender Agreement Attraction in Russian: Production and Comprehension Evidence.
Slioussar, Natalia; Malko, Anton
2016-01-01
Agreement attraction errors (such as the number error in the example "The key to the cabinets are rusty") have been the object of many studies in the last 20 years. So far, almost all production experiments and all comprehension experiments looked at binary features (primarily at number in Germanic, Romance, and some other languages, in several cases at gender in Romance languages). Among other things, it was noted that both in production and in comprehension, attraction effects are much stronger for some feature combinations than for the others: they can be observed in the sentences with singular heads and plural dependent nouns (e.g.,"The key to the cabinets…"), but not in the sentences with plural heads and singular dependent nouns (e.g., "The keys to the cabinet…"). Almost all proposed explanations of this asymmetry appeal to feature markedness, but existing findings do not allow teasing different approaches to markedness apart. We report the results of four experiments (one on production and three on comprehension) studying subject-verb gender agreement in Russian, a language with three genders. Firstly, we found attraction effects both in production and in comprehension, but, unlike in the case of number agreement, they were not parallel (in production, feminine gender triggered strongest effects, while neuter triggered weakest effects, while in comprehension, masculine triggered weakest effects). Secondly, in the comprehension experiments attraction was observed for all dependent noun genders, but only for a subset of head noun genders. This goes against the traditional assumption that the features of the dependent noun are crucial for attraction, showing the features of the head are more important. We demonstrate that this approach can be extended to previous findings on attraction and that there exists other evidence for it. In total, these findings let us reconsider the question which properties of features are crucial for agreement attraction in production and in comprehension.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
A Comprehensive Revision of the Logistics Planning Exercise (Log-Plan-X).
1981-06-01
teaching objectives. The difference between conventional teaching methods and simulation rests in the fact that most conventional techniques focus on... Trial-and-error systems in real life can be very costly. Simulations can be an efficient and effective alternative to such trial and error methods by allowing
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.
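For readers unfamiliar with the RPC formulation being refined here, a rational polynomial camera model maps normalized ground coordinates to normalized image coordinates as a ratio of two cubic polynomials. The sketch below evaluates such a ratio for one image coordinate; the 20-term ordering and the coefficient values are illustrative (actual RPC files fix a specific term order and supply the offsets and scales used for normalization).

```python
import numpy as np

def poly20(c, P, L, H):
    """Generic 20-term cubic polynomial in normalized lat (P), lon (L), height (H)."""
    terms = np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3,
    ])
    return float(np.dot(c, terms))

def rpc_row(num_coef, den_coef, lat, lon, h, offsets, scales):
    """Normalized row coordinate = P_num / P_den; offsets/scales do the (de)normalization."""
    P = (lat - offsets["lat"]) / scales["lat"]
    L = (lon - offsets["lon"]) / scales["lon"]
    H = (h - offsets["h"]) / scales["h"]
    r_norm = poly20(num_coef, P, L, H) / poly20(den_coef, P, L, H)
    return r_norm * scales["row"] + offsets["row"]

# Purely illustrative coefficients and normalization parameters (not from any real sensor).
rng = np.random.default_rng(6)
num = rng.normal(0, 0.01, 20); num[0], num[2] = 0.0, 1.0     # row driven mostly by latitude
den = np.zeros(20); den[0] = 1.0
offsets = {"lat": 30.0, "lon": 114.0, "h": 50.0, "row": 5000.0}
scales = {"lat": 0.1, "lon": 0.1, "h": 500.0, "row": 5000.0}

print(round(rpc_row(num, den, 30.02, 114.01, 80.0, offsets, scales), 1))
```

Refinement in the sense of the article amounts to correcting the underlying Range-Doppler geometry (range delay, azimuth timing) before fitting such coefficients, so the refined RPC inherits the improved absolute geolocation.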
FREIGHT CONTAINER LIFTING STANDARD
DOE Office of Scientific and Technical Information (OSTI.GOV)
POWERS DJ; SCOTT MA; MACKEY TC
2010-01-13
This standard details the correct methods of lifting and handling Series 1 freight containers following ISO-3874 and ISO-1496. The changes within RPP-40736 allow better reading comprehension and correct editorial errors.
Parental language and dosing errors after discharge from the pediatric emergency department.
Samuels-Kalow, Margaret E; Stack, Anne M; Porter, Stephen C
2013-09-01
Safe and effective care after discharge requires parental education in the pediatric emergency department (ED). Parent-provider communication may be more difficult with parents who have limited health literacy or English-language fluency. This study examined the relationship between language and discharge comprehension regarding medication dosing. We completed a prospective observational study of the ED discharge process using a convenience sample of English- and Spanish-speaking parents of children 2 to 24 months presenting to a single tertiary care pediatric ED with fever and/or respiratory illness. A bilingual research assistant interviewed parents to ascertain their primary language and health literacy and observed the discharge process. The primary outcome was parental demonstration of an incorrect dose of acetaminophen for the weight of his or her child. A total of 259 parent-child dyads were screened. There were 210 potential discharges, and 145 (69%) of 210 completed the postdischarge interview. Forty-six parents (32%) had an acetaminophen dosing error. Spanish-speaking parents were significantly more likely to have a dosing error (odds ratio, 3.7; 95% confidence interval, 1.6-8.1), even after adjustment for language of discharge, income, and parental health literacy (adjusted odds ratio, 6.7; 95% confidence interval, 1.4-31.7). Current ED discharge communication results in a significant disparity between English- and Spanish-speaking parents' comprehension of a crucial aspect of medication safety. These differences were not explained purely by interpretation, suggesting that interventions to improve comprehension must address factors beyond language alone.
Sources of Error and the Statistical Formulation of M_S:m_b Seismic Event Screening Analysis
NASA Astrophysics Data System (ADS)
Anderson, D. N.; Patton, H. J.; Taylor, S. R.; Bonner, J. L.; Selby, N. D.
2014-03-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global ban on nuclear explosions, is currently in a ratification phase. Under the CTBT, an International Monitoring System (IMS) of seismic, hydroacoustic, infrasonic and radionuclide sensors is operational, and the data from the IMS are analysed by the International Data Centre (IDC). The IDC provides CTBT signatories basic seismic event parameters and a screening analysis indicating whether an event exhibits explosion characteristics (for example, shallow depth). An important component of the screening analysis is a statistical test of the null hypothesis H_0: explosion characteristics, using empirical measurements of seismic energy (magnitudes). The established magnitude used for event size is the body-wave magnitude (denoted m_b), computed from the initial segment of a seismic waveform. IDC screening analysis is applied to events with m_b greater than 3.5. The Rayleigh wave magnitude (denoted M_S) is a measure of later-arriving surface wave energy. Magnitudes are measurements of seismic energy that include adjustments (physical correction model) for path and distance effects between event and station. Relative to m_b, earthquakes generally have a larger M_S magnitude than explosions. This article proposes a hypothesis test (screening analysis) using M_S and m_b that expressly accounts for physical correction model inadequacy in the standard error of the test statistic. With this hypothesis test formulation, the announced 2009 nuclear weapon test of the Democratic People's Republic of Korea fails to reject the null hypothesis H_0: explosion characteristics.
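To make the statistical formulation concrete, the toy sketch below performs a one-sided test of H_0: explosion using a network-averaged M_S − m_b difference, with a standard error that combines measurement scatter and an extra model-inadequacy term. The screening offset, the variance components, and the event values are all hypothetical; the actual IDC screening statistic and the paper's error model are more elaborate.

```python
import math
from statistics import NormalDist

def ms_mb_screen(ms, mb, n_sta, offset=1.25, s_meas=0.35, s_model=0.10, alpha=0.05):
    """One-sided z-type test of H0: explosion-like, based on D = M_S - m_b.

    Large positive D (relative to the hypothetical explosion offset) is
    earthquake-like and rejects H0. All numerical values are illustrative.
    """
    d = ms - mb
    se = math.sqrt(s_meas**2 / n_sta + s_model**2)   # measurement + model-inadequacy error
    z = (d + offset) / se                            # distance above the explosion baseline
    p = 1.0 - NormalDist().cdf(z)                    # P(D this large or larger | H0)
    return z, p, p < alpha                           # True -> screened out (not explosion-like)

z, p, screened = ms_mb_screen(ms=3.6, mb=4.5, n_sta=12)
print(f"z = {z:.2f}, p = {p:.3f}, screened out: {screened}")
```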
Comprehension deficits among older patients in a quick diagnostic unit
Hvidt, Lisa Nebelin; Hvidt, Kristian Nebelin; Madsen, Kim; Schmidt, Thomas A
2014-01-01
Background Higher prevalence of multiple illnesses and cognitive impairment among older patients pose a risk of comprehension difficulties, potentially leading to medication errors. Therefore, the objective of this study was to investigate comprehension of discharge instructions among older patients admitted to a Quick Diagnostic Unit (QDU). Methods One hundred and two patients discharged from the QDU answered a questionnaire covering understanding of their hospitalization and discharge plan. Patients’ ability to recall discharge instructions and awareness of comprehension deficits, ie, ability to identify the misconceived information, were evaluated by comparing the questionnaires with the discharge letters. The population was divided into an older group (age ≥65 years) and a younger group. Results The older group (n=40) was less able to recall correct medication instructions when compared to the younger group (54% versus 78%, respectively; P=0.02). In multiple logistic regression analysis, correct recall of medication instructions was 4.2 times higher for the younger group compared to the older group (odds ratio 4.2, 95% confidence interval 1.5–11.9, P=0.007) when adjusted for sex and education. The older patients were less aware of their own comprehension deficits, and in respect to medication instructions awareness decreased 6.1% for each additional year of age (odds ratio 0.939, 95% confidence interval 0.904–0.98, P=0.001) when adjusted for sex and education. Conclusion Older patients were less able to recall correct medication instructions and less aware of their comprehension deficits after discharge from a QDU. The findings of the present study emphasize the importance of thorough communication and follow-up when treating older patients. PMID:24790423
Incorporating ethics into your comprehensive organizational plan.
Oetjen, Dawn; Rotarius, Timothy
2005-01-01
Today's health care executives find their organizations facing internal and external environments that are behaving in chaotic and unpredictable ways. From inadequate staffing and an increase in clinical errors to outdated risk management procedures and increased competition for scarce reimbursements, these health care managers find themselves making decisions without being fully informed of the ethical ramifications of these decisions. A 6-part Comprehensive Organizational Plan is presented that helps the health care decision maker better understand the key success factors for the organization. The Comprehensive Organizational Plan is an overall plan that is intended to protect and serve your organization. The 6 plans in the Comprehensive Organizational Plan cover the following areas: competition, facilities, finances, human resources, information management, and marketing. The comprehensive organizational plan includes an overlay of the ethical considerations for each part of the plan.
Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.
2004-07-01
This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the affects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
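The dispersive behaviour discussed in this abstract can be reproduced in a few lines: for the 1-D advection equation, substituting a Fourier mode into a semi-discretization gives a modified wavenumber, and the ratio of modified to exact wavenumber is the non-dimensional phase speed. The sketch below does this for second-order central differences and for the consistent-mass linear Galerkin operator, under the standard single-mode analysis assumptions (uniform grid, exact time integration).

```python
import numpy as np

kh = np.linspace(0.05, np.pi * 0.95, 8)        # non-dimensional wavenumber k*h

# Modified wavenumber k*h for the first-derivative operator of each scheme:
kh_fd = np.sin(kh)                              # 2nd-order central finite difference
kh_fe = 3.0 * np.sin(kh) / (2.0 + np.cos(kh))   # linear Galerkin FEM, consistent mass

for x, cfd, cfe in zip(kh, kh_fd / kh, kh_fe / kh):
    # ratios below 1 mean that wavelength is advected too slowly (lagging phase)
    print(f"kh = {x:4.2f}   phase speed ratio: FD = {cfd:.3f}   FEM(consistent) = {cfe:.3f}")
```

The consistent-mass Galerkin ratio stays much closer to unity across the resolved range, which is the super-convergent phase behaviour the paper attributes to the consistent mass matrix.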
The development of a public optometry system in Mozambique: a Cost Benefit Analysis.
Thompson, Stephen; Naidoo, Kovin; Harris, Geoff; Bilotto, Luigi; Ferrão, Jorge; Loughman, James
2014-09-23
The economic burden of uncorrected refractive error (URE) is thought to be high in Mozambique, largely as a consequence of the lack of resources and systems to tackle this largely avoidable problem. The Mozambique Eyecare Project (MEP) has established the first optometry training and human resource deployment initiative to address the burden of URE in Lusophone Africa. The nature of the MEP programme provides the opportunity to determine, using Cost Benefit Analysis (CBA), whether investing in the establishment and delivery of a comprehensive system for optometry human resource development and public sector deployment is economically justifiable for Lusophone Africa. A CBA methodology was applied across the period 2009-2049. Costs associated with establishing and operating a school of optometry, and a programme to address uncorrected refractive error, were included. Benefits were calculated using a human capital approach to valuing sight. Disability weightings from the Global Burden of Disease study were applied. Costs were subtracted from benefits to provide the net societal benefit, which was discounted to provide the net present value using a 3% discount rate. Using the most recently published disability weightings, the potential exists, through the correction of URE in 24.3 million potentially economically productive persons, to achieve a net present value societal benefit of up to $1.1 billion by 2049, at a Benefit-Cost ratio of 14:1. When CBA assumptions are varied as part of the sensitivity analysis, the results suggest the societal benefit could lie in the range of $649 million to $9.6 billion by 2049. This study demonstrates that a programme designed to address the burden of refractive error in Mozambique is economically justifiable in terms of the increased productivity that would result due to its implementation.
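The arithmetic behind the headline numbers is a standard discounted cost-benefit calculation: annual benefits minus costs, discounted at 3%, summed over the analysis horizon, with the benefit-cost ratio computed from the discounted streams. The sketch below shows the mechanics on invented annual figures; the study's actual cost and productivity streams are not reproduced here.

```python
# Illustrative net-present-value and benefit-cost calculation (41-year horizon, 3% discount).
rate, years = 0.03, range(0, 41)

# Hypothetical annual streams in millions of dollars (not the study's figures).
costs = [5.0 if t < 5 else 2.0 for t in years]                       # set-up phase, then operations
benefits = [0.0 if t < 3 else 4.0 + 1.5 * (t - 3) for t in years]    # benefits ramp up over time

disc_benefits = sum(b / (1 + rate) ** t for t, b in zip(years, benefits))
disc_costs = sum(c / (1 + rate) ** t for t, c in zip(years, costs))

print(f"present value of benefits: {disc_benefits:8.1f} M$")
print(f"present value of costs:    {disc_costs:8.1f} M$")
print(f"net present value:         {disc_benefits - disc_costs:8.1f} M$")
print(f"benefit-cost ratio:        {disc_benefits / disc_costs:8.1f}")
```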
Causes of vision loss worldwide, 1990-2010: a systematic analysis.
Bourne, Rupert R A; Stevens, Gretchen A; White, Richard A; Smith, Jennifer L; Flaxman, Seth R; Price, Holly; Jonas, Jost B; Keeffe, Jill; Leasher, Janet; Naidoo, Kovin; Pesudovs, Konrad; Resnikoff, Serge; Taylor, Hugh R
2013-12-01
Data on causes of vision impairment and blindness are important for development of public health policies, but comprehensive analysis of change in prevalence over time is lacking. We did a systematic analysis of published and unpublished data on the causes of blindness (visual acuity in the better eye less than 3/60) and moderate and severe vision impairment ([MSVI] visual acuity in the better eye less than 6/18 but at least 3/60) from 1980 to 2012. We estimated the proportions of overall vision impairment attributable to cataract, glaucoma, macular degeneration, diabetic retinopathy, trachoma, and uncorrected refractive error in 1990-2010 by age, geographical region, and year. In 2010, 65% (95% uncertainty interval [UI] 61-68) of 32·4 million blind people and 76% (73-79) of 191 million people with MSVI worldwide had a preventable or treatable cause, compared with 68% (95% UI 65-70) of 31·8 million and 80% (78-83) of 172 million in 1990. Leading causes worldwide in 1990 and 2010 for blindness were cataract (39% and 33%, respectively), uncorrected refractive error (20% and 21%), and macular degeneration (5% and 7%), and for MSVI were uncorrected refractive error (51% and 53%), cataract (26% and 18%), and macular degeneration (2% and 3%). Causes of blindness varied substantially by region. Worldwide and in all regions more women than men were blind or had MSVI due to cataract and macular degeneration. The differences and temporal changes we found in causes of blindness and MSVI have implications for planning and resource allocation in eye care. Bill & Melinda Gates Foundation, Fight for Sight, Fred Hollows Foundation, and Brien Holden Vision Institute. Copyright © 2013 Bourne et al. Open Access article distributed under the terms of CC BY.
Antshel, Kevin M.; Hier, Bridget O.; Fremont, Wanda; Faraone, Stephen V.; Kates, Wendy R.
2015-01-01
Background The primary objective of the current study was to examine the childhood predictors of adolescent reading comprehension in velo-cardio-facial syndrome (VCFS). Although much research has focused on mathematics skills among individuals with VCFS, no studies have examined predictors of reading comprehension. Methods 69 late adolescents with VCFS, 23 siblings of youth with VCFS, and 30 community controls participated in a longitudinal research project and had repeat neuropsychological test batteries and psychiatric evaluations every 3 years. The Wechsler Individual Achievement Test – 2nd edition (WIAT-II) Reading Comprehension subtest served as our primary outcome variable. Results Consistent with previous research, children and adolescents with VCFS had mean reading comprehension scores on the WIAT-II that were approximately two standard deviations below the mean and word reading scores approximately one standard deviation below the mean. A more novel finding is that, relative to both control groups, individuals with VCFS demonstrated a longitudinal decline in reading comprehension abilities yet a slight increase in word reading abilities. In the combined control sample, WISC-III FSIQ, WIAT-II Word Reading, WISC-III Vocabulary and CVLT-C List A Trial 1 accounted for 75% of the variance in Time 3 WIAT-II Reading Comprehension scores. In the VCFS sample, WISC-III FSIQ, BASC-Teacher Aggression, CVLT-C Intrusions, Tower of London, Visual Span Backwards, WCST non-perseverative errors, WIAT-II Word Reading and WISC-III Freedom from Distractibility index accounted for 85% of the variance in Time 3 WIAT-II Reading Comprehension scores. A principal component analysis with promax rotation computed on the statistically significant Time 1 predictor variables in the VCFS sample resulted in three factors: Word Reading Decoding/Interference Control, Self-Control/Self-Monitoring, and Working Memory. Conclusions Childhood predictors of late adolescent reading comprehension in VCFS differ in some meaningful ways from predictors in the non-VCFS population. These results offer some guidance on how best to target intervention efforts to improve reading comprehension in the VCFS population. PMID:24861691
NASA Astrophysics Data System (ADS)
Gerck, Ed
We present a new, comprehensive framework to qualitatively improve election outcome trustworthiness, where voting is modeled as an information transfer process. Although voting is deterministic (all ballots are counted), information is treated stochastically using Information Theory. Error considerations, including faults, attacks, and threats by adversaries, are explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with a provably optimal design that is applicable to any type of voting, with or without ballots. Sixteen voting system requirements, including functional, performance, environmental and non-functional considerations, are derived and rated, meeting or exceeding current public-election requirements. The voter and the vote are unlinkable (secret ballot) although each is identifiable. The Witness-Voting System (Gerck, 2001) is extended as a conforming implementation of the provably optimal design that is error-free, transparent, simple, scalable, robust, receipt-free, universally-verifiable, 100% voter-verified, and end-to-end audited.
NASA Astrophysics Data System (ADS)
Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.
2017-06-01
In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to the radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s-1 m-2) and very weak winds (0.1 m s-1). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between reinforcing fabric and fiber cable had a small effect on fiber temperatures (< 0.18 K). Only for locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed. Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature as it minimizes both radiation and conduction errors.
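A rough sketch of the kind of steady-state energy balance that produces a radiation error estimate, reduced here to absorbed shortwave radiation balanced against convective exchange for a thin cylindrical cable; the fiber diameter, albedo, air properties, and the crude Nusselt-number estimate are all assumptions, and the longwave and conductive terms of the full model are omitted.

```python
# Rough steady-state estimate of the radiation error of a fiber-optic cable:
# absorbed shortwave radiation balanced by convective exchange with the air.
# All coefficients are illustrative assumptions, not the paper's calibrated values,
# and the longwave/conductive terms of the full energy balance are omitted here.
import math

def radiation_error(S, wind, albedo=0.3, diameter=1.6e-3, nu_air=1.5e-5, k_air=0.026):
    """Approximate fiber-air temperature difference (K) for shortwave flux S (W m-2)."""
    re = wind * diameter / nu_air                 # Reynolds number of the cross-flow
    nu = 0.3 + 0.62 * math.sqrt(re)               # crude forced-convection Nusselt estimate
    h_conv = nu * k_air / diameter                # convective transfer coefficient (W m-2 K-1)
    absorbed = (1.0 - albedo) * S * diameter      # absorbed shortwave per unit length (W m-1)
    return absorbed / (h_conv * math.pi * diameter)

print(radiation_error(S=1000.0, wind=0.1))   # strong sun, very weak wind -> large error
print(radiation_error(S=1000.0, wind=3.0))   # same sun, moderate wind    -> small error
```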
Nesvizhskii, Alexey I.
2010-01-01
This manuscript provides a comprehensive review of the peptide and protein identification process using tandem mass spectrometry (MS/MS) data generated in shotgun proteomic experiments. The commonly used methods for assigning peptide sequences to MS/MS spectra are critically discussed and compared, from basic strategies to advanced multi-stage approaches. Particular attention is paid to the problem of false-positive identifications. Existing statistical approaches for assessing the significance of peptide to spectrum matches are surveyed, ranging from single-spectrum approaches such as expectation values to global error rate estimation procedures such as false discovery rates and posterior probabilities. The importance of using auxiliary discriminant information (mass accuracy, peptide separation coordinates, digestion properties, etc.) is discussed, and advanced computational approaches for joint modeling of multiple sources of information are presented. This review also includes a detailed analysis of the issues affecting the interpretation of data at the protein level, including the amplification of error rates when going from peptide to protein level, and the ambiguities in inferring the identities of sample proteins in the presence of shared peptides. Commonly used methods for computing protein-level confidence scores are discussed in detail. The review concludes with a discussion of several outstanding computational issues. PMID:20816881
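One of the global error-rate procedures mentioned above can be sketched with a target-decoy false discovery rate estimate: the FDR among matches above a score threshold is approximated by the ratio of decoy to target hits. The score distributions below are synthetic placeholders, not real search-engine output.

```python
import numpy as np

# Minimal target-decoy FDR sketch: estimate the false discovery rate among
# peptide-spectrum matches scoring above a threshold as (#decoy hits)/(#target hits).
# The scores below are synthetic placeholders, not real search-engine output.
rng = np.random.default_rng(0)
target_scores = np.concatenate([rng.normal(5, 1, 800),    # mostly correct matches
                                rng.normal(2, 1, 200)])   # plus some incorrect ones
decoy_scores = rng.normal(2, 1, 1000)                     # incorrect by construction

def fdr_at(threshold):
    n_target = np.sum(target_scores >= threshold)
    n_decoy = np.sum(decoy_scores >= threshold)
    return n_decoy / max(n_target, 1)

for thr in (2.0, 3.0, 4.0):
    print(f"score >= {thr}: estimated FDR = {fdr_at(thr):.3f}")
```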
Riga, Marina; Vozikis, Athanassios; Pollalis, Yannis; Souliotis, Kyriakos
2015-04-01
The economic crisis in Greece makes it necessary to resolve problems concerning both spiralling costs and quality assurance in the health system. The detection and analysis of patient adverse events and medical errors are considered crucial elements of this effort. The implementation of MERIS comprises a mandatory module, which adopts the trigger tool methodology for measuring adverse events and medical errors in an intensive care unit (ICU) environment, and a voluntary module based on web-based public reporting. A pilot implementation of MERIS running in a public hospital identified 35 adverse events, with approximately 12 additional hospital days and an extra healthcare cost of €12,000 per adverse event, or about €312,000 per annum for ICU costs alone. At the same time, the voluntary module collected 510 reports on adverse events submitted by citizens or patients. MERIS has been evaluated as a comprehensive and effective system; it succeeded in detecting the main factors that cause adverse events and disclosed severe omissions of the Greek health system. MERIS may be incorporated and run efficiently nationally, adapted to the needs and peculiarities of each hospital or clinic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Using heuristic evaluations to assess the safety of health information systems.
Carvalho, Christopher J; Borycki, Elizabeth M; Kushniruk, Andre W
2009-01-01
Health information systems (HISs) are typically seen as a mechanism for reducing medical errors. There is, however, evidence that technology itself may be a cause of errors. As a result, it is crucial to fully test any system prior to its implementation. At present, evidence-based evaluation heuristics do not exist for assessing aspects of interface design that lead to medical errors. A three-phase study was conducted to develop evidence-based heuristics for evaluating interfaces. Phase 1 consisted of a systematic review of the literature. In Phase 2, a comprehensive list of 33 evaluation heuristics that could be used to test for potential technology-induced errors was developed based on the review. Phase 3 involved applying these healthcare-specific heuristics to evaluate an HIS.
Grid convergence errors in hemodynamic solution of patient-specific cerebral aneurysms.
Hodis, Simona; Uthamaraj, Susheil; Smith, Andrea L; Dennis, Kendall D; Kallmes, David F; Dragomir-Daescu, Dan
2012-11-15
Computational fluid dynamics (CFD) has become a cutting-edge tool for investigating hemodynamic dysfunctions in the body. It has the potential to help physicians quantify in more detail the phenomena difficult to capture with in vivo imaging techniques. CFD simulations in anatomically realistic geometries pose challenges in generating accurate solutions due to the grid distortion that may occur when the grid is aligned with complex geometries. In addition, results obtained with computational methods should be trusted only after the solution has been verified on multiple high-quality grids. The objective of this study was to present a comprehensive solution verification of the intra-aneurysmal flow results obtained on different morphologies of patient-specific cerebral aneurysms. We chose five patient-specific brain aneurysm models with different dome morphologies and estimated the grid convergence errors for each model. The grid convergence errors were estimated with respect to an extrapolated solution based on the Richardson extrapolation method, which accounts for the degree of grid refinement. For four of the five models, calculated velocity, pressure, and wall shear stress values at six different spatial locations converged monotonically, with maximum uncertainty magnitudes ranging from 12% to 16% on the finest grids. Due to the geometric complexity of the fifth model, the grid convergence errors showed oscillatory behavior; therefore, each patient-specific model required its own grid convergence study to establish the accuracy of the analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
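A minimal sketch of the Richardson-extrapolation bookkeeping used in such a grid convergence study, for one scalar quantity computed on three systematically refined grids; the sample values and refinement ratio are placeholders, not results from the aneurysm models.

```python
import math

# Richardson-extrapolation sketch for a quantity computed on three grids with a
# constant refinement ratio r (coarse -> medium -> fine).  The sample values are
# illustrative placeholders only.
f_coarse, f_medium, f_fine = 1.250, 1.100, 1.040   # e.g. a velocity magnitude
r = 2.0                                            # grid refinement ratio

p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)  # observed order
f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)                  # extrapolated value
gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r ** p - 1.0)     # grid convergence index

print(f"observed order of accuracy p = {p:.2f}")
print(f"extrapolated (grid-converged) value = {f_exact:.4f}")
print(f"fine-grid uncertainty (GCI) = {100 * gci_fine:.1f}%")
```

When the observed order p is far from the formal order of the scheme, or the differences between successive grids change sign, the extrapolated value and the uncertainty estimate are not trustworthy, which is the oscillatory situation reported for the fifth model.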
[Preserved ability to read aloud kanji idioms in left handed alexia].
Suzuki, Taemi; Suzuki, Kyoko; Iizuka, Osamu; Endo, Keiko; Yamadori, Atushi; Mori, Eturou
2004-08-01
We report a 69-year-old left-handed man who developed alexia after a right medial occipito-temporal lobe infarction. On admission to the rehabilitation department two months after onset, neurological examination showed left hemianopia, left hemiparesis, decreased deep sensation on the left side, and alexia. A brain MRI demonstrated infarcts in the right medial occipito-temporal lobe and the splenium of the corpus callosum. Detailed neuropsychological examination was performed two months after onset. The patient was alert and cooperative. His speech was fluent with some word-finding difficulty. Comprehension of spoken materials, repetition, and naming abilities were all preserved. Systematic examination of reading revealed that reading aloud was impaired for both kanji and kana words. Reading comprehension was significantly better for kanji words than for kana words. First, we examined the effect of the number of characters in a word; word length did not affect his reading performance. Second, his performance in reading aloud usual kanji words was compared with that for kanji words representing idioms. A kanji idiom differs from usual kanji words, for which the pronunciation of each character must be selected from several options. Reading aloud was significantly better for kanji idioms than for usual kanji words. In addition, reaction time to complete reading a word was much shorter for kanji idioms than for usual kanji. An analysis of the qualitative features of errors revealed that most errors in kanji idiom reading were semantically similar to the correct answers, while many errors in usual kanji word reading were classified as "don't know" responses. These findings suggested that a kanji idiom is tightly connected to its pronunciation, which resulted in his much better performance in kanji idiom reading. Overlearning of a unique relationship between a kanji idiom and its pronunciation might modify neuronal organization for reading.
Guidelines and recommendations for household and external travel surveys.
DOT National Transportation Integrated Search
2010-03-01
The Texas Department of Transportation has a comprehensive ongoing travel survey program. Research under RMC : 0-5711 examined areas within two select travel surveys concerning quality control issues involved in data collection : and sampling error i...
Improving accuracy in household and external travel surveys.
DOT National Transportation Integrated Search
2010-01-01
The Texas Department of Transportation has a comprehensive on-going travel survey program. This research examines areas within two select travel surveys concerning quality control issues involved in data collection and sampling error in the data caus...
Regional Brain Dysfunction Associated with Semantic Errors in Comprehension.
Shahid, Hinna; Sebastian, Rajani; Tippett, Donna C; Saxena, Sadhvi; Wright, Amy; Hanayik, Taylor; Breining, Bonnie; Bonilha, Leonardo; Fridriksson, Julius; Rorden, Chris; Hillis, Argye E
2018-02-01
Here we illustrate how investigation of individuals acutely after stroke, before structure/function reorganization through recovery or rehabilitation, can be helpful in answering questions about the role of specific brain regions in language functions. Although there is converging evidence from a variety of sources that the left posterior-superior temporal gyrus plays some role in spoken word comprehension, its precise role in this function has not been established. We hypothesized that this region is essential for distinguishing between semantically related words, because it is critical for linking the spoken word to the complete semantic representation. We tested this hypothesis in 127 individuals within 48 hours of acute ischemic stroke, before the opportunity for reorganization or recovery. We identified tissue dysfunction (acute infarct and/or hypoperfusion) in gray and white matter parcels of the left hemisphere, and we evaluated the association between the rate of semantic errors in a word-picture verification task and the extent of tissue dysfunction in each region. We found that, after correcting for lesion volume and multiple comparisons, the rate of semantic errors correlated with the extent of tissue dysfunction in the left posterior-superior temporal gyrus and retrolenticular white matter.
Nour-Eldein, Hebatallah
2016-01-01
Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. The aims were to determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, deficiencies included no prior sample size calculation in 19/60 (31.7%), application of the wrong statistical test in 17/60 (28.3%), incomplete documentation of statistics in 59/60 (98.3%), reporting of P values without test statistics in 32/60 (53.3%), no reporting of confidence intervals with effect size measures in 12/60 (20.0%), use of the mean (standard deviation) to describe ordinal or non-normal data in 8/60 (13.3%), and interpretation errors, mainly conclusions unsupported by the study data, in 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting of statistics are areas for improvement in FM research articles.
Land Surface Temperature Measurements from EOS MODIS Data
NASA Technical Reports Server (NTRS)
Wan, Zhengming
1996-01-01
We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials in wide ranges of atmospheric and surface temperature conditions. Comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NE(Delta)T) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4 and 0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2 and 3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration and in surface emissivity changes caused by dew occurrence, and the cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data in two field campaigns conducted in Railroad Valley playa, NV, in 1995 and 1996. The MODIS LST version 1 software has been delivered.
Systematic reviews, systematic error and the acquisition of clinical knowledge
2010-01-01
Background Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather, depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematic reviewing of clinical trials. Systematic reviews constitute comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources. Instead, amongst these other types it functions as an integrated element during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity it informs the analytic knowledge of the clinician but does not replace it. PMID:20537172
Beanland, Vanessa; Sellbom, Martin; Johnson, Alexandria K
2014-11-01
Personality traits are meaningful predictors of many significant life outcomes, including mortality. Several studies have investigated the relationship between specific personality traits and driving behaviours, e.g., aggression and speeding, in an attempt to identify traits associated with elevated crash risk. These studies, while valuable, are limited in that they examine only a narrow range of personality constructs and thus do not necessarily reveal which traits in constellation best predict aberrant driving behaviours. The primary aim of this study was to use a comprehensive measure of personality to investigate which personality traits are most predictive of four types of aberrant driving behaviour (Aggressive Violations, Ordinary Violations, Errors, Lapses) as indicated by the Manchester Driver Behaviour Questionnaire (DBQ). We recruited 285 young adults (67% female) from a university in the southeastern US. They completed self-report questionnaires including the DBQ and the Personality Inventory for DSM-5, which indexes 5 broad personality domains (Antagonism, Detachment, Disinhibition, Negative Affectivity, Psychoticism) and 25 specific trait facets. Confirmatory factor analysis showed adequate evidence for the DBQ internal structure. Structural regression analyses revealed that the personality domains of Antagonism and Negative Affectivity best predicted both Aggressive Violations and Ordinary Violations, whereas the best predictors of both Errors and Lapses were Negative Affectivity, Disinhibition and to a lesser extent Antagonism. A more nuanced analysis of trait facets revealed that Hostility was the best predictor of Aggressive Violations; Risk-taking and Hostility of Ordinary Violations; Irresponsibility, Separation Insecurity and Attention Seeking of Errors; and Perseveration and Irresponsibility of Lapses. Copyright © 2014 Elsevier Ltd. All rights reserved.
Men, Hong; Fu, Songlin; Yang, Jialin; Cheng, Meiqi; Shi, Yan; Liu, Jingjing
2018-01-18
Paraffin odor intensity is an important quality indicator when a paraffin inspection is performed. Currently, paraffin odor level assessment depends mainly on human sensory evaluation. In this paper, we developed a paraffin odor analysis system to classify and grade four kinds of paraffin samples. The original feature set was optimized using Principal Component Analysis (PCA) and Partial Least Squares (PLS). Support Vector Machine (SVM), Random Forest (RF), and Extreme Learning Machine (ELM) models were applied to three different feature data sets for classification and level assessment of paraffin. For classification, the model based on SVM, with an accuracy rate of 100%, was superior to those based on RF, with an accuracy rate of 98.33-100%, and ELM, with an accuracy rate of 98.01-100%. For level assessment, the R² for the training set was above 0.97 and the R² for the test set was above 0.87. Through comprehensive comparison, the generalization of the model based on ELM was superior to those based on SVM and RF. The scoring errors of the three models were 0.0016-0.3494, lower than the 0.5-1.0 error of expert sensory assessment under the industry standard, indicating that these methods offer higher prediction accuracy for scoring paraffin level.
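A minimal scikit-learn sketch of the feature-reduction-plus-classifier pipeline described above (PCA followed by an SVM); the synthetic feature matrix stands in for the odor-sensor data, and the class structure, component count, and SVM settings are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the sensor feature matrix: 4 paraffin classes, 40 samples each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(40, 16)) for i in range(4)])
y = np.repeat(np.arange(4), 40)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature optimization with PCA followed by an SVM classifier, mirroring the pipeline above.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```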
Natsopoulos, D; Kiosseoglou, G; Xeromeritou, A; Alevriadou, A
1998-09-01
Two hundred seventy school-age children, 135 of them left-handed and an equal number right-handed, were examined in the present study using a test battery of nine language ability measures: Vocabulary, Similarities, Comprehension (WISC-R), Deductive Reasoning, Inductive Reasoning, Sentence Completion, Comprehension of Sentential Semantics, Comprehension of Syntax, and Text Processing. The data analysis indicated that: (1) a one-factor solution applies to both the right- and left-handed populations for the language ability measures, according to the Standard Error Scree Method (Zoski & Jurs, 1996); (2) handedness discriminates between right-handers (superior) and left-handers (inferior) in language ability; (3) according to Hierarchical Cluster Analysis, there are subgroups of left-handed children whose language ability distribution differs from that of right-handed children; (4) extreme versus mild bias in hand preference and hand skill does not differentiate performance subgroups within either the left-handed or the right-handed main group; (5) sex and familial sinistrality do not affect performance. The results are discussed in relation to (a) the "human balanced polymorphism" theory advocated by Annett (mainly Annett, 1985, 1993a; Annett & Manning, 1989), (b) potential pathology (mainly Bishop, 1984, 1990a; Coren & Halpern, 1991; Satz, Orsini, Saslow & Henry, 1985) and "developmental instability" (Yeo, Gangestad & Daniel, 1993), and (c) delay of left-hemisphere maturation in left-handed individuals (Geschwind & Galaburda, 1985a,b, 1987), by pointing out the strengths and weaknesses of these theoretical approaches in accounting for the present data. Copyright 1998 Academic Press.
Najat, Dereen
2017-01-01
Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The proportion of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.
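A minimal sketch of the two-proportion (proportional) Z test used to compare error frequencies between labs; the counts are placeholders, not the survey data.

```python
import math

# Two-proportion z test of the kind used to compare error frequencies between labs.
# The counts below are illustrative placeholders, not the Sulaimani survey data.
def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p_value

z, p = two_proportion_z(x1=54, n1=600, x2=33, n2=550)   # e.g. hemolyzed samples in two labs
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```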
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
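A minimal sketch of the ANOVA-based variance component estimation for balanced one-factor data, with patients as the random factor and fractions as repeats; the simulated setup errors and their "true" standard deviations are illustrative assumptions.

```python
import numpy as np

# ANOVA-based variance-component estimate for a balanced one-factor random-effects
# model: patients are the random factor, fractions are repeats.  Data are synthetic.
rng = np.random.default_rng(1)
n_patients, n_fractions = 20, 10
sigma_sys, sigma_rand = 2.0, 3.0                      # "true" values used to simulate (mm)
patient_means = rng.normal(0.0, sigma_sys, n_patients)
setup = patient_means[:, None] + rng.normal(0.0, sigma_rand, (n_patients, n_fractions))

grand_mean = setup.mean()
msb = n_fractions * np.sum((setup.mean(axis=1) - grand_mean) ** 2) / (n_patients - 1)
msw = np.sum((setup - setup.mean(axis=1, keepdims=True)) ** 2) / (n_patients * (n_fractions - 1))

sigma_random = np.sqrt(msw)                                      # random (interfraction) component
sigma_systematic = np.sqrt(max(msb - msw, 0.0) / n_fractions)    # systematic (interpatient) component
print(f"estimated systematic SD = {sigma_systematic:.2f} mm, random SD = {sigma_random:.2f} mm")
```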
Accuracy and Efficiency of Orthoptists in Comprehensive Pediatric Eye Examinations.
Scheetz, Jane; Koklanis, Konstandina; Long, Maureen; Morris, Meg E
2016-01-01
To investigate the level of agreement between orthoptists and medical practitioners in the comprehensive eye examination of children seen in an orthoptist-led triage clinic. Patient records over a 6-month period were retrospectively reviewed. Those with a presenting complaint related to vision or ocular motility were triaged into the orthoptist-led clinic and included in the study. Patients who did not meet the triage protocol and those who were not assessed by a medical practitioner at a subsequent appointment were excluded from analysis. The clinical findings from the orthoptist and medical practitioner were collected and compared. In total, sixty-three patients were reviewed during the 6-month period and met the inclusion criteria. After the initial comprehensive eye examination with an orthoptist, thirty-two were discharged from hospital and thirty-one were asked to return for a review appointment with a medical practitioner. Agreement between the orthoptists and medical practitioners for the diagnosis of strabismus and/or amblyopia was 84.6% (κ = 0.649, P < 0.001). There was strong agreement between orthoptists and medical practitioners for refractive error of the right eye [τ (19) = 0.352, P = 0.729] and left eye [τ (19) = 1.785, P = 0.090]. Fundus examination comparisons between the orthoptists and medical practitioners showed very high agreement (95.7%). Orthoptists have the skills necessary to provide comprehensive care of children referred for ocular motility and/or vision related disorders. There was close agreement between orthoptists and medical practitioners when performing comprehensive eye examinations. © 2016 Board of regents of the University of Wisconsin System, American Orthoptic Journal, Volume 66, 2016, ISSN 0065-955X, E-ISSN 1553-4448.
Mass balance assessment using GPS
NASA Technical Reports Server (NTRS)
Hulbe, Christina L.
1993-01-01
Mass balance is an integral part of any comprehensive glaciological investigation. Unfortunately, it is hard to determine at remote locations where there is no fixed reference. The Global Positioning System (GPS) offers a solution. Simultaneous GPS observations at a known location and the remote field site, processed differentially, will accurately position the camp site. From there, a monument planted in the firn atop the ice can also be accurately positioned. Change in the monument's vertical position is a direct indicator of ice thickness change. Because the monument is not connected to the ice, its motion is due to both mass balance change and to the settling of firn as it densifies into ice. Observations of relative position change between the monument and anchors at various depths within the firn are used to remove the settling effect. An experiment to test this method has begun at Byrd Station on the West Antarctic Ice Sheet and the first epoch of observations was made. Analysis indicates that positioning errors will be very small. It appears likely that the largest errors involved with this technique will arise from ancillary data needed to determine firn settling.
Chapman, Wendy W.; Dowling, John N.
2006-01-01
Evaluating automated indexing applications requires comparing automatically indexed terms against manual reference standard annotations. However, there are no standard guidelines for determining which words from a textual document to include in manual annotations, and the vague task can result in substantial variation among manual indexers. We applied grounded theory to emergency department reports to create an annotation schema representing syntactic and semantic variables that could be annotated when indexing clinical conditions. We describe the annotation schema, which includes variables representing medical concepts (e.g., symptom, demographics), linguistic form (e.g., noun, adjective), and modifier types (e.g., anatomic location, severity). We measured the schema’s quality and found: (1) the schema was comprehensive enough to be applied to 20 unseen reports without changes to the schema; (2) agreement between author annotators applying the schema was high, with an F measure of 93%; and (3) an error analysis showed that the authors made complementary errors when applying the schema, demonstrating that the schema incorporates both linguistic and medical expertise. PMID:16230050
Spectroscopy Made Easy: Evolution
NASA Astrophysics Data System (ADS)
Piskunov, Nikolai; Valenti, Jeff A.
2017-01-01
Context. The Spectroscopy Made Easy (SME) package has become a popular tool for analyzing stellar spectra, often in connection with large surveys or exoplanet research. SME has evolved significantly since it was first described in 1996, but many of the original caveats and potholes still haunt users. The main drivers for this paper are complexity of the modeling task, the large user community, and the massive effort that has gone into SME. Aims: We do not intend to give a comprehensive introduction to stellar atmospheres, but will describe changes to key components of SME: the equation of state, opacities, and radiative transfer. We will describe the analysis and fitting procedure and investigate various error sources that affect inferred parameters. Methods: We review the current status of SME, emphasizing new algorithms and methods. We describe some best practices for using the package, based on lessons learned over two decades of SME usage. We present a new way to assess uncertainties in derived stellar parameters. Results: Improvements made to SME, better line data, and new model atmospheres yield more realistic stellar spectra, but in many cases systematic errors still dominate over measurement uncertainty. Future enhancements are outlined.
Silvetti, Massimo; Alexander, William; Verguts, Tom; Brown, Joshua W
2014-10-01
The role of the medial prefrontal cortex (mPFC) and especially the anterior cingulate cortex has been the subject of intense debate for the last decade. A number of theories have been proposed to account for its function. Broadly speaking, some emphasize cognitive control, whereas others emphasize value processing; specific theories concern reward processing, conflict detection, error monitoring, and volatility detection, among others. Here we survey and evaluate them relative to experimental results from neurophysiological, anatomical, and cognitive studies. We argue for a new conceptualization of mPFC, arising from recent computational modeling work. Based on reinforcement learning theory, these new models propose that mPFC is an Actor-Critic system. This system aims to predict future events, including rewards, to evaluate errors in those predictions, and, finally, to implement optimal skeletal-motor and visceromotor commands to obtain reward. This framework provides a comprehensive account of mPFC function, accounting for and predicting empirical results across different levels of analysis, including monkey neurophysiology, human ERP, human neuroimaging, and human behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.
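The prediction-error signal at the heart of an Actor-Critic account can be sketched with a tabular TD(0) critic on a toy task; the three-state chain, reward placement, and learning parameters below are illustrative assumptions, not a model of the cited studies.

```python
import numpy as np

# Tabular TD(0) critic: the prediction error delta = r + gamma*V(s') - V(s)
# drives learning of reward predictions -- the signal the Actor-Critic account
# attributes to mPFC.  Toy 3-state chain ending in a rewarded terminal state.
n_states, gamma, alpha = 3, 0.9, 0.1
V = np.zeros(n_states)                     # value estimates; the last state is terminal

for episode in range(200):
    s = 0
    while s < n_states - 1:
        s_next = s + 1
        reward = 1.0 if s_next == n_states - 1 else 0.0
        v_next = 0.0 if s_next == n_states - 1 else V[s_next]
        delta = reward + gamma * v_next - V[s]      # prediction error
        V[s] += alpha * delta                       # critic update
        s = s_next

print("learned state values:", np.round(V, 2))      # approx [0.9, 1.0, 0.0]
```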
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana
2011-01-01
Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
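A minimal sketch of the kind of simulation described above: repeatedly meta-analyzing a handful of small trials with no true effect and recording how often the pooled relative risk reduction exceeds 20% by chance. Fixed-effect inverse-variance pooling of log relative risks is assumed here, and the control group risk, trial sizes, and trial count are placeholders.

```python
import numpy as np

# Monte Carlo sketch: with no true effect (RRR = 0%), how often does a small
# cumulative meta-analysis overestimate the pooled RRR beyond 20%?
rng = np.random.default_rng(42)
control_risk, n_per_arm, n_trials, n_sims = 0.10, 100, 5, 5000
overestimates = 0

for _ in range(n_sims):
    log_rr, weights = [], []
    for _ in range(n_trials):
        events_c = rng.binomial(n_per_arm, control_risk)
        events_t = rng.binomial(n_per_arm, control_risk)        # identical true risk in both arms
        if min(events_c, events_t) == 0:                        # skip zero-event trials
            continue
        rr = (events_t / n_per_arm) / (events_c / n_per_arm)
        var = 1 / events_t - 1 / n_per_arm + 1 / events_c - 1 / n_per_arm
        log_rr.append(np.log(rr))
        weights.append(1.0 / var)
    if not log_rr:
        continue
    pooled_rr = np.exp(np.average(log_rr, weights=weights))     # fixed-effect pooled RR
    if 1.0 - pooled_rr > 0.20:                                   # pooled RRR > 20%
        overestimates += 1

print(f"probability of observing RRR > 20% by chance: {overestimates / n_sims:.3f}")
```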
Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing
2017-09-05
Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 as reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.
Cain, Kate
2006-07-01
Three experiments compared the verbal memory skills of children with poor reading comprehension with those of same-age good comprehenders. The aims were to determine if semantic and/or inhibitory deficits explained poor comprehenders' problems on measures of verbal short-term memory and verbal working memory. In Experiment 1 there were no group differences on word- and number-based measures of short-term storage and no evidence that semantic knowledge mediated word recall. In Experiment 2, poor comprehenders were impaired on word- and number-based assessments of working memory, with the greatest deficit found on the word-based task. Error analysis of both word-based tasks revealed that poor comprehenders were more likely to recall items that should have been inhibited than were good comprehenders. Experiment 3 extended this finding: Poor comprehenders were less able to inhibit information that was no longer relevant. Together, these findings suggest that individual differences in inhibitory processing influence the ability to regulate the contents of working memory, which may contribute to the differential memory performance of good and poor comprehenders.
Neuropsychological correlates of sustained attention in schizophrenia.
Chen, E Y; Lam, L C; Chen, R Y; Nguyen, D G; Chan, C K; Wilkins, A J
1997-04-11
We employed a simple and relatively undemanding task of monotone counting for the assessment of sustained attention in schizophrenic patients. The monotone counting task has been validated neuropsychologically and is particularly sensitive to right prefrontal lesions. We compared the performance of schizophrenic patients with that of age- and education-matched controls. We then explored the extent to which a range of commonly employed neuropsychological tasks in schizophrenia research are related to attentional impairment as measured in this way. Monotone counting performance was found to be correlated with digit span (WAIS-R-HK), information (WAIS-R-HK), comprehension (WAIS-R-HK), logical memory (immediate recall) (Wechsler Memory Scale, WMS), and visual reproduction (WMS). Multiple regression analysis also identified visual reproduction, digit span and comprehension as significant predictors of attention performance. In contrast, logical memory (delayed recall) (WMS), similarity (WAIS-R-HK), semantic fluency, and Wisconsin Card Sorting Test (perseverative errors) were not correlated with attention. In addition, no significant correlation between sustained attention and symptoms was found. These findings are discussed in the context of a weakly modular cognitive system where attentional impairment may contribute selectively to a range of other cognitive deficits.
Zook, Justin M.; Samarov, Daniel; McDaniel, Jennifer; Sen, Shurjo K.; Salit, Marc
2012-01-01
While the importance of random sequencing errors decreases at higher DNA or RNA sequencing depths, systematic sequencing errors (SSEs) dominate at high sequencing depths and can be difficult to distinguish from biological variants. These SSEs can cause base quality scores to underestimate the probability of error at certain genomic positions, resulting in false positive variant calls, particularly in mixtures such as samples with RNA editing, tumors, circulating tumor cells, bacteria, mitochondrial heteroplasmy, or pooled DNA. Most algorithms proposed for correction of SSEs require a data set used to calculate association of SSEs with various features in the reads and sequence context. This data set is typically either from a part of the data set being “recalibrated” (Genome Analysis ToolKit, or GATK) or from a separate data set with special characteristics (SysCall). Here, we combine the advantages of these approaches by adding synthetic RNA spike-in standards to human RNA, and use GATK to recalibrate base quality scores with reads mapped to the spike-in standards. Compared to conventional GATK recalibration that uses reads mapped to the genome, spike-ins improve the accuracy of Illumina base quality scores by a mean of 5 Phred-scaled quality score units, and by as much as 13 units at CpG sites. In addition, since the spike-in data used for recalibration are independent of the genome being sequenced, our method allows run-specific recalibration even for the many species without a comprehensive and accurate SNP database. We also use GATK with the spike-in standards to demonstrate that the Illumina RNA sequencing runs overestimate quality scores for AC, CC, GC, GG, and TC dinucleotides, while SOLiD has less dinucleotide SSEs but more SSEs for certain cycles. We conclude that using these DNA and RNA spike-in standards with GATK improves base quality score recalibration. PMID:22859977
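A minimal sketch of the relationship that recalibration relies on: a Phred quality score Q encodes an error probability of 10^(-Q/10), and an empirical quality can be recomputed from mismatches against reads of known truth, such as spike-in standards; the counts below are placeholders.

```python
import math

# Phred quality scores encode error probabilities as Q = -10*log10(p_error).
# Recalibration compares the reported Q with the empirical Q observed against a
# known truth (such as spike-in standards).  Counts below are placeholders.
def phred_to_prob(q):
    return 10.0 ** (-q / 10.0)

def empirical_phred(n_mismatches, n_bases):
    return -10.0 * math.log10(max(n_mismatches, 1) / n_bases)

reported_q = 30                               # machine-reported quality (p = 0.001)
n_bases, n_mismatches = 1_000_000, 5_000      # observed against spike-in truth
observed_q = empirical_phred(n_mismatches, n_bases)

print(f"reported Q{reported_q}: p = {phred_to_prob(reported_q):.4f}")
print(f"empirical Q = {observed_q:.1f}  ->  recalibration shift = {observed_q - reported_q:.1f}")
```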
A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.
Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan
2015-07-17
Continuity, real-time performance, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly degrade the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The method obtains multiple sets of navigation solutions with different phase delays by virtue of a forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of curve fitting based on the least squares method, the forecasted time series is obtained while distinguishing and removing small angular motion interference during initial alignment. Finally, the periodic oscillation errors are restricted based on the principle that averaging a periodic signal with a copy delayed by half its period eliminates the oscillation. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.
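The cancellation principle stated at the end of the abstract can be sketched directly: averaging a periodic oscillation with a copy of itself delayed by half its period removes that oscillation exactly. The Schuler-like signal and its amplitude below are synthetic, not SINS output.

```python
import numpy as np

# Averaging a periodic error with a copy delayed by half its period cancels it:
# sin(w*t) + sin(w*(t - T/2)) = 0.  Synthetic Schuler-like oscillation for illustration.
T_schuler = 84.4 * 60.0                      # Schuler period, seconds
dt = 1.0
t = np.arange(0, 6 * T_schuler, dt)
true_signal = 0.001 * t                      # slowly varying "true" navigation error
oscillation = 0.5 * np.sin(2 * np.pi * t / T_schuler)
measured = true_signal + oscillation

half = int(round(T_schuler / (2 * dt)))      # samples in half a Schuler period
restricted = 0.5 * (measured[half:] + measured[:-half])             # mean of signal and delayed copy
residual = restricted - 0.5 * (true_signal[half:] + true_signal[:-half])

print(f"oscillation std before: {np.std(oscillation):.3f}")
print(f"oscillation std after half-wave-delay averaging: {np.std(residual):.3e}")
```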
Altered Error-Related Activity in Patients with Schizophrenia
ERIC Educational Resources Information Center
Koch, Kathrin; Wagner, Gerd; Schultz, Christoph; Schachtzabel, Claudia; Nenadic, Igor; Axer, Martina; Reichenbach, Jurgen R.; Sauer, Heinrich; Schlosser, Ralf G. M.
2009-01-01
Deficits in working memory (WM) and executive cognitive control are core features of schizophrenia. However, findings regarding functional activation strengths are heterogeneous, partly due to differences in task demands and behavioral performance. Previous investigators proposed integrating these heterogeneous findings into a comprehensive model…
42 CFR § 510.310 - Appeals process.
Code of Federal Regulations, 2010 CFR
2016-10-01
... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS COMPREHENSIVE CARE FOR JOINT REPLACEMENT MODEL Pricing and Payment § 510.310 Appeals process. (a) Notice of calculation error (first level of appeal... dispute the calculation that involves a matter related to payment, reconciliation amounts, repayment...
42 CFR § 510.310 - Appeals process.
Code of Federal Regulations, 2010 CFR
2017-10-01
... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS COMPREHENSIVE CARE FOR JOINT REPLACEMENT MODEL Pricing and Payment § 510.310 Appeals process. (a) Notice of calculation error (first level of appeal... dispute calculations involving a matter related to payment, reconciliation amounts, repayment amounts, the...
Measuring Reading Performance Informally.
ERIC Educational Resources Information Center
Powell, William R.
To improve the accuracy of the informal reading inventory (IRI), a differential set of criteria is necessary for both word recognition and comprehension scores for different levels and reading conditions. In initial evaluation, word recognition scores should reflect only errors of insertions, omissions, mispronunciations, substitutions, unknown…
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
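(As an aside for readers unfamiliar with instrument error budgets: the individual terms are commonly combined in quadrature under an independence assumption. The sketch below illustrates that arithmetic only; the term names and values are hypothetical and are not NEID's actual budget entries.)

```python
import math

# Illustrative (not NEID's actual) error terms, in m/s of radial velocity.
error_terms = {
    "photon_noise": 0.25,
    "wavelength_calibration": 0.10,
    "detector_effects": 0.12,
    "telluric_contamination": 0.08,
    "software_fitting": 0.05,
}

# Root-sum-square combination, assuming independent error sources.
total = math.sqrt(sum(v ** 2 for v in error_terms.values()))
for name, v in sorted(error_terms.items(), key=lambda kv: -kv[1]):
    print(f"{name:>24s}: {v:5.2f} m/s ({100 * v**2 / total**2:4.1f}% of variance)")
print(f"{'combined (RSS)':>24s}: {total:5.2f} m/s")
```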
Isn’t it ironic? Neural Correlates of Irony Comprehension in Schizophrenia
Rapp, Alexander M.; Langohr, Karin; Mutschler, Dorothee E.; Klingberg, Stefan; Wild, Barbara; Erb, Michael
2013-01-01
Ironic remarks are frequent in everyday language and represent an important form of social cognition. Increasing evidence indicates a deficit of irony comprehension in schizophrenia. Several models for defective comprehension have been proposed, including possible roles of the medial prefrontal lobe, default mode network, inferior frontal gyri, mirror neurons, right cerebral hemisphere and a possible mediating role of schizotypal personality traits. We investigated the neural correlates of irony comprehension in schizophrenia by using event-related functional magnetic resonance imaging (fMRI). In a prosody-free reading paradigm, 15 female patients with schizophrenia and 15 healthy female controls silently read ironic and literal text vignettes during fMRI. Each text vignette ended in either an ironic (n = 22) or literal (n = 22) statement. Ironic and literal text vignettes were matched for word frequency, length, grammatical complexity, and syntax. After fMRI, the subjects performed an off-line test to determine error rates, indicating by button press whether the target sentence had ironic, literal, or meaningless content. Schizotypal personality traits were assessed using the German version of the schizotypal personality questionnaire (SPQ). At the behavioural level, patients with schizophrenia made significantly more errors than the controls (correct answers, 85.3% vs. 96.3%). Patients showed attenuated blood oxygen level-dependent (BOLD) response during irony comprehension mainly in right hemisphere temporal regions (ironic>literal contrast) and in posterior medial prefrontal and left anterior insula regions (for ironic>visual baseline, but not for literal>visual baseline). In patients with schizophrenia, the parahippocampal gyrus showed increased activation. Across all subjects, BOLD response in the medial prefrontal area was negatively correlated with the SPQ score. These results highlight the role of the posterior medial prefrontal and right temporal regions in defective irony comprehension in schizophrenia and the mediating role of schizotypal personality traits. PMID:24040207
Han, Doo Hee; Won, Tae-Bin; Kim, Dong-Young; Kim, Jeong-Whun
2014-01-01
Background It is well known that pediatric allergic rhinitis is associated with poor performance at school due to attention deficit. However, there were no cohort studies on the effect of treatment of allergic rhinitis on attention performance in the pediatric population. Thus, the aim of this study was to investigate whether attention performance improved after treatment in children with allergic rhinitis. Methods In this ARCO-Kids study (Allergic Rhinitis Cohort Study for Kids), consecutive pediatric patients with rhinitis symptoms underwent a skin prick test and a computerized comprehensive attention test. According to the skin prick test results, the children were diagnosed with allergic rhinitis or non-allergic rhinitis. All of the patients were regularly followed up and treated with oral medication or intranasal corticosteroid sprays. The comprehensive attention tests consisted of sustained and divided attention tasks. Each task was assessed by an attention score calculated from the number of omission and commission errors. The comprehensive attention test was repeated after 1 year. Results A total of 797 children with allergic rhinitis and 239 children with non-allergic rhinitis were included. Initially, the attention scores for omission and commission errors on the divided attention task were significantly lower in children with allergic rhinitis than in children with non-allergic rhinitis. After 1 year of treatment, children with allergic rhinitis showed improvement in attention: commission error scores on the sustained (95.6±17.0 vs 97.0±16.6) and divided (99.1±15.8 vs 91.8±23.5) attention tasks. Meanwhile, there was no significant difference in attention scores in children with non-allergic rhinitis. Conclusions Our study showed that management of allergic rhinitis might be associated with improvement of attention. PMID:25330316
NASA Astrophysics Data System (ADS)
Kuzucu, H.
1992-11-01
Modern defense systems depend on comprehensive surveillance capability. The ability to detect and locate radio signals is a major element of a surveillance system. With the increasing need for more mobile surveillance systems, the rapid deployment of forces, and the advent of technology allowing enhanced use of small-aperture systems, tactical direction finding (DF) and radiolocation systems will have to be operated in diverse operational conditions. Quickly assessing the expected error levels and evaluating the reliability of fixes on the targeted areas is crucial to the effectiveness of missions that rely on DF data. This paper presents a sophisticated, graphics-workstation-based computer tool developed for the system-level analysis of radio communication systems and describes its use in radiolocation applications for making such accurate and realistic assessments with substantial savings in money and time.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is calculated in the same way as the previous method. Generally, a small number of arithmetic operations, and hence a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
[Re-operations in patients with heart wounds].
Radchenko, Yu A; Abakumov, M M; Vladimirova, E S; Pogodina, A N; Nikitina, O V
To define the risk factors for complications that lead to re-operations in patients with cardiac and pericardial wounds, and to prevent these complications. A retrospective and prospective analysis of 1072 victims with cardiac and pericardial injuries over 35 years was performed. Overall mortality was 17.2%; 98 patients died during surgery. Postoperative bleeding was observed in 38 (3.9%) cases. In 28 cases re-operations were performed for bleeding-related complications. Indications for re-thoracotomy were one-time drainage from the pleural cavity over 500 ml or a bleeding rate over 100 ml per hour for 4 hours. A strategy for the prevention of postoperative bleeding in cardiac and pericardial wounds was developed on the basis of an analysis of these observations. Risk factors for complications requiring re-operation are cardiomyopathy of different etiologies, technical and tactical errors during the primary intervention, and hypocoagulation with massive blood loss. Prevention of these complications includes careful heart wound closure, comprehensive intraoperative control, and correction of the hemostatic system.
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William
2017-09-01
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
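(The heteroscedastic, correlated, Gaussian error model mentioned above can be written, under common assumptions, as an AR(1)-correlated likelihood whose residual spread grows with the predicted flux. The sketch below is a generic illustration with made-up parameter values, not the DALEC/DREAM implementation.)

```python
import numpy as np

def log_likelihood(obs, pred, sigma0, sigma_slope, phi):
    """Gaussian log-likelihood with heteroscedastic, AR(1)-correlated residuals.

    sigma0, sigma_slope: residual std dev grows linearly with |prediction|
    phi: lag-1 autocorrelation of residuals (|phi| < 1)
    """
    resid = obs - pred
    sigma = sigma0 + sigma_slope * np.abs(pred)   # heteroscedastic spread
    # Whiten the AR(1) correlation: e_t = r_t - phi * r_{t-1}
    # (exact for constant sigma, an approximation when sigma varies slowly).
    innov = resid.copy()
    innov[1:] = resid[1:] - phi * resid[:-1]
    scale = sigma.copy()
    scale[1:] = sigma[1:] * np.sqrt(1.0 - phi ** 2)
    return np.sum(-0.5 * (innov / scale) ** 2 - np.log(scale * np.sqrt(2 * np.pi)))

# Toy usage with synthetic daily NEE-like data.
rng = np.random.default_rng(0)
pred = np.sin(np.linspace(0, 6 * np.pi, 365)) * 3.0
obs = pred + rng.normal(0, 0.5, size=pred.size)
print(log_likelihood(obs, pred, sigma0=0.3, sigma_slope=0.1, phi=0.4))
```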
Toledo Piza, Carolina M. J.; de Macedo, Elizeu C.; Miranda, Monica C.; Bueno, Orlando F. A.
2014-01-01
The analysis of cognitive processes underpinning reading and writing skills may help to distinguish different reading ability profiles. The present study used a Brazilian reading and writing battery to compare performance of students with dyslexia with two individually matched control groups: one contrasting on reading competence but not age and the other contrasting on age but not reading competence. Participants were 28 individuals with dyslexia (19 boys) with a mean age of 9.82 (SD ± 1.44) drawn from public and private schools. These were matched to: (1) an age control group (AC) of 26 good readers with a mean age of 9.77 (SD ± 1.44) matched by age, sex, years of schooling, and type of school; (2) a reading control group (RC) of 28 younger controls with a mean age of 7.82 (SD ± 1.06) matched by sex, type of school, and reading level. All groups were tested on four tasks from the Brazilian Reading and Writing Assessment battery (“BALE”): Written Sentence Comprehension Test (WSCT); Spoken Sentence Comprehension Test (OSCT); Picture-Print Writing Test (PPWT 1.1-Writing); and the Reading Competence Test (RCT). These tasks evaluate reading and listening comprehension for sentences, spelling, and reading isolated words and pseudowords (non-words). The dyslexia group scored lower and took longer to complete tasks than the AC group. Compared with the RC group, there were no differences in total scores on reading or oral comprehension tasks. However, dyslexics presented slower reading speeds, longer completion times, and lower scores on spelling tasks, even compared with younger controls. Analysis of types of errors on word and pseudoword reading items showed students with dyslexia scoring lower for pseudoword reading than the other two groups. These findings suggest that the dyslexics' overall scores were similar to those of younger readers. However, specific phonological and visual decoding deficits showed that the two groups differ in terms of underpinning reading strategies. PMID:25132829
Kiefl, Johannes; Cordero, Chiara; Nicolotti, Luca; Schieberle, Peter; Reichenbach, Stephen E; Bicchi, Carlo
2012-06-22
The continuous interest in non-targeted profiling induced the development of tools for automated cross-sample analysis. Such tools were found to be either selective or not comprehensive, thus delivering a biased view of the qualitative/quantitative peak distribution across 2D sample chromatograms. Therefore, the performance of non-targeted approaches needs to be critically evaluated. This study focused on the development of a validation procedure for non-targeted, peak-based, GC×GC-MS data profiling. The procedure introduced performance parameters such as specificity, precision, accuracy, and uncertainty for a profiling method known as Comprehensive Template Matching. The performance was assessed by applying a three-week validation protocol based on CITAC/EURACHEM guidelines. Optimized ¹D and ²D retention-time search windows, MS match factor threshold, detection threshold, and template threshold were evolved from two training sets by a semi-automated learning process. The effectiveness of the proposed settings in consistently matching 2D peak patterns was established by evaluating the rate of mismatched peaks and was expressed in terms of result accuracy. The study utilized 23 different 2D peak patterns providing the chemical fingerprints of raw and roasted hazelnuts (Corylus avellana L.) from different geographical origins, of diverse varieties and different roasting degrees. The validation results show that non-targeted peak-based profiling can be reliable with error rates lower than 10% independent of the degree of analytical variance. The optimized Comprehensive Template Matching procedure was employed to study hazelnut roasting profiles and in particular to find marker compounds strongly dependent on the thermal treatment, and to establish the correlation of potential marker compounds to geographical origin and variety/cultivar and finally to reveal the characteristic release of aroma active compounds. Copyright © 2012 Elsevier B.V. All rights reserved.
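(To illustrate the kind of matching step such a procedure performs, the sketch below pairs template peaks with sample peaks using ¹D/²D retention-time windows and a spectral similarity threshold. It uses a simple cosine similarity and invented peak data, and is not the actual Comprehensive Template Matching implementation.)

```python
import numpy as np

def match_peaks(template, sample, rt1_win=0.10, rt2_win=0.05, ms_threshold=0.80):
    """Match template peaks to sample peaks using 1D/2D retention-time search
    windows and a spectral match-factor threshold (cosine similarity here)."""
    matches = {}
    for name, (rt1, rt2, spectrum) in template.items():
        best, best_score = None, ms_threshold
        for j, (s_rt1, s_rt2, s_spec) in enumerate(sample):
            if abs(rt1 - s_rt1) <= rt1_win and abs(rt2 - s_rt2) <= rt2_win:
                score = np.dot(spectrum, s_spec) / (
                    np.linalg.norm(spectrum) * np.linalg.norm(s_spec))
                if score >= best_score:
                    best, best_score = j, score
        matches[name] = best          # None -> counted as a mismatched peak
    return matches

# Toy template and sample (retention times in min/s, 4-channel "spectra").
template = {"filbertone": (12.30, 1.40, np.array([10., 4., 1., 0.]))}
sample = [(12.34, 1.42, np.array([9., 4., 1., 0.])),
          (15.00, 2.00, np.array([1., 9., 3., 0.]))]
print(match_peaks(template, sample))
```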
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With constrained budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied by a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic for analytically estimating the uncertainty in a CFD model when experimental data are unavailable.
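(The statistical core of such an approach, forming a Student-t confidence interval from a small set of perturbed-input CFD results, can be sketched as follows; the heat transfer coefficients are invented for illustration and this is not the authors' code.)

```python
import math
from statistics import mean, stdev
from scipy import stats

# Hypothetical heat transfer coefficients (W/m^2/K) from CFD runs in which
# each input was perturbed by its tolerance or bias error.
h_runs = [52.1, 49.8, 51.3, 50.6, 53.0, 48.9]

n = len(h_runs)
h_bar = mean(h_runs)
s = stdev(h_runs)                       # sample standard deviation
t_val = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% Student-t quantile
half_width = t_val * s / math.sqrt(n)   # half-width of the confidence interval

print(f"h = {h_bar:.1f} +/- {half_width:.1f} W/m^2/K (95% CI, n={n})")
```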
Coupling detrended fluctuation analysis of Asian stock markets
NASA Astrophysics Data System (ADS)
Wang, Qizhen; Zhu, Yingming; Yang, Liansheng; Mul, Remco A. H.
2017-04-01
This paper uses the coupling detrended fluctuation analysis (CDFA) method to investigate the multifractal characteristics of four Asian stock markets using three stock indices: stock price returns, trading volumes and the composite index. The results show that coupled correlations exist among the four stock markets and the coupled correlations have multifractal characteristics. We then use the chi-square (χ²) test to identify the sources of multifractality. For the different stock indices, the contributions of a single series to multifractality are different. In other words, the contributions of each country to coupled correlations are different. The comparative analysis shows that research on the combined effect of stock price returns and trading volumes may be more comprehensive than research on an individual index. By comparing the strength of multifractality for original data with the residual errors of the vector autoregression (VAR) model, we find that the VAR model could not be used to describe the dynamics of the coupled correlations among four financial time series.
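(For orientation, standard single-series detrended fluctuation analysis integrates the series, detrends it within windows of increasing size, and reads a scaling exponent off a log-log fit; the coupling variant used above generalizes this to pairs of series. A minimal single-series sketch, with an arbitrary choice of window sizes:)

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128), order=1):
    """Standard DFA: return the scaling exponent of the fluctuation function."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated, mean-centered series
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)   # local polynomial trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    # Slope of log F(s) vs log s is the DFA exponent (about 0.5 for white noise).
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
print(round(dfa_exponent(rng.normal(size=4096)), 2))   # ~0.5 for uncorrelated returns
```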
R classes and methods for SNP array data.
Scharpf, Robert B; Ruczinski, Ingo
2010-01-01
The Bioconductor project is an "open source and open development software project for the analysis and comprehension of genomic data" (1), primarily based on the R programming language. Infrastructure packages, such as Biobase, are maintained by Bioconductor core developers and serve several key roles to the broader community of Bioconductor software developers and users. In particular, Biobase introduces an S4 class, the eSet, for high-dimensional assay data. Encapsulating the assay data as well as meta-data on the samples, features, and experiment in the eSet class definition ensures propagation of the relevant sample and feature meta-data throughout an analysis. Extending the eSet class promotes code reuse through inheritance as well as interoperability with other R packages and is less error-prone. Recently proposed class definitions for high-throughput SNP arrays extend the eSet class. This chapter highlights the advantages of adopting and extending Biobase class definitions through a working example of one implementation of classes for the analysis of high-throughput SNP arrays.
Narrative comprehension and production in children with SLI: An eye movement study
ANDREU, LLORENÇ; SANZ-TORRENT, MONICA; OLMOS, JOAN GUÀRDIA; MACWHINNEY, BRIAN
2014-01-01
This study investigates narrative comprehension and production in children with specific language impairment (SLI). Twelve children with SLI (mean age 5;8 years) and 12 typically developing children (mean age 5;6 years) participated in an eye-tracking experiment designed to investigate online narrative comprehension and production in Catalan- and Spanish-speaking children with SLI. The comprehension task involved the recording of eye movements during the visual exploration of successive scenes in a story, while listening to the associated narrative. With regard to production, the children were asked to retell the story, while once again looking at the scenes, as their eye movements were monitored. During narrative production, children with SLI look at the most semantically relevant areas of the scenes fewer times than their age-matched controls, but no differences were found in narrative comprehension. Moreover, the analyses of speech productions revealed that children with SLI retained less information and made more semantic and syntactic errors during retelling. Implications for theories that characterize SLI are discussed. PMID:21453036
The effects of age on symbol comprehension in central rail hubs in Taiwan.
Liu, Yung-Ching; Ho, Chin-Heng
2012-11-01
The purpose of this study was to investigate the effects of age and symbol design features on passengers' comprehension of symbols and the performance of these symbols with regard to route guidance. In the first experiment, 30 young participants and 30 elderly participants interpreted the meanings and rated the features of 39 symbols. Researchers collected data on each subject's comprehension time, comprehension score, and feature ratings for each symbol. In the second experiment, this study used a series of photos to simulate scenarios in which passengers follow symbols to arrive at their destinations. The length of time each participant required to follow his/her route and his/her errors were recorded. Older adults experienced greater difficulty in understanding particular symbols as compared to younger adults. Familiarity was the feature most highly correlated with comprehension of symbols and accuracy of semantic depiction was the best predictor of behavior in following routes. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Solving subsurface structural problems using a computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, D.M.
1987-02-01
Until recently, the solution of subsurface structural problems has required a combination of graphical construction, trigonometry, time, and patience. Recent advances in software available for both mainframe and microcomputers now reduce the time and potential error of these calculations by an order of magnitude. Software for analysis of deviated wells, three point problems, apparent dip, apparent thickness, and the intersection of two planes, as well as the plotting and interpretation of these data can be used to allow timely and accurate exploration or operational decisions. The available computer software provides a set of utilities, or tools, rather than a comprehensive, intelligent system. The burden for selection of appropriate techniques, computation methods, and interpretations still lies with the explorationist user.
Quantifying Error in Survey Measures of School and Classroom Environments
ERIC Educational Resources Information Center
Schweig, Jonathan David
2014-01-01
Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…
A European Navy: Can it Complete European Political and Economic Integration?
2012-06-01
would enjoy a comprehensive and complete defense structure, perhaps even more effective and cost-efficient than is possible with the current lineup ... malfunctions or operator error. A unified and unifying European Navy would standardize basic equipment and procedures, facilitating effective exercises
A Model of the Acoustic Interactions Occurring Under Arctic Ice
1990-05-22
agreement at angles near ecrit. Finally, there is undoubtedly some error in the collected data, as temperature variations were not accounted for ... acoustic attenuation in various media will supplement the overall comprehension of reflection and transmission phenomena as well. Continued collection of
Software for marine ecological environment comprehensive monitoring system based on MCGS
NASA Astrophysics Data System (ADS)
Wang, X. H.; Ma, R.; Cao, X.; Cao, L.; Chu, D. Z.; Zhang, L.; Zhang, T. P.
2017-08-01
The automatic integrated monitoring software for the marine ecological environment, based on MCGS configuration software, is designed and developed to realize real-time automatic monitoring of many marine ecological parameters. The DTU data transmission terminal performs network communication and transmits the data to the user data center in a timely manner. The software adopts a modular design and has the advantages of a stable and flexible data structure, strong portability and scalability, a clear interface, simple user operation, and convenient maintenance. A continuous six-month site comparison test showed that the relative error of the parameters monitored by the system, such as temperature, salinity, turbidity, pH, and dissolved oxygen, was within 5% of the standard method, and the relative error of the nutrient parameters was within 15%. Meanwhile, the system required little maintenance, had a low failure rate, and provided stable and efficient continuous monitoring. The field application shows that the software is stable and the data communication is reliable, and it has a good application prospect in the field of comprehensive marine ecological environment monitoring.
Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software
NASA Astrophysics Data System (ADS)
Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg
2017-09-01
Protocol processing for 100 Gbit/s wireless communication stresses all parts of a communication system to their limits. The efficient use of upcoming transmission technology at 100 Gbit/s and beyond requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we present an ultra-low-power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with very low protocol processing overhead.
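(The abstract does not specify the FEC scheme; as a generic illustration of how forward error correction detects and corrects single bit errors, the sketch below implements a textbook Hamming(7,4) code and is not the End2End100 design.)

```python
import numpy as np

# Hamming(7,4): systematic generator and parity-check matrices over GF(2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(code7):
    code7 = np.array(code7).copy()
    syndrome = (H @ code7) % 2
    if syndrome.any():
        # A single-bit error produces a syndrome equal to that bit's column of H.
        err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        code7[err_pos] ^= 1
    return code7[:4]                      # data bits occupy the first 4 positions

word = [1, 0, 1, 1]
tx = encode(word)
rx = tx.copy(); rx[5] ^= 1                # single bit error on the channel
print(list(decode(rx)) == word)           # True: the error was corrected
```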
Aphasia in a prelingually deaf woman.
Chiarello, C; Knight, R; Mandel, M
1982-03-01
A left parietal infarct in a prelingually deaf person resulted in an aphasia for both American Sign Language (ASL) and written and finger-spelled English. Originally the patient had a nearly global aphasia affecting all language systems. By five to seven weeks post-onset her symptoms resembled those of hearing aphasics with posterior lesions: fluent but paraphasic signing, anomia, impaired comprehension and repetition, alexia, and agraphia with elements of neologistic jargon. In addition, there was a pronounced sequential movement copying disorder, reduced short-term verbal memory and acalculia. In general, the patient's sign errors showed a consistent disruption in the structure of ASL signs which parallels the speech errors of oral aphasic patients. We conclude that most aphasic symptoms are not modality-dependent, but rather reflect a disruption of linguistic processes common to all human languages. This case confirms the importance of the left hemisphere in the processing of sign language. Furthermore, the results indicate that the left supramarginal and angular gyri are necessary substrates for the comprehension of visual/gestural languages.
Jackson, Jonathan D.; Balota, David A.
2011-01-01
One mechanism that has been hypothesized to contribute to older adults’ changes in cognitive performance is goal neglect or impairment in maintaining task set across time. Mind-wandering and task-unrelated thought may underlie these potential age-related changes. The present study investigated age-related changes in mind-wandering in three different versions of the Sustained Attention to Response task (SART), along with self-reported mind-wandering during a reading for comprehension task. In the SART, both younger and older adults produced similar levels of faster reaction times before No-Go errors of commission, whereas, older adults produced disproportionate post-error slowing. Subjective self-reports of mind-wandering recorded during the SART and the reading task indicated that older adults were less likely to report mind-wandering than younger adults. Discussion focuses on cognitive and motivational mechanisms that may account for older adults’ relatively low levels of reported mind-wandering. PMID:21707183
Hsu, Nina S.; Novick, Jared M.
2016-01-01
Speech unfolds swiftly, yet listeners keep pace by rapidly assigning meaning to what they hear. Sometimes, though, initial interpretations turn out to be wrong. How do listeners revise misinterpretations of language input moment-by-moment, to avoid comprehension errors? Cognitive control may play a role by detecting when processing has gone awry, and then initiating behavioral adjustments accordingly. However, no research has investigated a cause-and-effect interplay between cognitive control engagement and overriding erroneous interpretations in real time. Using a novel cross-task paradigm, we show that Stroop-conflict detection, which mobilizes cognitive control procedures, subsequently facilitates listeners’ incremental processing of temporarily ambiguous spoken instructions that induce brief misinterpretation. When instructions followed Stroop-incongruent versus Stroop-congruent items, listeners’ eye-movements to objects in a scene reflected more transient consideration of the false interpretation and earlier recovery of the correct one. Comprehension errors also decreased. Cognitive control engagement therefore accelerates sentence re-interpretation processes, even as linguistic input is still unfolding. PMID:26957521
Solving Upwind-Biased Discretizations: Defect-Correction Iterations
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
1999-01-01
This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to reduce the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
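(A concrete, minimal illustration of the defect-correction idea, here for a 1D model convection problem rather than the paper's 2D half-space analysis: the second-order upwind-biased target operator is approximately inverted by repeatedly solving with a cheaper first-order upwind driver, and the algebraic error stalls at the discretization level after a few iterations. Grid size and right-hand side are arbitrary.)

```python
import numpy as np

def upwind_operators(n, h):
    """First-order (driver) and second-order upwind-biased (target) operators
    for du/dx on a 1D grid with inflow value u(0) = 0."""
    L1 = np.zeros((n, n)); L2 = np.zeros((n, n))
    for i in range(n):
        L1[i, i] = 1.0 / h
        if i >= 1:
            L1[i, i - 1] = -1.0 / h
        if i >= 2:
            L2[i, i] = 1.5 / h; L2[i, i - 1] = -2.0 / h; L2[i, i - 2] = 0.5 / h
        else:
            L2[i] = L1[i]                 # fall back to first order near the inflow
    return L1, L2

n, h = 64, 1.0 / 64
x = np.arange(1, n + 1) * h               # unknowns at i = 1..n
f = np.cos(x)                              # du/dx = cos(x), exact solution u = sin(x)
L1, L2 = upwind_operators(n, h)

u = np.zeros(n)
for k in range(6):
    # Defect-correction step: L1 * (u_new - u) = f - L2 * u
    u = u + np.linalg.solve(L1, f - L2 @ u)
    print(f"iter {k + 1}: max error vs sin(x) = {np.max(np.abs(u - np.sin(x))):.2e}")
```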
Wack, Katy; Drogowski, Laura; Treloar, Murray; Evans, Andrew; Ho, Jonhan; Parwani, Anil; Montalto, Michael C
2016-01-01
Text-based reporting and manual arbitration for whole slide imaging (WSI) validation studies are labor intensive and do not allow for consistent, scalable, and repeatable data collection or analysis. The objective of this study was to establish a method of data capture and analysis using standardized codified checklists and predetermined synoptic discordance tables and to use these methods in a pilot multisite validation study. Fifteen case report form checklists were generated from the College of American Pathology cancer protocols. Prior to data collection, all hypothetical pairwise comparisons were generated, and a level of harm was determined for each possible discordance. Four sites with four pathologists each generated 264 independent reads of 33 cases. Pre-established discordance tables were applied to determine site-by-site and pooled accuracy, intra-reader/intra-modality, and inter-reader/intra-modality error rates. Over 10,000 hypothetical pairwise comparisons were evaluated and assigned harm in discordance tables. The average difference in error rates between WSI and glass, as compared to ground truth, was 0.75% with a lower bound of 3.23% (95% confidence interval). Major discordances occurred on challenging cases, regardless of modality. The average inter-reader agreement across sites for glass was 76.5% (weighted kappa of 0.68) and for digital it was 79.1% (weighted kappa of 0.72). These results demonstrate the feasibility and utility of employing standardized synoptic checklists and predetermined discordance tables to gather consistent, comprehensive diagnostic data for WSI validation studies. This method of data capture and analysis can be applied in large-scale multisite WSI validations.
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were due to lack of supervision or of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
Oral cancer screening: knowledge is not enough.
Tax, C L; Haslam, S Kim; Brillant, Mgs; Doucette, H J; Cameron, J E; Wade, S E
2017-08-01
The purpose of this cross-sectional study was to investigate whether dental hygienists are transferring their knowledge of oral cancer screening into practice. This study also sought to gain insight into the barriers that might prevent dental hygienists from performing these screenings. A 27-item survey instrument was constructed to study the oral cancer screening practices of licensed dental hygienists in Nova Scotia. A total of 623 practicing dental hygienists received the survey. The response rate was 34% (n = 212), yielding a maximum margin of error of 5.47 at a 95% confidence level. Descriptive statistics were calculated using IBM SPSS Statistics v21 software (Armonk, NY: IBM Corp). Qualitative thematic analysis was performed on any open-ended responses. This study revealed that while dental hygienists perceived themselves as being knowledgeable about oral cancer screening, they were not transferring this knowledge to actual practice. Only a small percentage (13%) of respondents were performing a comprehensive extra-oral examination, and 7% were performing a comprehensive intra-oral examination. The respondents identified several barriers that prevented them from completing a comprehensive oral cancer screening. Early detection of oral cancer reduces mortality rates, so there is a professional responsibility to ensure that comprehensive oral cancer screenings are being performed on patients. Dental hygienists may not have the authority in a dental practice to overcome all of the barriers that are preventing them from performing these screenings. Public awareness about oral cancer screenings could increase the demand for screenings and thereby play a role in changing practice norms. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Refractive errors in a Brazilian population: age and sex distribution.
Ferraz, Fabio H; Corrente, José E; Opromolla, Paula; Padovani, Carlos Roberto; Schellini, Silvana A
2015-01-01
To determine the prevalence of refractive errors and their distribution according to age and sex in a Brazilian population. This population-based cross-sectional study involved 7654 Brazilian inhabitants of nine municipalities of Sao Paulo State, Brazil, between March 2004 and July 2005. Participants aged >1 year were selected using a random, stratified, household cluster sampling technique, excluding individuals with previous refractive or cataract surgery. Myopia was defined as spherical equivalent (SE) ≤-0.5D, high myopia as SE ≤-3.0D, hyperopia as SE ≥+0.5D, high hyperopia as SE ≥+3D, astigmatism as ≤-0.5DC and anisometropia as ≥1.0D difference between eyes. Age, sex, complaints and a comprehensive eye examination including cycloplegic refraction test were collected and analysed using descriptive analysis, univariate and multivariate methods. The prevalence of astigmatism was 59.7%, hyperopia 33.8% and myopia was 25.3%. Astigmatism had a progressive increase with age. With-the-rule (WTR) axes of astigmatism were more frequently observed in the young participants and the against-the-rule (ATR) axes were more frequent in the older subjects. The onset of myopia occurred more frequently between the 2nd and 3rd decades of life. Anisometropia showed a prevalence of 13.2% (95% CI 12.4-13.9; p < 0.001). There was an association between age and all types of refractive error and hyperopia was also associated with sex. Hyperopia was associated with WTR axes (odds ratio 0.73; 95% CI: 0.6-0.8; p < 0.001) and myopia with ATR axes (odds ratio 0.66; 95% CI: 0.6-0.8; p < 0.001). Astigmatism was the most prevalent refractive error in a Brazilian population. There was a strong relationship between age and all refractive errors and between hyperopia and sex. WTR astigmatism was more frequently associated with hyperopia and ATR astigmatism with myopia. The vast majority of participants had low-grade refractive error, which favours planning aimed at correction of refractive error in the population. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk
Yu, Bei; Willis, Matt; Sun, Peiyuan; Wang, Jun
2013-01-01
Background Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the “turkers”. Objective To answer two research questions: (1) Is the turkers’ collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers’ demographic characteristics affect their performance in medical pictogram comprehension? Methods We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers’ guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers’ interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers’ demographic characteristics and their pictogram comprehension performance. Results The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response–based open-ended testing with local people. The turkers’ misinterpretations shared common errors that exposed design problems in the pictograms. Participant performance was positively correlated with their educational level. Conclusions The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers’ misunderstandings overlap with those elicited from low-literate people. PMID:23732572
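(The comprehensibility metric described above, percentage correct with each partially correct guess counted as 0.5, reduces to a one-line calculation; the coded guesses below are invented for illustration.)

```python
def comprehensibility(coded_guesses):
    """Percent correct, counting each 'partial' guess as half correct."""
    weights = {"correct": 1.0, "partial": 0.5, "wrong": 0.0, "completely_wrong": 0.0}
    return 100.0 * sum(weights[g] for g in coded_guesses) / len(coded_guesses)

# Hypothetical codes for one pictogram from 100 turkers.
guesses = ["correct"] * 60 + ["partial"] * 20 + ["wrong"] * 15 + ["completely_wrong"] * 5
print(f"{comprehensibility(guesses):.1f}%")   # 70.0%
```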
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students
NASA Astrophysics Data System (ADS)
Priyani, H. A.; Ekawati, R.
2018-01-01
Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors students make. Hence, this study aimed to identify students’ errors in solving mathematical problems from TIMSS on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, selected from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the Applying-level problem, students made operational errors. For the Reasoning-level problem, three types of errors were made: conceptual errors, operational errors, and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.
Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade
NASA Astrophysics Data System (ADS)
Luczak, Marcin; Dziedziech, Kajetan; Vivolo, Marianna; Desmet, Wim; Peeters, Bart; Van der Auweraer, Herman
2010-05-01
The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters…) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.
Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T
2007-03-01
Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.
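(A minimal sketch of the kind of automated check such a QC module performs, validating user-entered values against plausible ranges and writing a text summary; the field names, ranges, and output file are hypothetical, not those of the actual renal software.)

```python
# Hypothetical plausible ranges for user-entered study descriptors.
RANGES = {"height_cm": (50, 220), "weight_kg": (2, 250), "injected_dose_mci": (0.5, 15)}

def qc_check(entries, out_path="qc_summary.txt"):
    """Flag out-of-range user-entered values and write a QC summary file."""
    findings = []
    for field, (lo, hi) in RANGES.items():
        value = entries.get(field)
        if value is None:
            findings.append(f"{field}: missing value")
        elif not lo <= value <= hi:
            findings.append(f"{field}: {value} outside plausible range [{lo}, {hi}]")
    with open(out_path, "w") as f:
        f.write("\n".join(findings) if findings else "No QC events detected.")
    return findings

print(qc_check({"height_cm": 172, "weight_kg": 700, "injected_dose_mci": 10}))
```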
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Dosis, Aristotelis; Bello, Fernando; Moorthy, Krishna; Munz, Yaron; Gillies, Duncan; Darzi, Ara
2004-01-01
Surgical dexterity in operating theatres has traditionally been assessed subjectively. Electromagnetic (EM) motion tracking systems such as the Imperial College Surgical Assessment Device (ICSAD) have been shown to produce valid and accurate objective measures of surgical skill. To allow for video integration we have modified the data acquisition and built it within the ROVIMAS analysis software. We then used ActiveX 9.0 DirectShow video capturing and the system clock as a time stamp for the synchronized concurrent acquisition of kinematic data and video frames. Interactive video/motion data browsing was implemented to allow the user to concentrate on frames exhibiting certain kinematic properties that could result in operative errors. We exploited video-data synchronization to calculate the camera visual hull by identifying all 3D vertices using the ICSAD electromagnetic sensors. We also concentrated on high velocity peaks as a means of identifying potential erroneous movements to be confirmed by studying the corresponding video frames. The outcome of the study clearly shows that the kinematic data are precisely synchronized with the video frames and that the velocity peaks correspond to large and sudden excursions of the instrument tip. We validated the camera visual hull by both video and geometrical kinematic analysis and we observed that graphs containing fewer sudden velocity peaks are less likely to have erroneous movements. This work presented further developments to the well-established ICSAD dexterity analysis system. Synchronized real-time motion and video acquisition provides a comprehensive assessment solution by combining quantitative motion analysis tools and qualitative targeted video scoring.
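(Conceptually, the synchronization pairs each kinematic sample with the video frame whose system-clock timestamp is nearest, and the error screening thresholds instrument-tip velocity to pick frames worth reviewing. The sketch below assumes illustrative sampling rates and a simple mean-plus-two-standard-deviations threshold, neither of which is taken from ICSAD/ROVIMAS.)

```python
import numpy as np

def nearest_frame(sample_times, frame_times):
    """Index of the video frame whose timestamp is closest to each kinematic sample."""
    idx = np.searchsorted(frame_times, sample_times)
    idx = np.clip(idx, 1, len(frame_times) - 1)
    left_closer = (sample_times - frame_times[idx - 1]) < (frame_times[idx] - sample_times)
    return np.where(left_closer, idx - 1, idx)

# Illustrative streams: 50 Hz tracker positions, 25 fps video, shared system clock.
t_kin = np.arange(0, 10, 0.02)
t_vid = np.arange(0, 10, 0.04)
pos = np.cumsum(np.random.default_rng(2).normal(0, 1e-3, (t_kin.size, 3)), axis=0)

# Instrument-tip speed between successive samples; flag high-velocity peaks.
speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) / np.diff(t_kin)
peak_samples = np.where(speed > speed.mean() + 2 * speed.std())[0]
frames_to_review = nearest_frame(t_kin[peak_samples + 1], t_vid)   # candidate error frames
print(len(peak_samples), frames_to_review[:5])
```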
Sheehan, David V; Giddens, Jennifer M; Sheehan, Kathy Harnett
2014-09-01
Standard international classification criteria require that classification categories be comprehensive to avoid type II error. Categories should be mutually exclusive and definitions should be clear and unambiguous (to avoid type I and type II errors). In addition, the classification system should be robust enough to last over time and provide comparability between data collections. This article was designed to evaluate the extent to which the classification system contained in the United States Food and Drug Administration 2012 Draft Guidance for the prospective assessment and classification of suicidal ideation and behavior in clinical trials meets these criteria. A critical review is used to assess the extent to which the proposed categories contained in the Food and Drug Administration 2012 Draft Guidance are comprehensive, unambiguous, and robust. Assumptions that underlie the classification system are also explored. The Food and Drug Administration classification system contained in the 2012 Draft Guidance does not capture the full range of suicidal ideation and behavior (type II error). Definitions, moreover, are frequently ambiguous (susceptible to multiple interpretations), and the potential for misclassification (type I and type II errors) is compounded by frequent mismatches in category titles and definitions. These issues have the potential to compromise data comparability within clinical trial sites, across sites, and over time. These problems need to be remedied because of the potential for flawed data output and consequent threats to public health, to research on the safety of medications, and to the search for effective medication treatments for suicidality.
Problems in evaluating radiation dose via terrestrial and aquatic pathways.
Vaughan, B E; Soldat, J K; Schreckhise, R G; Watson, E C; McKenzie, D H
1981-01-01
This review is concerned with exposure risk and the environmental pathways models used for predictive assessment of radiation dose. Exposure factors, the adequacy of available data, and the model subcomponents are critically reviewed from the standpoint of absolute error propagation. Although the models are inherently capable of better absolute accuracy, a calculated dose is, in practice, usually overestimated by two to six orders of magnitude. The principal reason for so large an error lies in using "generic" concentration ratios in situations where site-specific data are needed. Majority opinion of the model makers suggests a number midway between these extremes, with only a small likelihood of ever underestimating the radiation dose. Detailed evaluations are made of source considerations influencing dose (i.e., physical and chemical status of released material); dispersal mechanisms (atmospheric, hydrologic and biotic vector transport); mobilization and uptake mechanisms (i.e., chemical and other factors affecting the biological availability of radioelements); and critical pathways. Examples are shown of confounding in food-chain pathways due to uncritical application of concentration ratios. Current proposals to replace the critical pathways approach to calculating dose with comprehensive model calculations are also shown to be ill-advised, given present limitations in the comprehensive database. The pathways models may also require improved parametrization, as they are not at present structured adequately to lend themselves to validation. The extremely wide errors associated with predicting exposure stand in striking contrast to the error range associated with the extrapolation of animal effects data to the human being. PMID:7037381
Second Language Acquisition Research and Second Language Teaching.
ERIC Educational Resources Information Center
Corder, S. Pit
1985-01-01
Discusses second language acquisition, the importance of comprehensible input to this acquisition, and the inadequacy of the theory of language interference as an explanation for errors in second language speech. The role of the teacher in the language classroom and the "procedural syllabus" are described. (SED)
Review of Research on Sight Word Instruction.
ERIC Educational Resources Information Center
Browder, Diane M.; Lalli, Joseph S.
1991-01-01
This review of 20 years of literature on sight word instruction for individuals with handicaps examines effectiveness data for procedures teaching word recognition and comprehension. Covered are "errorless procedures," prompt elimination, stimulus fading, time delay, easy to hard discrimination, and trial and error with feedback. Two tables…
Transfer of uncertainty of space-borne high resolution rainfall products at ungauged regions
NASA Astrophysics Data System (ADS)
Tang, Ling
Hydrologically relevant characteristics of high resolution (~0.25 degree, 3 hourly) satellite rainfall uncertainty were derived as a function of season and location using a six-year (2002-2007) archive of National Aeronautics and Space Administration (NASA)'s Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) precipitation data. The Next Generation Radar (NEXRAD) Stage IV rainfall data over the continental United States was used as ground validation (GV) data. A geostatistical mapping scheme was developed and tested for transfer (i.e., spatial interpolation) of uncertainty information from GV regions to the vast non-GV regions by leveraging the error characterization work carried out in the earlier step. The open question explored here was, "If 'error' is defined on the basis of independent ground validation (GV) data, how are error metrics estimated for a satellite rainfall data product without the need for extensive GV data?" After a quantitative analysis of the spatial and temporal structure of the satellite rainfall uncertainty, a proof-of-concept geostatistical mapping scheme (based on the kriging method) was evaluated. The idea was to understand how realistic the idea of 'transfer' is for the GPM era. It was found that it was indeed technically possible to transfer error metrics from a gauged to an ungauged location for certain error metrics and that a regionalized error metric scheme for GPM may be possible. The uncertainty transfer scheme based on a commonly used kriging method (ordinary kriging) was then assessed further at various timescales (climatologic, seasonal, monthly and weekly), and as a function of the density of GV coverage. The results indicated that when uncertainty metrics were transferred at timescales finer than seasonal (ranging from 3-6 hourly to weekly-monthly), the effectiveness of the transfer worsened significantly. Next, a comprehensive assessment of different kriging methods for spatial transfer (interpolation) of error metrics was performed. Three kriging methods for spatial interpolation were compared: ordinary kriging (OK), indicator kriging (IK) and disjunctive kriging (DK). An additional comparison with the simple inverse distance weighting (IDW) method was also performed to quantify the added benefit (if any) of using geostatistical methods. The overall performance ranking of the kriging methods was found to be as follows: OK=DK > IDW > IK. Lastly, various metrics of satellite rainfall uncertainty were identified for two large continental landmasses that share many similar Köppen climate zones, the United States and Australia. The dependence of uncertainty as a function of gauge density was then investigated. The investigation revealed that the first- and second-order moments of error are most amenable to a Köppen-type climate classification in different continental landmasses.
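The study above compares ordinary, indicator, and disjunctive kriging against simple inverse distance weighting (IDW) for transferring error metrics from gauged to ungauged locations. The sketch below illustrates only the IDW baseline (a full kriging implementation would additionally require a fitted variogram); the coordinates and bias values are invented.

```python
import numpy as np

def idw_transfer(gauged_xy, gauged_metric, target_xy, power=2.0):
    """Estimate an uncertainty metric at an ungauged location as an
    inverse-distance-weighted average of metrics at gauged locations."""
    d = np.linalg.norm(gauged_xy - target_xy, axis=1)
    if np.any(d == 0):                       # target coincides with a gauged cell
        return gauged_metric[d == 0][0]
    w = 1.0 / d**power
    return np.sum(w * gauged_metric) / np.sum(w)

# Hypothetical example: bias of 3-hourly satellite rain (mm) at four GV grid cells
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
bias = np.array([0.8, 1.2, 0.9, 1.5])
print(idw_transfer(xy, bias, np.array([0.4, 0.6])))
```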
Evaluation of a Web-based Error Reporting Surveillance System in a Large Iranian Hospital.
Askarian, Mehrdad; Ghoreishi, Mahboobeh; Akbari Haghighinejad, Hourvash; Palenik, Charles John; Ghodsi, Maryam
2017-08-01
Proper reporting of medical errors helps healthcare providers learn from adverse incidents and improve patient safety. A well-designed and functioning confidential reporting system is an essential component of this process. There are many error reporting methods; however, web-based systems are often preferred because they can provide comprehensive and more easily analyzed information. This study addresses the use of a web-based error reporting system. This interventional study involved the application of an in-house designed "voluntary web-based medical error reporting system." The system has been used since July 2014 in Nemazee Hospital, Shiraz University of Medical Sciences. The rate and severity of errors reported during the year prior and a year after system launch were compared. The slope of the error report trend line was steep during the first 12 months (B = 105.727, P = 0.00). However, it slowed following launch of the web-based reporting system and was no longer statistically significant (B = 15.27, P = 0.81) by the end of the second year. Most recorded errors were no-harm laboratory types and were due to inattention. Usually, they were reported by nurses and other permanent employees. Most reported errors occurred during morning shifts. Using a standardized web-based error reporting system can be beneficial. This study reports on the performance of an in-house designed reporting system, which appeared to properly detect and analyze medical errors. The system also generated follow-up reports in a timely and accurate manner. Detection of near-miss errors could play a significant role in identifying areas of system defects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noel, Camille E.; Gutti, VeeraRajesh; Bosch, Walter
Purpose: To quantify the potential impact of the Integrating the Healthcare Enterprise–Radiation Oncology Quality Assurance with Plan Veto (QAPV) on patient safety of external beam radiation therapy (RT) operations. Methods and Materials: An institutional database of events (errors and near-misses) was used to evaluate the ability of QAPV to prevent clinically observed events. We analyzed reported events that were related to Digital Imaging and Communications in Medicine RT plan parameter inconsistencies between the intended treatment (on the treatment planning system) and the delivered treatment (on the treatment machine). Critical Digital Imaging and Communications in Medicine RT plan parameters were identified. Each event was scored for importance using the Failure Mode and Effects Analysis methodology. Potential error occurrence (frequency) was derived according to the collected event data, along with the potential event severity, and the probability of detection with and without the theoretical implementation of the QAPV plan comparison check. Failure Mode and Effects Analysis Risk Priority Numbers (RPNs) with and without QAPV were compared to quantify the potential benefit of clinical implementation of QAPV. Results: The implementation of QAPV could reduce the RPN values for 15 of 22 (71%) of evaluated parameters, with an overall average reduction in RPN of 68 (range, 0-216). For the 6 high-risk parameters (RPN > 200), the average reduction in RPN value was 163 (range, 108-216). The RPN value reduction for the intermediate-risk (200 > RPN > 100) parameters ranged from 0 to 140. With QAPV, the largest RPN value for "Beam Meterset" was reduced from 324 to 108. The maximum reduction in RPN value was for Beam Meterset (216, 66.7%), whereas the maximum percentage reduction was for Cumulative Meterset Weight (80, 88.9%). Conclusion: This analysis quantifies the value of the Integrating the Healthcare Enterprise–Radiation Oncology QAPV implementation in clinical workflow. We demonstrate that although QAPV does not provide a comprehensive solution for error prevention in RT, it can have a significant impact on a subset of the most severe clinically observed events.
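The Failure Mode and Effects Analysis scoring used above multiplies severity, occurrence, and detectability to obtain a Risk Priority Number, and a plan-veto check chiefly improves detectability. The sketch below shows only that arithmetic; the scores are invented for illustration, not the study's values.

```python
# Illustrative FMEA arithmetic only; the scores below are invented, not the
# study's data. RPN = severity x occurrence x detectability (each scored 1-10),
# and a plan-veto check mainly improves detectability (lower score).
def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

params = {
    # name: (severity, occurrence, detectability_without_QAPV, detectability_with_QAPV)
    "Beam Meterset":              (9, 4, 9, 3),
    "Cumulative Meterset Weight": (5, 3, 6, 2),
}
for name, (s, o, d0, d1) in params.items():
    before, after = rpn(s, o, d0), rpn(s, o, d1)
    print(f"{name}: RPN {before} -> {after} (reduction {before - after})")
```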
Study on a novel panel support concept for radio telescopes with active surface
NASA Astrophysics Data System (ADS)
Yang, Dehua; Zhou, Guohua; Okoh, Daniel; Li, Guoping; Cheng, Jingquan
2010-07-01
Generally, panels of radio telescopes are trapezoidal, and each is supported/positioned by four adjusters beneath its vertexes. Such a panel support configuration is essentially hyper-static, and the panel is over-constrained from a kinematic point of view. When the panel is adjusted and/or actuated, it suffers stress from its adjusters and its shape is distorted. This situation is undesirable for high-precision panels, such as glass-based panels used for sub-millimeter and shorter-wavelength telescopes with active optics/active panel technology. This paper begins with a general overview of panel patterns and panel supports of existing radio telescopes. We then propose a preferable master-slave active surface concept for triangular and/or hexagonal panel patterns. In addition, we carry out a panel error sensitivity analysis for all 6 degrees of freedom (DOF) of a panel to identify which DOFs are most sensitive for an active surface. Based on this error sensitivity analysis, we suggest an innovative parallel-series hexapod well suited for an active panel to correct all 6 of its rigid-body errors. A demonstration active surface using the master-slave concept and the hexapod showed a great saving in cost: only 486 precision actuators are needed for 438 panels, which is 37% of the actuators needed by classic segmented-mirror active optics. Further, we put forward a swaying-arm based design concept for the connecting joints between panels, which ensures that all panels attached to it are free from over-constraints when they are positioned and/or actuated. The principle and performance of the swaying-arm connecting mechanism are elaborated before a practical cable-mesh based prototype active surface is presented with comprehensive finite element analysis and simulation.
Zhang, Yin-Ping; Zhao, Xin-Shuang; Zhang, Bei; Zhang, Lu-Lu; Ni, Chun-Ping; Hao, Nan; Shi, Chang-Bei; Porr, Caroline
2015-07-01
The comprehensive needs assessment tool for cancer caregivers (CNAT-C) is a systematic and comprehensive needs assessment tool for the family caregivers. The purpose of this project was twofold: (1) to adapt the CNAT-C to Mainland China's cultural context and (2) to evaluate the psychometric properties of the newly adapted Chinese CNAT-C. Cross-cultural adaptation of the original CNAT-C was performed according to published guidelines. A pilot study was conducted in Mainland China with 30 Chinese family cancer caregivers. A subsequent validation study was conducted with 205 Chinese cancer caregivers from Mainland China. Construct validity was determined through exploratory and confirmatory factor analyses. Reliability was determined using internal consistency and test-retest reliability. The split-half coefficient for the overall Chinese CNAT-C scale was 0.77. Principal component analysis resulted in an eight-factor structure explaining 68.11 % of the total variance. The comparative fit index (CFI) was 0.91 from the modified model confirmatory factor analysis. The Chi-square divided by degrees of freedom was 1.98, and the root mean squared error of approximation (RMSEA) was 0.079. In relation to the known-group validation, significant differences were found in the Chinese CNAT-C scale according to various caregiver characteristics. Internal consistency was high for the Chinese CNAT-C reaching a Cronbach α value of 0.94. Test-retest reliability was 0.85. The newly adapted Chinese CNAT-C scale possesses adequate validity, test-retest reliability, and internal consistency and therefore may be used to ascertain holistic health and support needs of cancer patients' family caregivers in Mainland China.
Men, Hong; Fu, Songlin; Yang, Jialin; Cheng, Meiqi; Shi, Yan
2018-01-01
Paraffin odor intensity is an important quality indicator when a paraffin inspection is performed. Currently, paraffin odor level assessment is mainly dependent on an artificial sensory evaluation. In this paper, we developed a paraffin odor analysis system to classify and grade four kinds of paraffin samples. The original feature set was optimized using Principal Component Analysis (PCA) and Partial Least Squares (PLS). Support Vector Machine (SVM), Random Forest (RF), and Extreme Learning Machine (ELM) were applied to three different feature data sets for classification and level assessment of paraffin. For classification, the model based on SVM, with an accuracy rate of 100%, was superior to that based on RF, with an accuracy rate of 98.33–100%, and ELM, with an accuracy rate of 98.01–100%. For level assessment, the R2 related to the training set was above 0.97 and the R2 related to the test set was above 0.87. Through comprehensive comparison, the generalization of the model based on ELM was superior to those based on SVM and RF. The scoring errors for the three models were 0.0016–0.3494, lower than the error of 0.5–1.0 measured by industry standard experts, meaning these methods have a higher prediction accuracy for scoring paraffin level. PMID:29346328
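A minimal sketch of the feature-reduction-plus-classifier workflow described above, assuming scikit-learn is available: PCA-reduced features feed an SVM and a random forest under cross-validation. The sensor matrix here is synthetic random data (so the accuracies are meaningless), and the ELM model is omitted because it has no standard scikit-learn implementation; the PLS variant is likewise not shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for e-nose sensor features of four paraffin classes;
# the real study used measured sensor responses.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))      # 120 samples x 16 sensor features
y = rng.integers(0, 4, size=120)    # 4 paraffin odor classes

for name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=5), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```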
A mathematical model for the transfer of soil solutes to runoff under water scouring.
Yang, Ting; Wang, Quanjiu; Wu, Laosheng; Zhang, Pengyu; Zhao, Guangxu; Liu, Yanli
2016-11-01
The transfer of nutrients from soil to runoff often causes unexpected pollution in water bodies. In this study, a mathematical model relating the detachment of soil particles by water flow and the degree of mixing between overland flow and soil nutrients was proposed. The model assumes that the mixing depth is an integral of average water flow depth, and it was evaluated by experiments with three water inflow rates applied to bare soil surfaces and to surfaces with eight treatments of different stone coverages. The model-predicted outflow rates were compared with the experimentally observed data to test the accuracy of the infiltration parameters obtained by curve fitting the models to the data. Further analysis showed that the comprehensive mixing coefficient (ke) was linearly correlated with Reynolds' number Re (R² > 0.9), and this relationship was verified by comparing the simulated potassium concentration and cumulative mass with observed data, respectively. Bias error analysis (Nash-Sutcliffe coefficient of efficiency (NS), relative error (RE) and the coefficient of determination (R²)) showed that the data predicted by the proposed model were in good agreement with the measured data. Thus the model can be used to guide soil-water and fertilization management to minimize nutrient runoff from cropland. Copyright © 2016 Elsevier B.V. All rights reserved.
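The goodness-of-fit measures named above (Nash-Sutcliffe efficiency, relative error, and R²) are straightforward to compute; the sketch below evaluates them on invented observed versus simulated potassium concentrations, not the study's measurements.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def relative_error(obs, sim):
    return (np.asarray(sim, float) - np.asarray(obs, float)) / np.asarray(obs, float)

def r_squared(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Hypothetical observed vs. simulated potassium concentrations in runoff (mg/L)
obs = np.array([12.1, 10.4, 8.9, 7.5, 6.8])
sim = np.array([11.6, 10.9, 8.5, 7.8, 6.4])
print(nash_sutcliffe(obs, sim), relative_error(obs, sim).mean(), r_squared(obs, sim))
```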
Outcome Assessments and Cost Avoidance of an Oral Chemotherapy Management Clinic.
Wong, Siu-Fun; Bounthavong, Mark; Nguyen, Cham P; Chen, Timothy
2016-03-01
Increasing use of oral chemotherapy drugs increases the challenges for drug and patient management. An oral chemotherapy management clinic was developed to provide patients with oral chemotherapy management, concurrent medication (CM) education, and symptom management services. This evaluation aims to measure the need and effectiveness of this practice model due to scarce published data. This is a case series report of all patients referred to the oral chemotherapy management clinic. Data collected included patient demographics, depression scores, CMs, and types of intervention, including detection and management outcomes collected at baseline, 3-day, 7-day, and 3-month follow-ups. Persistence rate was monitored. Secondary analysis assessed potential cost avoidance. A total of 86 evaluated patients (32 men and 54 women, mean age of 63.4 years) did not show a high risk for medication nonadherence. The 3 most common cancer diagnoses were rectal, pancreatic, and breast, with capecitabine most prescribed. Patients had an average of 13.7 CMs. A total of 125 interventions (detection and management of adverse drug event detection, compliance, drug interactions, medication error, and symptom management) occurred in 201 visits, with more than 75% of interventions occurring within the first 14 days. A persistence rate was observed in 78% of 41 evaluable patients. The total estimated annual cost avoidance per 1.0 full time employee (FTE) was $125,761.93. This evaluation demonstrated the need for additional support for patients receiving oral chemotherapy within standard of care medical service. A comprehensive oral chemotherapy management referral service can optimize patient care delivery via early interventions for adverse drug events, drug interactions, and medication errors up to 3 months after initiation of treatment. Copyright © 2016 by the National Comprehensive Cancer Network.
List, Susan M; Starks, Nykole; Baum, John; Greene, Carmine; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Cuddihy, Robert
2011-09-01
This study evaluated performance and product labeling of CONTOUR® USB, a new blood glucose monitoring system (BGMS) with integrated diabetes management software and a universal serial bus (USB) port, in the hands of untrained lay users and health care professionals (HCPs). Subjects and HCPs tested subject's finger stick capillary blood in parallel using CONTOUR USB meters; deep finger stick blood was tested on a Yellow Springs Instruments (YSI) glucose analyzer for reference. Duplicate results by both subjects and HCPs were obtained to assess system precision. System accuracy was assessed according to International Organization for Standardization (ISO) 15197:2003 guidelines [within ±15 mg/dl of mean YSI results (samples <75 mg/dl) and ±20% (samples ≥75 mg/dl)]. Clinical accuracy was determined by Parkes error grid analysis. Subject labeling comprehension was assessed by HCP ratings of subject proficiency. Key system features and ease-of-use were evaluated by subject questionnaires. All subjects who completed the study (N = 74) successfully performed blood glucose measurements, connected the meter to a laptop computer, and used key features of the system. The system was accurate; 98.6% (146/148) of subject results and 96.6% (143/148) of HCP results exceeded ISO 15197:2003 criteria. All subject and HCP results were clinically accurate (97.3%; zone A) or associated with benign errors (2.7%; zone B). The majority of subjects rated features of the BGMS as "very good" or "excellent." CONTOUR USB exceeded ISO 15197:2003 system performance criteria in the hands of untrained lay users. Subjects understood the product labeling, found the system easy to use, and successfully performed blood glucose testing. © 2011 Diabetes Technology Society.
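The ISO 15197:2003 point-accuracy rule quoted above (within ±15 mg/dl of the reference when the reference is below 75 mg/dl, within ±20% at or above 75 mg/dl) can be expressed as a one-line check; the meter/reference pairs below are invented, not study data.

```python
def meets_iso_15197_2003(meter_mgdl, reference_mgdl):
    """ISO 15197:2003 point accuracy: within +/-15 mg/dl when reference < 75 mg/dl,
    within +/-20% of the reference when reference >= 75 mg/dl."""
    if reference_mgdl < 75:
        return abs(meter_mgdl - reference_mgdl) <= 15
    return abs(meter_mgdl - reference_mgdl) <= 0.20 * reference_mgdl

pairs = [(68, 60), (102, 95), (250, 190)]   # (meter, YSI reference), invented values
passing = sum(meets_iso_15197_2003(m, r) for m, r in pairs)
print(f"{passing}/{len(pairs)} readings within ISO 15197:2003 limits")
```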
de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio
2011-08-05
The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary and the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in perfume agreed with the value reported by the manufacturer. The results indicate that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models of GC×GC-FID data. Copyright © 2011 Elsevier B.V. All rights reserved.
Thompson, David A; Marsteller, Jill A; Pronovost, Peter J; Gurses, Ayse; Lubomski, Lisa H; Goeschel, Christine A; Gosbee, John W; Wahr, Joyce; Martinez, Elizabeth A
2015-09-01
The objectives were to develop a scientifically sound and feasible peer-to-peer assessment model that allows health-care organizations to evaluate patient safety in cardiovascular operating rooms and to establish safety priorities for improvement. The Locating Errors Through Networked Surveillance study was conducted to identify hazards in cardiac surgical care. A multidisciplinary team, composed of organizational sociology, organizational psychology, applied social psychology, clinical medicine, human factors engineering, and health services researchers, conducted the study. We used a transdisciplinary approach, which integrated the theories, concepts, and methods from each discipline, to develop comprehensive research methods. Multiple data collection methods were involved: a focused literature review of cardiac surgery-related adverse events, retrospective analysis of cardiovascular events from a national database in the United Kingdom, and prospective peer assessment at 5 sites, involving survey assessments, structured interviews, direct observations, and contextual inquiries. A nominal group methodology, in which a single group works to solve problems and make decisions, was used to review the data and develop a list of the top priority hazards. The top 6 priority hazard themes were as follows: safety culture, teamwork and communication, infection prevention, transitions of care, failure to adhere to practices or policies, and operating room layout and equipment. We integrated the theories and methods of a diverse group of researchers to identify a broad range of hazards and good clinical practices within the cardiovascular surgical operating room. Our findings were the basis for a plan to prioritize improvements in cardiac surgical care. These study methods allowed for the comprehensive assessment of a high-risk clinical setting and may translate to other clinical settings.
ERIC Educational Resources Information Center
Spencer, Mercedes; Wagner, Richard K.
2018-01-01
The purpose of this meta-analysis was to examine the comprehension problems of children who have a specific reading comprehension deficit (SCD), which is characterized by poor reading comprehension despite adequate decoding. The meta-analysis included 86 studies of children with SCD who were assessed in reading comprehension and oral language…
Integrated modeling analysis of a novel hexapod and its application in active surface
NASA Astrophysics Data System (ADS)
Yang, Dehua; Zago, Lorenzo; Li, Hui; Lambert, Gregory; Zhou, Guohua; Li, Guoping
2011-09-01
This paper presents the concept and integrated modeling analysis of a novel mechanism, a 3-CPS/RPPS hexapod, for supporting segmented reflectors of radio telescopes and eventually segmented mirrors of optical telescopes. The concept comprises a novel type of hexapod with an original organization of actuators, and hence of degrees of freedom, based on a swaying-arm design. With specially designed connecting joints between panels/segments, an iso-static master-slave active surface concept can then be achieved for any triangular and/or hexagonal panel/segment pattern. The integrated modeling covers all the multifold sizing and performance aspects that must be evaluated concurrently in order to optimize and validate the design and the configuration. In particular, comprehensive investigations of kinematic behavior, dynamic analysis, wave-front error, and sensitivity analysis are carried out using frequently employed tools such as MATLAB/SimMechanics, CALFEM and ANSYS. Notably, we introduce the finite element method as a competent approach for analyzing the multi-degree-of-freedom mechanism. Experimental verifications already performed to validate individual aspects of the integrated concept are also presented along with the results obtained.
Assessment of meteorological uncertainties as they apply to the ASCENDS mission
NASA Astrophysics Data System (ADS)
Snell, H. E.; Zaccheo, S.; Chase, A.; Eluszkiewicz, J.; Ott, L. E.; Pawson, S.
2011-12-01
Many environment-oriented remote sensing and modeling applications require precise knowledge of the atmospheric state (temperature, pressure, water vapor, surface pressure, etc.) on a fine spatial grid with a comprehensive understanding of the associated errors. Coincident atmospheric state measurements may be obtained via co-located remote sensing instruments or by extracting these data from ancillary models. The appropriate technique for a given application depends upon the required accuracy. State-of-the-art mesoscale/regional numerical weather prediction (NWP) models operate on spatial scales of a few kilometers resolution, and global-scale NWP models operate on scales of tens of kilometers. Remote sensing measurements may be made on a spatial scale comparable to the measurement of interest. These measurements normally require a separate sensor, which increases the overall size, weight, power and complexity of the satellite payload. Thus, a comprehensive understanding of the errors associated with each of these approaches is a critical part of the design/characterization of a remote-sensing system whose measurement accuracy depends on knowledge of the atmospheric state. One of the requirements of the overall ASCENDS (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) mission development is to develop a consistent set of atmospheric state variables (vertical temperature and water vapor profiles, and surface pressure) for use in helping to constrain the overall retrieval error budget. If the error budget requires tighter uncertainties on ancillary atmospheric parameters than can be provided by NWP models and analyses, additional sensors may be required to reduce the overall measurement error and meet mission requirements. To this end we have used NWP models and reanalysis information to generate a set of atmospheric profiles which contain reasonable variability. These data consist of a "truth" set and a companion "measured" set of profiles. The truth set contains climatologically relevant profiles of pressure, temperature and humidity with an accompanying surface pressure. The measured set consists of some number of instances of the truth set which have been perturbed to represent realistic measurement uncertainty for the truth profile using measurement error covariance matrices. The primary focus has been to develop matrices derived using information about the profile retrieval accuracy as documented for on-orbit sensor systems including AIRS, AMSU, ATMS, and CrIS. Surface pressure variability and uncertainty were derived from globally compiled station pressure information. We generated an additional measured set of profiles which represents the overall error within NWP models. These profile sets will allow for comprehensive trade studies for sensor system design, provide a basis for setting measurement requirements for co-located temperature and humidity sounders, determine the utility of NWP data to either replace or supplement collocated measurements, and assess the overall end-to-end system performance of the sensor system. In this presentation we discuss the process by which we created these data sets and show their utility in performing trade studies for sensor system concepts and designs.
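As a rough illustration of the perturbation step described above (drawing "measured" profiles from a "truth" profile using an error covariance matrix), the sketch below adds vertically correlated Gaussian noise to a temperature profile. The exponential correlation model, 1 K per-level uncertainty, and level count are illustrative assumptions, not the documented retrieval accuracies of AIRS, AMSU, ATMS, or CrIS.

```python
import numpy as np

# Sketch only: perturb a "truth" temperature profile with correlated noise
# defined by an error covariance matrix to obtain an ensemble of "measured" profiles.
rng = np.random.default_rng(42)
n_levels = 20
truth = np.linspace(290.0, 210.0, n_levels)            # K, surface to upper troposphere

sigma = 1.0                                            # assumed 1 K per-level uncertainty
levels = np.arange(n_levels)
corr = np.exp(-np.abs(levels[:, None] - levels[None, :]) / 3.0)   # vertical correlation
cov = sigma**2 * corr                                  # measurement error covariance

measured = truth + rng.multivariate_normal(np.zeros(n_levels), cov, size=100)
print(measured.shape, measured.std(axis=0).round(2)[:5])
```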
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
A General Chemistry Demonstration: Student Observations and Explanations.
ERIC Educational Resources Information Center
Silberman, Robert G.
1983-01-01
Out of 70 answers to questions concerning the chemistry involved in an "orange tornado" demonstration, only 10 were partially correct; the others were totally wrong or showed major errors in understanding, comprehension, and/or reasoning. The demonstration and reactions involved, selected incorrect answers, and a substantially correct answer are discussed.…
Correcting English Learner's Suprasegmental Errors
ERIC Educational Resources Information Center
Yurtbasi, Metin
2017-01-01
The main cause of pronunciation problems faced by EFL learners is their lack of a suprasegmental background. Most of those with oral comprehension and expression difficulties are unaware that their difficulty comes from neglect of the concepts of stress, pitch, juncture and linkers. While remedying stress problems, students should be taught…
Analyzing the Reading Skills and Visual Perception Levels of First Grade Students
ERIC Educational Resources Information Center
Çayir, Aybala
2017-01-01
The purpose of this study was to analyze primary school first-grade students' reading levels and to correlate them with their visual perception skills. For this purpose, students' reading speed, reading comprehension and reading errors were determined using the Informal Reading Inventory. Students' visual perception levels were also analyzed using…
Listening versus Reading in Monitoring Comprehension.
ERIC Educational Resources Information Center
Yussen, Steven R.; And Others
Noting the differences in processing information by reading and by listening, two studies examined subjects' ability to detect errors in written and oral prose. In both experiments, college students were presented with four expository passages drawn from different written sources. All passages were approximately 300 words and 5 paragraphs long,…
Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena
2012-01-01
Background. Continuous and efficient error management, including procedures from error detection to resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion; however, there were no harmful consequences for the patients. All errors were, therefore, evaluated as "near miss" and "no harm" events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regard to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regard to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
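As a toy illustration of the forecast/analysis cycle described above, the sketch below iterates the Kalman filter covariance equations for a small, stable, time-independent linear system until the analysis error covariance is (approximately) steady, then inspects its eigenvalues and leading eigenvector. The 3-variable dynamics and noise levels are invented; the paper's advection and baroclinic wave models are not reproduced here.

```python
import numpy as np

def steady_state_analysis_cov(M, H, Q, R, n_iter=500):
    """Iterate the forecast/analysis (Kalman) cycle for a linear, time-independent
    system until the analysis error covariance P_a approaches steady state."""
    n = M.shape[0]
    P_a = np.eye(n)
    for _ in range(n_iter):
        P_f = M @ P_a @ M.T + Q                    # forecast error covariance
        S = H @ P_f @ H.T + R                      # innovation covariance
        K = P_f @ H.T @ np.linalg.inv(S)           # Kalman gain
        P_a = (np.eye(n) - K @ H) @ P_f            # analysis error covariance
    return P_a

# Toy example: damped, advection-like 3-variable dynamics; only the first variable observed
M = 0.9 * np.array([[0.8, 0.3, 0.0], [0.0, 0.8, 0.3], [0.0, 0.0, 0.8]])
H = np.array([[1.0, 0.0, 0.0]])
Q = 0.01 * np.eye(3)
R = np.array([[0.05]])

P_a = steady_state_analysis_cov(M, H, Q, R)
w, V = np.linalg.eigh(P_a)                         # ascending eigenvalues
print("analysis error variances (descending):", np.round(w[::-1], 4))
print("leading eigenvector:", np.round(V[:, -1], 3))
```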
Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.
Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B
2017-01-01
In a medical environment such as an intensive care unit, there are many possible causes of errors, and one important cause is the effect of human intellectual tasks. When designing an interactive healthcare system such as medical Cyber-Physical-Human Systems (CPHSystems), it is important to consider whether or not the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic intellectual tasks of humans, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole set of medical CPHSystem design models. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by the human intellectual tasks.
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
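INTLAB itself is a MATLAB toolbox; as a language-agnostic illustration of the same idea, the following minimal Python sketch carries [lower, upper] bounds through a formula so that the width of the resulting interval bounds the propagated error. The parallel-resistor example and tolerances are invented, and the enclosure is conservative because the same interval appears more than once in the expression.

```python
# Minimal interval-arithmetic sketch (not INTLAB): propagate bounds through a formula.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "divisor interval must exclude zero"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)
    def __repr__(self):
        return f"[{self.lo:.6g}, {self.hi:.6g}]"

# Example: propagate +/-1% uncertainty in R1 and R2 through the parallel-resistor formula
R1, R2 = Interval(99.0, 101.0), Interval(198.0, 202.0)
print((R1 * R2) / (R1 + R2))     # guaranteed enclosure of R1*R2/(R1+R2)
```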
Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego
2017-12-01
Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed (a) to report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) to provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence in some of the studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability, and thus should not be used in future studies.
Ying, Gui-shuang; Maguire, Maureen G.; Kulp, Marjean Taylor; Ciner, Elise; Moore, Bruce; Pistilli, Maxwell; Candy, Rowan
2017-01-01
PURPOSE: To evaluate the agreement of cycloplegic refractive error measures between the Grand Seiko and Retinomax autorefractors in 4- and 5-year-old children. METHODS: Cycloplegic refractive error of children was measured using the Grand Seiko and Retinomax during a comprehensive eye examination. Accommodative error was measured using the Grand Seiko. The differences in sphere, cylinder, spherical equivalent (SE) and intereye vector dioptric distance (VDD) between autorefractors were assessed using the Bland-Altman plot and 95% limits of agreement (95% LoA). RESULTS: A total of 702 examinations were included. Compared to the Retinomax, the Grand Seiko provided statistically significantly larger values of sphere (mean difference, 0.34 D; 95% LoA, −0.46 to 1.14 D), SE (mean, 0.25 D; 95% LoA, −0.55 to 1.05 D), VDD (mean, 0.19 D; 95% LoA, −0.67 to 1.05 D), and more cylinder (mean, −0.18 D; 95% LoA, −0.91 to 0.55 D). The Grand Seiko measured ≥0.5 D higher than the Retinomax in 43.1% of eyes for sphere and 29.8% of eyes for SE. In multivariate analysis, eyes with SE of >4 D (based on the average of the two autorefractors) had larger differences in sphere (mean, 0.66 D vs 0.35 D; P < 0.0001) and SE (0.57 D vs 0.26 D; P < 0.0001) than eyes with SE of ≤4 D. CONCLUSIONS: Under cycloplegia, the Grand Seiko provided higher measures of sphere, more cylinder, and higher SE than the Retinomax. Higher refractive error was associated with larger differences in sphere and SE between the Grand Seiko and Retinomax. (J AAPOS 2017;21: 219–223) PMID:28528993
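A Bland-Altman analysis of the kind used above reduces to the mean difference between methods and the 95% limits of agreement (mean ± 1.96 SD of the paired differences). The sketch below shows that arithmetic on a few invented paired sphere readings; it is not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two measurement methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Invented paired sphere measurements (D): Grand Seiko vs. Retinomax
gs  = np.array([1.25, 2.00, 0.75, 3.50, 1.00, 4.25])
rmx = np.array([1.00, 1.75, 0.50, 3.00, 0.75, 4.00])
bias, lo, hi = bland_altman(gs, rmx)
print(f"mean difference {bias:+.2f} D, 95% LoA [{lo:+.2f}, {hi:+.2f}] D")
```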
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
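A minimal sketch of the kind of simulation described above: a two-group comparison with a true effect is analyzed with a standard t-test, with and without multiplicative response measurement error, and the empirical power is compared. The effect size, error level, and run counts are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(me_sd, n_runs=8, effect=1.0, n_sim=2000):
    """Empirical power of a two-sample t-test for a factor with a true effect,
    optionally corrupted by multiplicative response measurement error."""
    rejections = 0
    for _ in range(n_sim):
        lo = rng.normal(10.0, 1.0, n_runs)               # responses at the low level
        hi = rng.normal(10.0 + effect, 1.0, n_runs)      # responses at the high level
        if me_sd > 0:                                    # multiplicative measurement error
            lo *= rng.normal(1.0, me_sd, n_runs)
            hi *= rng.normal(1.0, me_sd, n_runs)
        rejections += stats.ttest_ind(hi, lo).pvalue < 0.05
    return rejections / n_sim

print("power without measurement error:", power(0.0))
print("power with 10% multiplicative error:", power(0.10))
```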
St Elsewhere's or St Everywhere's: improving patient throughput in the private hospital sector.
Laffey, Jennifer A; Wasson, Moran
2007-01-01
Communication errors have been found to be the most common root cause of medical errors by the US-based Agency for Healthcare Research and Quality [1]. Although elective admission to hospital involves a high volume of important healthcare communications where incorrect, missing or illegible information could result in a serious medical error, there is little published research on the impact of improving pre-admission communication flow between admitting doctors and hospitals. The Sydney Adventist Hospital (the San) is a 341-bed private hospital in Sydney's northern suburbs that provides a comprehensive range of health services. A process improvement program began in early 2005 to streamline pre-admission communications. The objectives of this ongoing program are broadly to improve patient safety and to increase operating efficiency. The first major initiative within this program was to implement a standardised method for inpatient booking/referral with over three hundred admitting doctors. Eighteen months on, the hospital has been able to demonstrate a significant shift in the timeliness of patient bookings from specialists' rooms, more comprehensive provision of clinical indicators that can facilitate resource planning in operating theatres and on the wards, and a reduction in the proportion of bookings made in areas other than the hospital bookings department. The program continues with a focus on improving the accuracy of data entry, rationalising patient forms, making more effective use of information received, and automating pre-admission information flows.
NASA Astrophysics Data System (ADS)
Dolman, A. M.; Laepple, T.; Kunz, T.
2017-12-01
Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" bitbucket.org/ecus/sedproxy numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions, and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time-slices. To address this, we derive closed-form expressions for power spectral density of the various error sources. The power spectra describe both the magnitude and autocorrelation structure of the error, allowing timescale dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time-series of the last millennia, Holocene, and the deglaciation. While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal re-constructability, conditional on the climate and sampling conditions.
Tomasino, Barbara; Marin, Dario; Maieron, Marta; D'Agostini, Serena; Fabbro, Franco; Skrap, Miran; Luzzatti, Claudio
2015-12-01
Neuropsychological data about acquired impairments in reading and writing provide a strong basis for the theoretical framework of the dual-route models. The present study explored the functional neuroanatomy of the reading and spelling processing system. We describe the reading and writing performance of patient CF, an Italian native speaker who developed an extremely selective reading and spelling deficit (his spontaneous speech, oral comprehension, repetition and oral picture naming were almost unimpaired) in processing double letters associated with surface dyslexia and dysgraphia, following a tumor in the left temporal lobe. In particular, the majority of CF's errors in spelling were phonologically plausible substitutions, errors concerning letter numerosity of consonants, and syllabic phoneme-to-grapheme conversion (PGC) errors. A similar pattern of impairment also emerged in his reading behavior, with a majority of lexical stress errors (the only possible type of surface reading errors in the Italian language, due to the extreme regularity of print-to-sound correspondence). CF's neuropsychological profile was combined with structural neuroimaging data, fiber tracking, and functional maps and compared to that of healthy control participants. We related CF's deficit to a dissociation between an impaired ventral/lexical route (as evidenced by a fractional anisotropy - FA decrease along the inferior fronto-occipital fasciculus - IFOF) and a relatively preserved dorsal/phonological route (as evidenced by a rather full integrity of the superior longitudinal fasciculus - SLF). In terms of functional processing, the lexical-semantic ventral route network was more activated in controls than in CF, while the network supporting the dorsal route was shared by CF and the control participants. Our results are discussed within the theoretical framework of dual-route models of reading and spelling, emphasize the importance of the IFOF both in lexical reading and spelling, and offer a better comprehension of the neurological and functional substrates involved in written language and, in particular, in surface dyslexia and dysgraphia and in doubling/de-doubling consonant sounds and letters. Copyright © 2015 Elsevier Ltd. All rights reserved.
Phonological learning in semantic dementia.
Jefferies, Elizabeth; Bott, Samantha; Ehsan, Sheeba; Lambon Ralph, Matthew A
2011-04-01
Patients with semantic dementia (SD) have anterior temporal lobe (ATL) atrophy that gives rise to a highly selective deterioration of semantic knowledge. Despite pronounced anomia and poor comprehension of words and pictures, SD patients have well-formed, fluent speech and normal digit span. Given the intimate connection between phonological STM and word learning revealed by both neuropsychological and developmental studies, SD patients might be expected to show good acquisition of new phonological forms, even though their ability to map these onto meanings is impaired. In contradiction of these predictions, a limited amount of previous research has found poor learning of new phonological forms in SD. In a series of experiments, we examined whether SD patient, GE, could learn novel phonological sequences and, if so, under which circumstances. GE showed normal benefits of phonological knowledge in STM (i.e., normal phonotactic frequency and phonological similarity effects) but reduced support from semantic memory (i.e., poor immediate serial recall for semantically degraded words, characterised by frequent item errors). Next, we demonstrated normal learning of serial order information for repeated lists of single-digit number words using the Hebb paradigm: these items were well-understood allowing them to be repeated without frequent item errors. In contrast, patient GE showed little learning of nonsense syllable sequences using the same Hebb paradigm. Detailed analysis revealed that both GE and the controls showed a tendency to learn their own errors as opposed to the target items. Finally, we showed normal learning of phonological sequences for GE when he was prevented from repeating his errors. These findings confirm that the ATL atrophy in SD disrupts phonological processing for semantically degraded words but leaves the phonological architecture intact. Consequently, when item errors are minimised, phonological STM can support the acquisition of new phoneme sequences in patients with SD. Copyright © 2011 Elsevier Ltd. All rights reserved.
Yoshikawa, Munemitsu; Yamashiro, Kenji; Miyake, Masahiro; Oishi, Maho; Akagi-Kurashige, Yumiko; Kumagai, Kyoko; Nakata, Isao; Nakanishi, Hideo; Oishi, Akio; Gotoh, Norimoto; Yamada, Ryo; Matsuda, Fumihiko; Yoshimura, Nagahisa
2014-10-21
We investigated the association between refractive error in a Japanese population and myopia-related genes identified in two recent large-scale genome-wide association studies. Single-nucleotide polymorphisms (SNPs) in 51 genes that were reported by the Consortium for Refractive Error and Myopia and/or the 23andMe database were genotyped in 3712 healthy Japanese volunteers from the Nagahama Study using HumanHap610K Quad, HumanOmni2.5M, and/or HumanExome Arrays. To evaluate the association between refractive error and recently identified myopia-related genes, we used three approaches to perform quantitative trait locus analyses of mean refractive error in both eyes of the participants: per-SNP, gene-based top-SNP, and gene-based all-SNP analyses. Association plots of successfully replicated genes also were investigated. In our per-SNP analysis, eight myopia gene associations were replicated successfully: GJD2, RASGRF1, BICC1, KCNQ5, CD55, CYP26A1, LRRC4C, and B4GALNT2. Seven additional gene associations were replicated in our gene-based analyses: GRIA4, BMP2, QKI, BMP4, SFRP1, SH3GL2, and EHBP1L1. The signal strength of the reported SNPs and their tagging SNPs increased after considering different linkage disequilibrium patterns across ethnicities. Although two previous studies suggested strong associations between PRSS56, LAMA2, TOX, and RDH5 and myopia, we could not replicate these results. Our results confirmed the significance of the myopia-related genes reported previously and suggested that gene-based replication analyses are more effective than per-SNP analyses. Our comparison with two previous studies suggested that BMP3 SNPs cause myopia primarily in Caucasian populations, while they may exhibit protective effects in Asian populations. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses on physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damage in terms of human and economic losses. The model performance is also evaluated with different initial conditions at 12 h intervals starting from cyclogenesis to near landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local parabolic type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed phase processes (Ferrier) predicts track and intensity better when compared against the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of the physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, maximum wind error of 12 m s-1 and track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with later initial conditions, suggesting that the model forecast is more dependable as the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.
Sansavini, Alessandra; Bello, Arianna; Guarini, Annalisa; Savini, Silvia; Alessandroni, Rosina; Faldella, Giacomo; Caselli, Cristina
2015-01-01
Extremely low gestational age (ELGA, GA<28 weeks) preterm children are at high risk for linguistic impairments; however, their lexical comprehension and production as well as lexical categories in their early language acquisition have not been specifically examined via direct tools. Our study examines lexical comprehension and production as well as gestural production in ELGA children by focusing on noun and predicate acquisition. Forty monolingual ELGA children (mean GA of 26.7 weeks) and 40 full-term (FT) children were assessed at two years of corrected chronological age (CCA) using a test of noun and predicate comprehension and production (PiNG) and the Italian MB-CDI. Noun comprehension and production were delayed in ELGA compared with FT children, as documented by the low number of correct responses and the large number of errors, i.e., incorrect responses and no-response items, and by the types of incorrect responses, i.e., fewer semantically related responses, in noun production. Regarding predicate comprehension and production, a higher frequency of no responses was observed for ELGA children, and these children also presented a lower frequency of bimodal spoken-gestural responses in predicate production than FT children. A delayed vocabulary size, as demonstrated by the MB-CDI, was exhibited by one-fourth of the ELGA children, who were also unable to complete the predicate subtest. These findings highlight that noun comprehension and production are delayed in ELGA children at two years of CCA and are the most important indexes for the direct evaluation of their lexical abilities and delay. The types of incorrect responses and bimodal spoken-gestural responses were proven to be useful indexes for evaluating the noun and predicate level of acquisition and to plan early focused interventions. After reading this manuscript, the reader will understand (a) the differences in noun and predicate comprehension and production between ELGA and FT children and the indexes of lexical delays exhibited by ELGA children at 2;0 (CCA); (b) the relevance of evaluating errors (incorrect response and no response), the types of incorrect responses (semantically related and unrelated) and the modality of the responses (unimodal spoken and bimodal spoken-gestural) in noun and predicate production to understand the difficulties experienced by ELGA children in representing and expressing meanings; and (c) the need to plan specific interventions to support spoken and gestural modalities in lexical comprehension and production in ELGA children by focusing on noun and predicate acquisition. Copyright © 2015 Elsevier Inc. All rights reserved.
Horberry, Tim; Teng, Yi-Chun; Ward, James; Patil, Vishal; Clarkson, P John
2014-01-01
Central Venous Catheterisation (CVC) has occasionally been associated with cases of retained guidewires in patients after surgery. In theory, this is a completely avoidable complication; however, as with any human procedure, operator error leading to guidewires being occasionally retained cannot be fully eliminated. The work described here investigated the issue in an attempt to better understand it both from an operator and a systems perspective, and to ultimately recommend appropriate safe design solutions that reduce guidewire retention errors. Nine distinct methods were used: observations of the procedure, a literature review, interviewing CVC end-users, task analysis construction, CVC procedural audits, two human reliability assessments, usability heuristics and a comprehensive solution survey with CVC end-users. The three solutions that operators rated most highly, in terms of both practicality and effectiveness, were: making trainees better aware of the potential guidewire complications and strongly emphasising guidewire removal in CVC training; actively checking that the guidewire is present in the waste tray for disposal; and standardising purchase of central line sets so that differences that may affect the chances of guidewire loss are minimised. Further work to eliminate/engineer out the possibility of guidewires being retained is proposed.
Caraus, Iurie; Alsuwailem, Abdulaziz A; Nadon, Robert; Makarenkov, Vladimir
2015-11-01
Significant efforts have been made recently to improve data throughput and data quality in screening technologies related to drug design. The modern pharmaceutical industry relies heavily on high-throughput screening (HTS) and high-content screening (HCS) technologies, which include small molecule, complementary DNA (cDNA) and RNA interference (RNAi) types of screening. Data generated by these screening technologies are subject to several environmental and procedural systematic biases, which introduce errors into the hit identification process. We first review systematic biases typical of HTS and HCS screens. We highlight that study design issues and the way in which data are generated are crucial for providing unbiased screening results. Considering various data sets, including the publicly available ChemBank data, we assess the rates of systematic bias in experimental HTS by using plate-specific and assay-specific error detection tests. We describe the main data normalization and correction techniques and introduce a general data preprocessing protocol. This protocol can be recommended for academic and industrial researchers involved in the analysis of current or next-generation HTS data. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
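The plate-specific biases discussed above are commonly handled by per-plate normalization before hit calling. As a minimal illustration (not the preprocessing protocol proposed in the review), the sketch below applies a robust z-score to one simulated plate; the plate size, the simulated row effect, and the |z| > 3 hit rule are assumptions for the example.

```python
import numpy as np

def robust_zscore_normalize(plate):
    """Per-plate robust z-score: (x - median) / (1.4826 * MAD).

    plate: 2D array of raw HTS readouts (rows x columns of one plate).
    Removes plate-to-plate offset and scale differences before hit calling.
    """
    med = np.median(plate)
    mad = np.median(np.abs(plate - med))
    return (plate - med) / (1.4826 * mad)

# Toy example: a 16 x 24 plate with an artificial row bias
rng = np.random.default_rng(0)
plate = rng.normal(loc=100.0, scale=5.0, size=(16, 24))
plate[0, :] += 20.0                      # simulated edge/row effect
scores = robust_zscore_normalize(plate)
hits = np.argwhere(np.abs(scores) > 3)   # simple |z| > 3 hit rule
print(len(hits), "candidate hits flagged")
```

Note that whole-plate scoring like this does not remove within-plate row or column effects (the biased row above is flagged wholesale), which is exactly why methods such as B-score or well-correction procedures are used in practice.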
NASA Astrophysics Data System (ADS)
Tamilarasan, Ilavarasan; Saminathan, Brindha; Murugappan, Meenakshi
2016-04-01
The past decade has seen phenomenal growth in the use of orthogonal frequency division multiplexing (OFDM) in both wired and wireless communication domains, and it has also been proposed in the literature as a future-proof technique for the implementation of flexible resource allocation in cognitive optical networks. Fiber impairment assessment and adaptive compensation become critical in such implementations. A comprehensive analytical model for impairments in OFDM-based fiber links is developed. The proposed model includes the combined impact of laser phase fluctuations, fiber dispersion, self phase modulation, cross phase modulation, four-wave mixing, the nonlinear phase noise due to the interaction of amplified spontaneous emission with fiber nonlinearities, and the photodetector noises. The bit error rate expression for the proposed model is derived based on error vector magnitude estimation. The performance analysis of the proposed model is presented and compared for dispersion compensated and uncompensated backbone/backhaul links. The results suggest that OFDM would perform better for uncompensated links than for compensated links, owing to negligible FWM effects, and that flexible compensation is needed. The proposed model can be employed in cognitive optical networks for accurate assessment of fiber-related impairments.
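The connection between error vector magnitude (EVM) and bit error rate referred to above can be illustrated with the generic AWGN relation for Gray-coded square M-QAM, using SNR ≈ 1/EVM_rms². This is a textbook approximation, not the closed-form expression derived in the paper, and the 16-QAM / 10% EVM figures are assumptions for the example.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / np.sqrt(2))

def ber_from_evm(evm_rms, M):
    """Approximate BER of Gray-coded square M-QAM from RMS EVM.

    Uses SNR ~ 1/EVM_rms**2 and the standard AWGN M-QAM BER approximation;
    valid when the accumulated distortions are treated as additive Gaussian noise.
    """
    snr = 1.0 / evm_rms**2
    k = np.log2(M)
    return (2.0 * (1.0 - 1.0 / np.sqrt(M)) / k) * qfunc(np.sqrt(3.0 * snr / (M - 1.0)))

# Example: 16-QAM subcarriers with 10% RMS EVM
print(f"BER ~ {ber_from_evm(0.10, 16):.2e}")
```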
Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods
NASA Astrophysics Data System (ADS)
Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric
2018-03-01
Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure, and temperature, were collected. Research on monitoring data analysis has been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize the relationship between soil property and pipe deformation, this paper presents a super learning based approach combining feature selection algorithms to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e., the recursive feature elimination algorithm, was used to identify the critical predictors contributing to pipe deformation. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. The predictive performance was evaluated by R-squared, root-mean-square error and mean absolute error. Based on the prediction performance evaluation, the superiority of super learning was validated and demonstrated by predicting three types of pipe deformations accurately. In addition, a comprehensive understanding of the working environments of water mains becomes possible.
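A super learner combines heterogeneous base learners through a cross-validated meta-learner. The sketch below is a hedged stand-in using scikit-learn's stacking ensemble together with recursive feature elimination and the same three evaluation metrics (R-squared, RMSE, MAE); the dataset is synthetic and the choice of base learners is an assumption, not the configuration used in the study.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Synthetic stand-in for soil/pipe monitoring features and a strain target
X, y = make_regression(n_samples=500, n_features=12, n_informative=6, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Recursive feature elimination keeps the most informative predictors
selector = RFE(estimator=LinearRegression(), n_features_to_select=6)

# Stacked ("super learner"-style) ensemble of heterogeneous base learners
ensemble = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("svr", SVR(C=10.0)),
                ("ridge", Ridge(alpha=1.0))],
    final_estimator=LinearRegression(), cv=5)

model = make_pipeline(selector, ensemble)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print("R2  :", r2_score(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
print("MAE :", mean_absolute_error(y_te, pred))
```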
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
Analytical functions to predict cosmic-ray neutron spectra in the atmosphere.
Sato, Tatsuhiko; Niita, Koji
2006-09-01
Estimation of cosmic-ray neutron spectra in the atmosphere has been an essential issue in the evaluation of aircrew doses and the soft-error rates of semiconductor devices. We therefore performed Monte Carlo simulations to estimate neutron spectra using the PHITS code with the JENDL High-Energy nuclear data library. Excellent agreement was observed between the calculated and measured spectra over a wide altitude range, even at ground level. Based on a comprehensive analysis of the simulation results, we propose analytical functions that can predict the cosmic-ray neutron spectra for any location in the atmosphere at altitudes below 20 km, considering the influences of local geometries such as ground and aircraft on the spectra. The accuracy of the analytical functions was well verified by various experimental data.
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (i.e., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
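For reference, the implicit GA equation relates cumulative infiltration F(t) to time through F = Kt + S ln(1 + F/S), with S = ψΔθ. The sketch below solves it by Newton iteration and evaluates two of the indicators mentioned above (percent relative error and Nash-Sutcliffe efficiency) for an explicit estimate; the explicit formula and soil parameters used here are crude illustrative assumptions, not one of the nine published approximations.

```python
import numpy as np

def green_ampt_implicit(t, K, psi, dtheta, tol=1e-10, max_iter=100):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
    F = K*t + S*ln(1 + F/S), with S = psi*dtheta, solved by Newton iteration."""
    S = psi * dtheta
    F = np.maximum(K * t, 1e-9)          # initial guess
    for _ in range(max_iter):
        g = F - K * t - S * np.log(1.0 + F / S)
        dg = F / (S + F)                 # dg/dF
        F_new = F - g / dg
        if np.all(np.abs(F_new - F) < tol):
            return F_new
        F = F_new
    return F

def crude_explicit(t, K, psi, dtheta):
    """Crude illustrative explicit estimate (early-time sorptivity term plus K*t);
    NOT one of the nine published approximations compared in the study."""
    S = psi * dtheta
    return K * t + np.sqrt(2.0 * S * K * t)

# Example: silt-loam-like parameters (hypothetical values)
K, psi, dtheta = 0.65, 16.7, 0.35        # cm/h, cm, (-)
t = np.linspace(0.1, 24.0, 50)           # hours
F_exact = green_ampt_implicit(t, K, psi, dtheta)
F_appr = crude_explicit(t, K, psi, dtheta)

pre = 100.0 * (F_appr - F_exact) / F_exact                               # percent relative error
nse = 1.0 - np.sum((F_appr - F_exact) ** 2) / np.sum((F_exact - F_exact.mean()) ** 2)
print(f"max |PRE| = {np.max(np.abs(pre)):.2f} %,  NSE = {nse:.4f}")
```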
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Dan; Ricciuto, Daniel M.; Walker, Anthony P.
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. Here, the result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
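The heteroscedastic, correlated Gaussian error model mentioned above can be written as an AR(1) process on standardized residuals whose standard deviation grows with the predicted flux. The sketch below shows one simplified form of such a log-likelihood, which any MCMC sampler (DREAM, AM, or otherwise) could evaluate; the functional form of sigma, the parameter values, and the synthetic NEE series are assumptions, and the exact likelihood used in the study may differ.

```python
import numpy as np

def log_likelihood(residuals, y_pred, sigma0, sigma1, phi):
    """Simplified heteroscedastic AR(1) Gaussian log-likelihood.

    sigma_t = sigma0 + sigma1 * |y_pred_t|   (error sd grows with flux magnitude)
    a_t     = residual_t / sigma_t           (standardized residuals)
    u_t     = a_t - phi * a_{t-1}            (AR(1)-whitened innovations, unit variance)

    The exact first-observation correction is omitted for brevity.
    """
    sigma = sigma0 + sigma1 * np.abs(y_pred)
    a = residuals / sigma
    u = a[1:] - phi * a[:-1]
    n = len(u)
    return (-0.5 * n * np.log(2.0 * np.pi)
            - np.sum(np.log(sigma[1:]))
            - 0.5 * np.sum(u ** 2))

# Toy usage inside a Metropolis-style accept/reject step (sampler not shown):
rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 0.5, size=365)          # stand-in for daily NEE data
y_pred = np.full_like(y_obs, 2.0)               # stand-in model output
print(log_likelihood(y_obs - y_pred, y_pred, sigma0=0.1, sigma1=0.2, phi=0.6))
```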
Assessment of Wind Datasets for Estimating Offshore Wind Energy along the Central California Coast
NASA Astrophysics Data System (ADS)
Wang, Y. H.; Walter, R. K.; Ruttenberg, B.; White, C.
2017-12-01
Offshore renewable energy along the central California coastline has gained significant interest in recent years. We present a comprehensive analysis of near-surface wind datasets available in this region to facilitate future estimates of wind power generation potential. The analyses are based on local NDBC buoys, satellite-based measurements (QuickSCAT and CCMP V2.0), reanalysis products (NARR and MERRA), and a regional climate model (WRF). There are substantial differences in the diurnal signal during different months among the various products (i.e., satellite-based, reanalysis, and modeled) relative to the local buoys. Moreover, the datasets tended to underestimate wind speed under light wind conditions and overestimate under strong wind conditions. In addition to point-to-point comparisons against local buoys, the spatial variations of bias and error in both the reanalysis products and WRF model data in this region were compared against satellite-based measurements. NARR's bias and root-mean-square error were generally small in the study domain and decreased with distance from coastlines. Although its relatively coarse spatial resolution is likely insufficient to reveal local effects, the small bias and error in near-surface winds, as well as the availability of wind data at the proposed turbine hub heights, suggest that NARR is an ideal candidate for use in offshore wind energy production estimates along the central California coast. The framework utilized here could be applied in other site-specific regions where offshore renewable energy is being considered.
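The point-to-point comparisons referred to above reduce to simple bias and root-mean-square-error statistics between collocated product and buoy winds, as in this small sketch (all wind values are hypothetical):

```python
import numpy as np

def bias_rmse(model_wind, buoy_wind):
    """Point-to-point bias and RMSE of a gridded product against buoy winds."""
    diff = np.asarray(model_wind) - np.asarray(buoy_wind)
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

# Hypothetical collocated 10-m wind speeds (m/s) at one buoy location
buoy = np.array([3.2, 5.1, 7.8, 10.4, 12.0, 6.3])
narr = np.array([3.9, 5.0, 7.1,  9.8, 10.9, 6.8])
b, r = bias_rmse(narr, buoy)
print(f"bias = {b:+.2f} m/s, RMSE = {r:.2f} m/s")
```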
Temporal information processing in short- and long-term memory of patients with schizophrenia.
Landgraf, Steffen; Steingen, Joerg; Eppert, Yvonne; Niedermeyer, Ulrich; van der Meer, Elke; Krueger, Frank
2011-01-01
Cognitive deficits of patients with schizophrenia have been largely recognized as core symptoms of the disorder. One neglected factor that contributes to these deficits is the comprehension of time. In the present study, we assessed temporal information processing and manipulation from short- and long-term memory in 34 patients with chronic schizophrenia and 34 matched healthy controls. On the short-term memory temporal-order reconstruction task, an incidental or intentional learning strategy was deployed. Patients showed worse overall performance than healthy controls. The intentional learning strategy led to dissociable performance improvement in both groups. Whereas healthy controls improved on a performance measure (serial organization), patients improved on an error measure (inappropriate semantic clustering) when using the intentional instead of the incidental learning strategy. On the long-term memory script-generation task, routine and non-routine events of everyday activities (e.g., buying groceries) had to be generated in either chronological or inverted temporal order. Patients were slower than controls at generating events in the chronological routine condition only. They also committed more sequencing and boundary errors in the inverted conditions. The number of irrelevant events was higher in patients in the chronological, non-routine condition. These results suggest that patients with schizophrenia imprecisely access temporal information from short- and long-term memory. In short-term memory, processing of temporal information led to a reduction in errors rather than, as was the case in healthy controls, to an improvement in temporal-order recall. When accessing temporal information from long-term memory, patients were slower and committed more sequencing, boundary, and intrusion errors. Together, these results suggest that time information can be accessed and processed only imprecisely by patients who provide evidence for impaired time comprehension. This could contribute to symptomatic cognitive deficits and strategic inefficiency in schizophrenia.
Daily QA of linear accelerators using only EPID and OBI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Baozhou, E-mail: bsun@radonc.wustl.edu; Goddu, S. Murty; Yaddanapudi, Sridhar
2015-10-15
Purpose: As treatment delivery becomes more complex, there is a pressing need for robust quality assurance (QA) tools to improve efficiency and comprehensiveness while simultaneously maintaining high accuracy and sensitivity. This work aims to present the hardware and software tools developed for comprehensive QA of linear accelerator (LINAC) using only electronic portal imaging devices (EPIDs) and kV flat panel detectors. Methods: A daily QA phantom, which includes two orthogonally positioned phantoms for QA of MV-beams and kV onboard imaging (OBI), is suspended from the gantry accessory holder to test both geometric and dosimetric components of a LINAC and an OBI. The MV component consists of a 0.5 cm water-equivalent plastic sheet incorporating 11 circular steel plugs for transmission measurements through multiple thicknesses and one resolution plug for MV-image quality testing. The kV-phantom consists of a Leeds phantom (TOR-18 FG phantom supplied by Varian) for testing low and high contrast resolutions. In the developed process, the existing LINAC tools were used to automate daily acquisition of MV and kV images and software tools were developed for simultaneous analysis of these images. A method was developed to derive and evaluate traditional QA parameters from these images [output, flatness, symmetry, uniformity, TPR20/10, and positional accuracy of the jaws and multileaf collimators (MLCs)]. The EPID-based daily QA tools were validated by performing measurements on a detuned 6 MV beam to test its effectiveness in detecting errors in output, symmetry, energy, and MLC positions. The developed QA process was clinically commissioned, implemented, and evaluated on a Varian TrueBeam LINAC (Varian Medical System, Palo Alto, CA) over a period of three months. Results: Machine output constancy measured with an EPID (as compared against a calibrated ion-chamber) is shown to be within ±0.5%. Beam symmetry and flatness deviations measured using an EPID and a 2D ion-chamber array agree within ±0.5% and ±1.2% for crossline and inline profiles, respectively. MLC position errors of 0.5 mm can be detected using a picket fence test. The field size and phantom positioning accuracy can be determined within 0.5 mm. The entire daily QA process takes ∼15 min to perform tests for 5 photon beams, MLC tests, and imaging checks. Conclusions: The exclusive use of EPID-based QA tools, including a QA phantom and simultaneous analysis software tools, has been demonstrated as a viable, efficient, and comprehensive process for daily evaluation of LINAC performance.
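Beam flatness and symmetry can be derived from a single EPID profile using the standard point-difference definitions over the central 80% of the field. The sketch below is a simplified illustration only: it assumes a centered, already dose-calibrated 1-D profile spanning the field, whereas the clinical tools described above also locate field edges, apply detector corrections, and analyze 2-D images.

```python
import numpy as np

def flatness_symmetry(profile):
    """Flatness and symmetry of a centered 1-D beam profile (e.g., one row of an
    EPID image across the field), evaluated over the central 80% of points.

    Flatness = 100 * (Dmax - Dmin) / (Dmax + Dmin)
    Symmetry = 100 * max |D(x) - D(-x)| / D(center)   (point-difference method)
    """
    n = len(profile)
    lo, hi = int(0.1 * n), int(0.9 * n)
    central = np.asarray(profile[lo:hi], dtype=float)
    flat = 100.0 * (central.max() - central.min()) / (central.max() + central.min())
    mirrored = central[::-1]
    sym = 100.0 * np.max(np.abs(central - mirrored)) / central[len(central) // 2]
    return flat, sym

# Toy profile: flat beam with a 1% tilt across the field
x = np.linspace(-1, 1, 201)
profile = 100.0 * (1.0 + 0.01 * x)
print(flatness_symmetry(profile))   # ~0.8% flatness, ~1.6% symmetry
```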
Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne
2016-01-05
In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), the generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling for the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
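For context, the classical Fisher combination test referenced above pools the per-phenotype p-values for a SNP under an independence assumption; the study's contribution is precisely to relax that assumption. The sketch below shows only the classical version, with hypothetical p-values; SciPy's scipy.stats.combine_pvalues offers the same computation.

```python
import numpy as np
from scipy import stats

def fisher_combination(p_values):
    """Classical Fisher combination test: assumes the per-phenotype p-values are
    independent. X = -2 * sum(ln p_i) follows a chi-square with 2k df under H0."""
    p = np.asarray(p_values, dtype=float)
    stat = -2.0 * np.sum(np.log(p))
    df = 2 * len(p)
    return stat, stats.chi2.sf(stat, df)

# Per-phenotype association p-values for one SNP (hypothetical)
p_per_trait = [0.04, 0.10, 0.002, 0.30]
stat, p_comb = fisher_combination(p_per_trait)
print(f"X = {stat:.2f}, combined p = {p_comb:.4g}")
```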
Ontology-based specification, identification and analysis of perioperative risks.
Uciteli, Alexandr; Neumann, Juliane; Tahar, Kais; Saleh, Kutaiba; Stucke, Stephan; Faulbrück-Röhr, Sebastian; Kaeding, André; Specht, Martin; Schmidt, Tobias; Neumuth, Thomas; Besting, Andreas; Stegemann, Dominik; Portheine, Frank; Herre, Heinrich
2017-09-06
Medical personnel in hospitals often work under great physical and mental strain. In medical decision-making, errors can never be completely ruled out. Several studies have shown that between 50 and 60% of adverse events could have been avoided through better organization, more attention or more effective security procedures. Critical situations especially arise during interdisciplinary collaboration and the use of complex medical technology, for example during surgical interventions and in perioperative settings (the period of time before, during and after surgical intervention). In this paper, we present an ontology and an ontology-based software system, which can identify risks across medical processes and support the avoidance of errors, in particular in the perioperative setting. We developed a practicable definition of the risk notion, which is easily understandable by the medical staff and is usable for the software tools. Based on this definition, we developed a Risk Identification Ontology (RIO) and used it for the specification and the identification of perioperative risks. An agent system was developed, which gathers risk-relevant data during the whole perioperative treatment process from various sources and provides it for risk identification and analysis in a centralized fashion. The results of such an analysis are provided to the medical personnel in the form of context-sensitive hints and alerts. For the identification of the ontologically specified risks, we developed an ontology-based software module, called Ontology-based Risk Detector (OntoRiDe). About 20 risks relating to cochlear implantation (CI) have already been implemented. Comprehensive testing has indicated the correctness of the data acquisition, risk identification and analysis components, as well as the web-based visualization of results.
Chiu, Ming-Chuan; Hsieh, Min-Chih
2016-05-01
The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
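TOPSIS ranks alternatives by their closeness to an ideal solution. The sketch below implements the crisp (non-fuzzy) variant as a simplified stand-in for the fuzzy TOPSIS used in the study; the criterion weights and the scores of the three error factors are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS ranking (a simplified stand-in for fuzzy TOPSIS).

    matrix : alternatives x criteria scores
    benefit: True for 'larger is better' criteria, False for cost criteria
    Returns the closeness coefficient of each alternative (higher = better).
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = R * w                                    # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical scores of three error factors on four evaluation criteria
scores = [[7, 6, 8, 5],
          [5, 7, 6, 6],
          [8, 5, 7, 7]]
cc = topsis(scores, weights=[0.3, 0.2, 0.3, 0.2], benefit=[True, True, True, True])
print(np.argsort(cc)[::-1] + 1, cc.round(3))     # ranking of the factors
```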
Monitoring and reporting of preanalytical errors in laboratory medicine: the UK situation.
Cornes, Michael P; Atherton, Jennifer; Pourmahram, Ghazaleh; Borthwick, Hazel; Kyle, Betty; West, Jamie; Costelloe, Seán J
2016-03-01
Most errors in the clinical laboratory occur in the preanalytical phase. This study aimed to comprehensively describe the prevalence and nature of preanalytical quality monitoring practices in UK clinical laboratories. A survey was sent on behalf of the Association for Clinical Biochemistry and Laboratory Medicine Preanalytical Working Group (ACB-WG-PA) to all heads of department of clinical laboratories in the UK. The survey captured data on the analytical platform and Laboratory Information Management System in use and on which preanalytical errors were recorded and how they were classified, and it gauged interest in an external quality assurance scheme for preanalytical errors. Of the 157 laboratories asked to participate, responses were received from 104 (66.2%). In 51% of responding laboratories, error rates were recorded per number of specimens rather than per number of requests. Among laboratories that record preanalytical errors, and aside from serum indices for haemolysis, icterus and lipaemia, which were measured in 80% of laboratories, the most common errors recorded were booking-in errors (70.1%) and sample mislabelling (56.9%). Of the laboratories surveyed, 95.9% expressed an interest in guidance on recording preanalytical error and 91.8% expressed interest in an external quality assurance scheme. This survey observes a wide variation in the definition, repertoire and collection methods for preanalytical errors in the UK. Data indicate there is considerable interest in improving preanalytical data collection. The ACB-WG-PA aims to produce guidance and support for laboratories to standardize preanalytical data collection and to help establish and validate an external quality assurance scheme for interlaboratory comparison. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
NASA Astrophysics Data System (ADS)
Vuković, Josip; Kos, Tomislav
2017-10-01
The ionosphere introduces positioning error in Global Navigation Satellite Systems (GNSS). There are several approaches for minimizing the error, with various levels of accuracy and different extents of coverage area. To model the state of the ionosphere in a region containing a low number of reference GNSS stations, a locally adapted NeQuick 2 model can be used. Data ingestion updates the model with the local level of ionization, enabling it to follow the observed changes in ionization levels. The NeQuick 2 model was adapted to local reference Total Electron Content (TEC) data using a single-station approach and evaluated using calibrated TEC data derived from 41 testing GNSS stations distributed around the data ingestion point. Its performance was observed in European middle latitudes in different ionospheric conditions of the period between 2011 and 2015. The modelling accuracy was evaluated in four azimuthal quadrants, with coverage radii calculated for three error thresholds: 12, 6 and 3 TEC Units (TECU). Diurnal radii change was observed for groups of days within periods of low and high solar activity and different seasons of the year. The statistical analysis was conducted on those groups of days, revealing trends in each of the groups, similarities between days within groups and the 95th percentile radii as a practically applicable measure of model performance. In almost all cases the modelling accuracy was better than 12 TECU, the threshold with the largest coverage radius from the data ingestion point. Modelling accuracy better than 6 TECU was achieved within a reduced radius in all observed periods, while accuracy better than 3 TECU was reached only in summer. The calculated radii and interpolated error levels were presented on maps. This was especially useful in analyzing the model performance during the strongest geomagnetic storms of the observed period, each of them having unique development and influence on model accuracy. Although some of the storms severely degraded the model accuracy, during most of the disturbed periods the model could be used, but with lower accuracy than in quiet geomagnetic conditions. The comprehensive analysis of locally adapted NeQuick 2 model performance highlighted the challenges of using single-point data ingestion applied to a large region in middle latitudes and determined the achievable radii for different error thresholds in various ionospheric conditions.
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Rani, Padmaja Kumari; Raman, Rajiv; Rachapalli, Sudhir R; Kulothungan, Vaitheeswaran; Kumaramanickavel, Govindasamy; Sharma, Tarun
2010-06-01
To report the prevalence of refractive errors and the associated risk factors in subjects with type 2 diabetes mellitus from an urban Indian population. Population-based, cross-sectional study. One thousand eighty participants selected from a pool of 1414 subjects with diabetes. A population-based sample of 1414 persons (age >40 years) with diabetes (identified as per the World Health Organization criteria) underwent a comprehensive eye examination, including objective and subjective refractions. One thousand eighty subjects who were phakic in the right eye with best corrected visual acuity of > or =20/40 were included in the analysis for prevalence of refractive errors. Univariate and multivariate analyses were done to identify the independent risk factors associated with the refractive errors. The mean refraction was +0.20 +/- 1.72 diopters and the median was +0.25 diopters. The prevalence of emmetropia (spherical equivalent [SE], -0.50 to +0.50 diopter sphere [DS]) was 39.26%. The prevalence of myopia (SE <-0.50 DS), high myopia (SE <-5.00 DS), hyperopia (SE >+0.50 DS), and astigmatism (SE <-0.50 cyl) was 19.4%, 1.6%, 39.7%, and 47.4%, respectively. Advancing age was an important risk factor for all three refractive errors: for myopia, odds ratio (OR) [95% confidence interval (CI)] 4.06 [1.74-9.50]; for hyperopia, OR [95% CI] 5.85 [2.56-13.39]; and for astigmatism, OR [95% CI] 2.51 [1.34-4.71]. Poor glycemic control was associated with myopia (OR [95% CI] 4.15 [1.44-11.92]) and astigmatism (OR [95% CI] 2.01 [1.04-3.88]). Female gender was associated with hyperopia alone (OR [95% CI] 2.00 [1.42-2.82]). The present population-based study from urban India noted a high prevalence of refractive errors (60%) among diabetic subjects >40 years old; the prevalence of astigmatism (47%) was higher than hyperopia (40%) or myopia (20%). Copyright 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
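Odds ratios and confidence intervals of the kind reported above are typically obtained by exponentiating logistic-regression coefficients. Below is a minimal sketch with statsmodels on synthetic data; the covariates, coefficients, and sample size are assumptions for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical subject-level data: myopia (0/1), age in decades, poor glycemic control (0/1)
rng = np.random.default_rng(0)
n = 1000
age = rng.integers(4, 9, n)                        # age decades (40s ... 80s)
poor_control = rng.integers(0, 2, n)
logit_p = -3.0 + 0.35 * age + 0.8 * poor_control   # assumed true model
myopia = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(pd.DataFrame({"age_decade": age, "poor_control": poor_control}))
fit = sm.Logit(myopia, X).fit(disp=0)

odds_ratios = np.exp(fit.params)                   # exponentiated coefficients = ORs
ci = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```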
NASA Astrophysics Data System (ADS)
Hunziker, Stefan; Gubler, Stefanie; Calle, Juan; Moreno, Isabel; Andrade, Marcos; Velarde, Fernando; Ticona, Laura; Carrasco, Gualberto; Castellón, Yaruska; Oria Rojas, Clara; Brönnimann, Stefan; Croci-Maspoli, Mischa; Konzelmann, Thomas; Rohrer, Mario
2016-04-01
Assessing climatological trends and extreme events requires high-quality data. However, for many regions of the world, observational data of the desired quality is not available. In order to eliminate errors in the data, quality control (QC) should be applied before data analysis. If the data still contains undetected errors and quality problems after QC, a consequence may be misleading and erroneous results. A region which is seriously affected by observational data quality problems is the Central Andes. At the same time, climatological information on ongoing climate change and climate risks is of utmost importance in this area due to its vulnerability to meteorological extreme events and climatic changes. Besides data quality issues, the lack of metadata and the low station network density complicate quality control and assessment, and hence, appropriate application of the data. Errors and data problems may occur at any point of the data generation chain, e.g. due to unsuitable station configuration or siting, poor station maintenance, erroneous instrument reading, or inaccurate data digitalization and post processing. Different measurement conditions in the predominantly conventional station networks in Bolivia and Peru compared to the mostly automated networks e.g. in Europe or Northern America may cause different types of errors. Hence, applying QC methods used on state of the art networks to Bolivian and Peruvian climate observations may not be suitable or sufficient. A comprehensive set of Bolivian and Peruvian maximum and minimum temperature and precipitation in-situ measurements was analyzed to detect and describe common data quality problems. Furthermore, station visits and reviews of the original documents were carried out. Some of the errors could be attributed to a specific source. Such information is of great importance for data users, since it allows them to decide for what applications the data can still be used. In ideal cases, it may even allow the error to be corrected. Strategies on how to deal with data from the Central Andes will be suggested. However, the approach may be applicable to networks from other countries where the conditions of climate observation are comparable.
Vocabulary Learning through Picture-Viewing and Picture-Drawing on Tablets
ERIC Educational Resources Information Center
Ou, Kuo-Liang; Tarng, Wernhuar; Chen, Yi-Ru
2018-01-01
Beginning learners of English frequently use flashcards as a tool for learning vocabulary. However, because picture-readers and picture-drawers differ in how consciously they process the vocabulary, errors may arise in the learners' comprehension of the vocabulary terms on the flashcards. This article develops and evaluates an English…
Probabilistic accounting of uncertainty in forecasts of species distributions under climate change
Seth J. Wenger; Nicholas A. Som; Daniel C. Dauwalter; Daniel J. Isaak; Helen M. Neville; Charles H. Luce; Jason B. Dunham; Michael K. Young; Kurt D. Fausch; Bruce E. Rieman
2013-01-01
Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing...
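One common way to implement such a Monte Carlo treatment of parameter uncertainty and residual error is to draw coefficient vectors from the fitted model's sampling distribution and propagate each draw to a prediction, adding observation-level noise. The sketch below does this for a toy logistic occurrence model; the predictor, sample size, and coefficients are assumptions and are not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

# Fit a simple occurrence model (synthetic data standing in for species records)
rng = np.random.default_rng(42)
temp = rng.normal(10.0, 3.0, 400)                       # e.g., stream temperature
presence = rng.binomial(1, 1 / (1 + np.exp(-(2.0 - 0.3 * temp))))
X = sm.add_constant(temp)
fit = sm.GLM(presence, X, family=sm.families.Binomial()).fit()

# Monte Carlo over parameter uncertainty: draw coefficient vectors from the
# estimated sampling distribution and propagate each draw to a prediction
draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=2000)
x_new = np.array([1.0, 14.0])                           # intercept + future temperature
p_draws = 1 / (1 + np.exp(-draws @ x_new))
occ_draws = rng.binomial(1, p_draws)                    # residual (binomial) error

lo, hi = np.percentile(p_draws, [2.5, 97.5])
print(f"P(occurrence) = {p_draws.mean():.2f}  (95% interval {lo:.2f}-{hi:.2f})")
```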
77 FR 46439 - Medicare Program; Prior Authorization for Power Mobility Device (PMD) Demonstration
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-03
... DOJ. Medicare Fraud Strike Force teams are a key component of HEAT, since their inception and based on... primary focus of investigation for these strike forces. The Comprehensive Error Rate Testing (CERT... various prior authorization scenarios: Scenario 1: When a submitter sends a prior authorization request to...
Using Peer-Mediated Repeated Readings as a Fluency-Building Activity for Urban Learners
ERIC Educational Resources Information Center
Yurick, Amanda L.; Robinson, Porsha D.; Cartledge, Gwendolyn; Lo, Ya-yu; Evans, Trisha L.
2006-01-01
We conducted three experiments examining the effects of peer-mediated repeated readings on students' oral reading fluency and comprehension. Each repeated reading session consisted of students reading in pairs, alternating paragraphs, for 10 minutes. Students used a scripted correction procedure when errors occurred. Students then participated in…
Health and Wages: Panel Data Estimates Considering Selection and Endogeneity
ERIC Educational Resources Information Center
Jackle, Robert; Himmler, Oliver
2010-01-01
This paper complements previous studies on the effects of health on wages by addressing the problems of unobserved heterogeneity, sample selection, and endogeneity in one comprehensive framework. Using data from the German Socio-Economic Panel (GSOEP), we find the health variable to suffer from measurement error and a number of tests provide…
A Method of Reducing Random Drift in the Combined Signal of an Array of Inertial Sensors
2015-09-30
stability of the collective output, Bayard et al, US Patent 6,882,964. The prior art methods rely upon the use of Kalman filtering and averaging...including scale-factor errors, quantization effects, temperature effects, random drift, and additive noise. A comprehensive account of all of these
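The variance-reduction benefit of combining an array of sensors is easy to illustrate: for uncorrelated white noise, averaging N identical outputs shrinks the noise standard deviation by roughly 1/sqrt(N), while correlated drift terms do not average out, which is what the more elaborate combining methods target. A toy numpy check, with all values assumed:

```python
import numpy as np

# Averaging an array of N identical gyros: uncorrelated noise drops ~ 1/sqrt(N).
rng = np.random.default_rng(0)
N, samples = 16, 100_000
noise = rng.normal(0.0, 1.0, size=(N, samples))     # unit-variance sensor noise
combined = noise.mean(axis=0)                        # simple collective output
print(noise.std(), combined.std(), 1 / np.sqrt(N))   # ~1.0, ~0.25, 0.25
```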
ERIC Educational Resources Information Center
Schweig, Jonathan
2013-01-01
Measuring school and classroom environments has become central in a nation-wide effort to develop comprehensive programs that measure teacher quality and teacher effectiveness. Formulating successful programs necessitates accurate and reliable methods for measuring these environmental variables. This paper uses a generalizability theory framework…
The Value of a Focused Approach to Written Corrective Feedback
ERIC Educational Resources Information Center
Bitchener, John; Knoch, Ute
2009-01-01
Investigations into the most effective ways to provide ESL learners with written corrective feedback have often been overly comprehensive in the range of error categories examined. As a result, clear conclusions about the efficacy of such feedback have not been possible. On the other hand, oral corrective feedback studies have produced clear,…
On the Equivalence of Constructed-Response and Multiple-Choice Tests.
ERIC Educational Resources Information Center
Traub, Ross E.; Fisher, Charles W.
Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with identical content but different formats measure the same attribute, except for possible differences in error variance…
Comprehensive database of diameter-based biomass regressions for North American tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2004-01-01
A database was compiled consisting of 2,640 equations from the literature for predicting the biomass of trees and tree components from diameter measurements of species found in North America. Bibliographic information, geographic locations, diameter limits, diameter and biomass units, equation forms, statistical errors, and coefficients are provided for each equation,...
A Reading Instruction Intervention Program for English-Language Learners Who Are Struggling Readers
ERIC Educational Resources Information Center
Tam, Kai Yung; Heward, William L.; Heng, Mary Anne
2006-01-01
We used a multiple baseline across students design to evaluate the effects of an intervention program consisting of vocabulary instruction, error correction, and fluency building on oral reading rate and comprehension of five English-language learners who were struggling readers in a primary school. During the first intervention condition (new…
Agreement Attraction in Comprehension: Representations and Processes
ERIC Educational Resources Information Center
Wagers, Matthew W.; Lau, Ellen F.; Phillips, Colin
2009-01-01
Much work has demonstrated so-called attraction errors in the production of subject-verb agreement (e.g., "The key to the cabinets are on the table", [Bock, J. K., & Miller, C. A. (1991). "Broken agreement." "Cognitive Psychology, 23", 45-93]), in which a verb erroneously agrees with an intervening noun. Six self-paced reading experiments examined…
A Survey of Progress in Coding Theory in the Soviet Union. Final Report.
ERIC Educational Resources Information Center
Kautz, William H.; Levitt, Karl N.
The results of a comprehensive technical survey of all published Soviet literature in coding theory and its applications--over 400 papers and books appearing before March 1967--are described in this report. Noteworthy Soviet contributions are discussed, including codes for the noiseless channel, codes that correct asymmetric errors, decoding for…
Hsu, Nina S; Novick, Jared M
2016-04-01
Speech unfolds swiftly, yet listeners keep pace by rapidly assigning meaning to what they hear. Sometimes, though, initial interpretations turn out to be wrong. How do listeners revise misinterpretations of language input moment by moment to avoid comprehension errors? Cognitive control may play a role by detecting when processing has gone awry and then initiating behavioral adjustments accordingly. However, no research to date has investigated a cause-and-effect interplay between cognitive-control engagement and the overriding of erroneous interpretations in real time. Using a novel cross-task paradigm, we showed that Stroop-conflict detection, which mobilizes cognitive-control procedures, subsequently facilitates listeners' incremental processing of temporarily ambiguous spoken instructions that induce brief misinterpretation. When instructions followed incongruent Stroop items, compared with congruent Stroop items, listeners' eye movements to objects in a scene reflected more transient consideration of the false interpretation and earlier recovery of the correct one. Comprehension errors also decreased. Cognitive-control engagement therefore accelerates sentence-reinterpretation processes, even as linguistic input is still unfolding. © The Author(s) 2016.
Montazerhodjat, Vahid; Chaudhuri, Shomesh E; Sargent, Daniel J; Lo, Andrew W
2017-09-14
Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences.
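The core of the BDA argument can be sketched with a toy expected-harm calculation: pick the significance level that minimizes prior-weighted harm from false positives and false negatives for a fixed design. The numbers below (effect size, sample size, prior probability, and harm weights) are illustrative assumptions, not values from the NCI SEER or Alliance data, and the one-sided z-test power formula is a simplification of the trial designs considered in the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_harm(alpha, n_per_arm, delta, sigma, p_h1, harm_fp, harm_fn):
    """Toy expected-harm calculation for a two-arm trial with a one-sided z-test.

    harm_fp: cost of approving an ineffective therapy (type 1 error)
    harm_fn: cost of rejecting an effective therapy (type 2 error),
             larger for deadly diseases with no existing treatment.
    """
    se = sigma * np.sqrt(2.0 / n_per_arm)
    power = norm.cdf(delta / se - norm.ppf(1.0 - alpha))
    return (1.0 - p_h1) * alpha * harm_fp + p_h1 * (1.0 - power) * harm_fn

alphas = np.linspace(0.001, 0.2, 500)
# Hypothetical inputs: severe disease, so false negatives are weighted heavily
harms = [expected_harm(a, n_per_arm=450, delta=0.2, sigma=1.0,
                       p_h1=0.5, harm_fp=1.0, harm_fn=5.0) for a in alphas]
print(f"harm-minimizing alpha ~ {alphas[int(np.argmin(harms))]:.3f}")
```

For these toy inputs the harm-minimizing alpha lands well above the conventional 2.5%, mirroring the paper's qualitative finding for deadly, low-prevalence cancers.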
Senol Cali, Damla; Kim, Jeremie S; Ghose, Saugata; Alkan, Can; Mutlu, Onur
2018-04-02
Nanopore sequencing technology has the potential to render other sequencing technologies obsolete with its ability to generate long reads and provide portability. However, high error rates of the technology pose a challenge while generating accurate genome assemblies. The tools used for nanopore sequence analysis are of critical importance, as they should overcome the high error rates of the technology. Our goal in this work is to comprehensively analyze current publicly available tools for nanopore sequence analysis to understand their advantages, disadvantages and performance bottlenecks. It is important to understand where the current tools do not perform well to develop better tools. To this end, we (1) analyze the multiple steps and the associated tools in the genome assembly pipeline using nanopore sequence data, and (2) provide guidelines for determining the appropriate tools for each step. Based on our analyses, we make four key observations: (1) the choice of the tool for basecalling plays a critical role in overcoming the high error rates of nanopore sequencing technology. (2) Read-to-read overlap finding tools, GraphMap and Minimap, perform similarly in terms of accuracy. However, Minimap has a lower memory usage, and it is faster than GraphMap. (3) There is a trade-off between accuracy and performance when deciding on the appropriate tool for the assembly step. The fast but less accurate assembler Miniasm can be used for quick initial assembly, and further polishing can be applied on top of it to increase the accuracy, which leads to faster overall assembly. (4) The state-of-the-art polishing tool, Racon, generates high-quality consensus sequences while providing a significant speedup over another polishing tool, Nanopolish. We analyze various combinations of different tools and expose the trade-offs between accuracy, performance, memory usage and scalability. We conclude that our observations can guide researchers and practitioners in making conscious and effective choices for each step of the genome assembly pipeline using nanopore sequence data. Also, with the help of bottlenecks we have found, developers can improve the current tools or build new ones that are both accurate and fast, to overcome the high error rates of the nanopore sequencing technology.
Montazerhodjat, Vahid; Chaudhuri, Shomesh E.; Sargent, Daniel J.
2017-01-01
Importance Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? Objective To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. Data Sources We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. Study Selection The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). Data Extraction and Synthesis The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Main Outcomes and Measures Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. Results We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Conclusions and Relevance Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences. PMID:28418507
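To make the optimization idea concrete, the following Python sketch searches a grid of significance levels and sample sizes for the pair that minimizes a prevalence-weighted expected harm under the null and alternative hypotheses. All priors, harm weights, and effect sizes are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a Bayesian decision analysis (BDA) style search for the
# (alpha, n) pair that minimizes expected harm to current and future patients.
# All numbers below are hypothetical placeholders, not values from the study.
import numpy as np
from scipy.stats import norm

p_H1 = 0.3          # prior probability the therapy is effective
effect = 0.25       # standardized effect size under H1
N_future = 50_000   # future patients affected by the approval decision
harm_fp = 1.0       # per-patient harm of approving an ineffective therapy
harm_fn = 2.0       # per-patient harm of rejecting an effective therapy
harm_trial = 0.5    # per-enrollee harm/cost of running the trial itself

def expected_harm(alpha, n_per_arm):
    z_a = norm.ppf(1 - alpha)                       # one-sided critical value
    power = 1 - norm.cdf(z_a - effect * np.sqrt(n_per_arm / 2))
    harm_H0 = alpha * N_future * harm_fp            # false-positive exposure
    harm_H1 = (1 - power) * N_future * harm_fn      # false-negative denial
    return ((1 - p_H1) * harm_H0 + p_H1 * harm_H1
            + 2 * n_per_arm * harm_trial)           # in-trial burden

alphas = np.linspace(0.001, 0.20, 200)
sizes = np.arange(50, 2001, 50)
grid = [(a, n, expected_harm(a, n)) for a in alphas for n in sizes]
best = min(grid, key=lambda t: t[2])
print(f"BDA-optimal alpha = {best[0]:.3f}, n per arm = {best[1]}")
```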
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
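A minimal sketch of the propagation-of-error calculation described above, assuming hypothetical term variances and covariances, illustrates how a single dominant term (the body mass change) can account for nearly all of the balance error.

```python
# Minimal sketch of the propagation-of-error idea described above: the variance
# of a water balance computed as a sum/difference of measured terms is the sum
# of the term variances plus twice the covariances. Numbers are hypothetical.
import numpy as np

# Balance = intake + metabolic water - urine - evaporative loss - d(body mass)
labels = ["intake", "metabolic", "urine", "evaporation", "body-mass change"]
sigma = np.array([0.05, 0.02, 0.04, 0.06, 0.20])   # hypothetical daily SDs
signs = np.array([+1, +1, -1, -1, -1])              # sign of each term

# Hypothetical small correlation between intake and urine, for illustration.
corr = np.eye(5)
corr[0, 2] = corr[2, 0] = 0.1
cov = np.outer(sigma, sigma) * corr

# Var(balance) = s^T C s, where s holds the +/-1 signs of each term.
var_total = signs @ cov @ signs
contrib = (signs * sigma) ** 2 / var_total
for name, frac in zip(labels, contrib):
    print(f"{name:18s} {100 * frac:5.1f}% of total variance")
print(f"total SD = {np.sqrt(var_total):.3f}")
```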
Field, Frank R; Wallington, Timothy J; Everson, Mark; Kirchain, Randolph E
2017-12-19
A comprehensive component-level assessment of several strategic and minor metals (SaMMs), including copper, manganese, magnesium, nickel, tin, niobium, light rare earth elements (LREEs; lanthanum, cerium, praseodymium, neodymium, promethium, and samarium), cobalt, silver, tungsten, heavy rare earth elements (yttrium, europium, gadolinium, terbium, dysprosium, holmium, erbium, thulium, ytterbium, and lutetium), and gold, use in the 2013 model year Ford Fiesta, Focus, Fusion, and F-150 is presented. Representative material contents in cars and light-duty trucks are estimated using comprehensive, component-level data reported by suppliers. Statistical methods are used to accommodate possible errors within the database and provide estimate bounds. Results indicate that there is a high degree of variability in SaMM use and that SaMMs are concentrated in electrical, drivetrain, and suspension subsystems. Results suggest that trucks contain greater amounts of aluminum, nickel, niobium, and silver and significantly greater amounts of magnesium, manganese, gold, and LREEs. We find tin and tungsten use in automobiles to be 3-5 times higher than reported by previous studies which have focused on automotive electronics. Automotive use of strategic and minor metals is substantial, with 2013 vehicle production in the United States, Canada, EU15, and Japan alone accounting for approximately 20% of global production of Mg and Ta and approximately 5% of Al, Cu, and Sn. The data and analysis provide researchers, recyclers, and decision-makers additional insight into the vehicle content of strategic and minor metals of current interest.
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Al Ghamdi, Ebtisam; Yunus, Faisal; Da'ar, Omar; El-Metwally, Ashraf; Khalifa, Mohamed; Aldossari, Bakheet; Househ, Mowafa
2016-01-01
This research analyzes the impact of mobile phone screen size on user comprehension of health information and application structure. Applying an experimental approach, we asked randomly selected users to read content and conduct tasks on a commonly used diabetes mobile application using three different mobile phone screen sizes. We timed and tracked a number of parameters, including correctness, effectiveness of completing tasks, content ease of reading, clarity of information organization, and comprehension. The effects of screen size on user comprehension/retention, clarity of information organization, and reading time were mixed. It might be assumed at first glance that mobile screen size would affect all qualities of information reading and comprehension, including clarity of displayed information organization, reading time and user comprehension/retention of displayed information, but in this experimental research the screen size did not have a significant impact on user comprehension/retention of the content or on understanding the application structure. However, it did have a significant impact on clarity of information organization and reading time. Participants with the larger screen size took less time to read the content, with a significant difference in the ease of reading. While there was no significant difference in the comprehension of information or the application structures, there was a higher task completion rate and a lower number of errors with the bigger screen size. Screen size does not directly affect user comprehension of health information. However, it does affect clarity of information organization, reading time and the user's ability to recall information.
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
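A small illustrative sketch of the "evaluate risk" step, ranking hypothetical human error modes by likelihood times severity; the error modes and scores are invented for illustration and are not taken from the pump acceptance test.

```python
# Small illustrative sketch of the "evaluate risk" step of HF PFMEA: rank
# candidate human error modes by likelihood x severity. Error modes and
# scores below are hypothetical, not taken from the pump acceptance test.
error_modes = [
    # (human action, potential error, likelihood 1-10, severity 1-10)
    ("open supply valve", "valve opened out of sequence", 4, 9),
    ("record pressure",   "transcription error",          6, 3),
    ("torque fittings",   "under-torqued fitting",        3, 8),
]

ranked = sorted(error_modes, key=lambda m: m[2] * m[3], reverse=True)
for action, error, likelihood, severity in ranked:
    print(f"risk={likelihood * severity:3d}  {action}: {error}")
```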
Anderson, Cheryl I; Nelson, Catherine S; Graham, Corey F; Mosher, Benjamin D; Gohil, Kartik N; Morrison, Chet A; Schneider, Paul D; Kepros, John P
2012-09-01
Performance improvement driven by the review of surgical morbidity and mortality is often limited to critiques of individual cases with a focus on individual errors. Little attention has been given to an analysis of why a decision seemed right at the time or to lower-level root causes. The application of scientific performance improvement has the potential to bring to light deeper levels of understanding of surgical decision-making, care processes, and physician psychology. A comprehensive retrospective chart review of previously discussed morbidity and mortality cases was performed with an attempt to identify areas where we could better understand or influence behavior or systems. We avoided focusing on traditional sources of human error such as lapses of vigilance or memory. An iterative process was used to refine the practical areas for possible intervention. Definitions were then created for the major categories and subcategories. Of a sample of 152 presented cases, the root cause for 96 (63%) patient-related events was identified as uni-factorial in origin, with 51 (34%) cases strictly related to patient disease with no other contributing causes. Fifty-six cases (37%) had multiple causes. The remaining 101 cases (66%) were categorized into two areas where the ability to influence outcomes appeared possible. Technical issues were found in 27 (18%) of these cases and 74 (74%) were related to disorganized care problems. Of the 74 cases identified with disorganized care, 42 (42%) were related to failures in critical thinking, 18 (18%) to undisciplined treatment strategies, 8 (8%) to structural failures, and 6 (6%) were related to failures in situational awareness. On a comprehensive review of cases presented at the morbidity and mortality conference, disorganized care played a large role in the cases presented and may have implications for future curriculum changes. The failure to think critically, to deliver disciplined treatment strategies, to recognize structural failures, and to achieve situational awareness contributed to the morbidities and mortalities. Future research may determine if focused training in these areas improves patient outcomes. Copyright © 2012 Elsevier Inc. All rights reserved.
Stragis, V B; Makarov, I Yu; Karelin, V V; Shevchuk, D Yu; Chechenin, E S
This article reports the results of the forensic medical, criminalistic, and comprehensive expert examination of a subject who suffered a non-perforating shotgun wound affecting the soft tissues and blood vessels in the femoral region. It was shown that only a scrupulous, comprehensive, full-scale expert examination of the injured site within the framework of forensic medical expertise makes it possible to exclude the probability of expert error and to formulate a reliable and substantiated conclusion regarding the fact and the conditions of the shotgun injury inflicted by a specific type of cartridge (e.g., one of caliber 410/76, Stopper-2) with two spherical rubber bullets fired from a known weapon (a Saiga-410S hunting carbine).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaing, Crystal; Vergez, Lisa; Hinckley, Aubree
2011-06-21
The objective of this project is to provide DHS a comprehensive evaluation of the current genomic technologies including genotyping, Taqman PCR, multiple locus variable tandem repeat analysis (MLVA), microarray and high-throughput DNA sequencing in the analysis of biothreat agents from complex environmental samples. As the result of a different DHS project, we have selected for and isolated a large number of ciprofloxacin resistant B. anthracis Sterne isolates. These isolates vary in the concentrations of ciprofloxacin that they can tolerate, suggesting multiple mutations in the samples. In collaboration with University of Houston, Eureka Genomics and Oak Ridge National Laboratory, we analyzed the ciprofloxacin resistant B. anthracis Sterne isolates by microarray hybridization, Illumina and Roche 454 sequencing to understand the error rates and sensitivity of the different methods. The report provides an assessment of the results and a complete set of all protocols used and all data generated along with information to interpret the protocols and data sets.
Test of the Equivalence Principle in an Einstein Elevator
NASA Technical Reports Server (NTRS)
Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.
2005-01-01
This Annual Report illustrates the work carried out during the last grant-year activity on the Test of the Equivalence Principle in an Einstein Elevator. The activity focused on the following main topics: (1) analysis and conceptual design of a detector configuration suitable for the flight tests; (2) development of techniques for extracting a small signal from data strings with colored and white noise; (3) design of the mechanism that spins and releases the instrument package inside the cryostat; and (4) experimental activity carried out by our non-US partners (a summary is shown in this report). The analysis and conceptual design of the flight-detector (point 1) was focused on studying the response of the differential accelerometer during free fall, in the presence of errors and precession dynamics, for various detector's configurations. The goal was to devise a detector configuration in which an Equivalence Principle violation (EPV) signal at the sensitivity threshold level can be successfully measured and resolved out of a much stronger dynamics-related noise and gravity gradient. A detailed analysis and comprehensive simulation effort led us to a detector's design that can accomplish that goal successfully.
Coupled thermal-fluid-mechanics analysis of twin roll casting of A7075 aluminum alloy
NASA Astrophysics Data System (ADS)
Lee, Yun-Soo; Kim, Hyoung-Wook; Cho, Jae-Hyung; Chun, Se-Hwan
2017-09-01
A better understanding of the temperature distribution and roll separation force during twin roll casting of aluminum alloys is critical to successfully fabricating good-quality aluminum strips. Therefore, simulation techniques are widely applied to understand the twin roll casting process comprehensively and to reduce the experimental time and cost of trial and error. However, most conventional approaches consider only thermally coupled flow or thermally coupled mechanical behavior. In this study, a fully coupled thermal-fluid-mechanical analysis of twin roll casting of A7075 aluminum strips was carried out using the finite element method. The temperature profile, liquid fraction and metal flow of aluminum strips with different thicknesses were predicted. Roll separation force and roll temperatures were experimentally obtained from a pilot-scale twin roll caster, and those results were compared with model predictions. Coupling the fluid flow of the liquid melt to the thermal and mechanical modeling reasonably predicted the roll temperature distribution and roll separation force during twin roll casting.
Brief Alcohol Interventions for Adolescents and Young Adults: A Systematic Review and Meta-analysis
Tanner-Smith, Emily E.; Lipsey, Mark W.
2014-01-01
This study reports findings from a meta-analysis summarizing the effectiveness of brief alcohol interventions for adolescents (age 11-18) and young adults (age 19-30). We identified 185 eligible study samples using a comprehensive literature search and synthesized findings using random-effects meta-analyses with robust standard errors. Overall, brief alcohol interventions led to significant reductions in alcohol consumption and alcohol-related problems among adolescents (ḡ = 0.27 and ḡ = 0.19) and young adults (ḡ = 0.17 and ḡ = 0.11). These effects persisted for up to one year after intervention and did not vary across participant demographics, intervention length, or intervention format. However, certain intervention modalities (e.g., motivational interviewing) and components (e.g., decisional balance, goal-setting exercises) were associated with larger effects. We conclude that brief alcohol interventions yield beneficial effects on alcohol-related outcomes for adolescents and young adults that are modest but potentially worthwhile given their brevity and low cost. PMID:25300577
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibniz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose a tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that it is very promising.
Pillay, Sara B.; Humphries, Colin J.; Gross, William L.; Graves, William W.; Book, Diane S.
2016-01-01
Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic ‘regularization’ errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as ‘played’), indicating over-reliance on sublexical grapheme–phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion–symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology. PMID:26966139
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
NASA Astrophysics Data System (ADS)
Lin, J.-T.; Liu, Z.; Zhang, Q.; Liu, H.; Mao, J.; Zhuang, G.
2012-12-01
Errors in chemical transport models (CTMs) interpreting the relation between space-retrieved tropospheric column densities of nitrogen dioxide (NO2) and emissions of nitrogen oxides (NOx) have important consequences on the inverse modeling. They are however difficult to quantify due to lack of adequate in situ measurements, particularly over China and other developing countries. This study proposes an alternate approach for model evaluation over East China, by analyzing the sensitivity of modeled NO2 columns to errors in meteorological and chemical parameters/processes important to the nitrogen abundance. As a demonstration, it evaluates the nested version of GEOS-Chem driven by the GEOS-5 meteorology and the INTEX-B anthropogenic emissions and used with retrievals from the Ozone Monitoring Instrument (OMI) to constrain emissions of NOx. The CTM has been used extensively for such applications. Errors are examined for a comprehensive set of meteorological and chemical parameters using measurements and/or uncertainty analysis based on current knowledge. Results are exploited then for sensitivity simulations perturbing the respective parameters, as the basis of the following post-model linearized and localized first-order modification. It is found that the model meteorology likely contains errors of various magnitudes in cloud optical depth, air temperature, water vapor, boundary layer height and many other parameters. Model errors also exist in gaseous and heterogeneous reactions, aerosol optical properties and emissions of non-nitrogen species affecting the nitrogen chemistry. Modifications accounting for quantified errors in 10 selected parameters increase the NO2 columns in most areas with an average positive impact of 18% in July and 8% in January, the most important factor being modified uptake of the hydroperoxyl radical (HO2) on aerosols. This suggests a possible systematic model bias such that the top-down emissions will be overestimated by the same magnitude if the model is used for emission inversion without corrections. The modifications however cannot eliminate the large model underestimates in cities and other extremely polluted areas (particularly in the north) as compared to satellite retrievals, likely pointing to underestimates of the a priori emission inventory in these places with important implications for understanding of atmospheric chemistry and air quality. Note that these modifications are simplified and should be interpreted with caution for error apportionment.
Addressing the unit of analysis in medical care studies: a systematic review.
Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G
2008-06-01
We assessed how frequently patients are incorrectly used as the unit of analysis among studies of physicians' patient care behavior in articles published in high impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found published in 15 journals; 4 journals published the majority (71 of 114, or 62.3%) of the studies; 40 were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.
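The sketch below illustrates why the unit-of-analysis error matters: when patients cluster within physicians, ignoring the clustering understates standard errors by roughly the square root of the design effect. The intraclass correlation and cluster sizes used here are hypothetical.

```python
# Sketch of why the unit-of-analysis error matters: when patients cluster
# within physicians, the effective sample size is smaller than the patient
# count. Design effect DEFF = 1 + (m - 1) * ICC. Values are hypothetical.
import numpy as np

n_physicians = 30
patients_per_physician = 20
icc = 0.10                      # intraclass correlation within physician
p_hat = 0.40                    # observed proportion of guideline-concordant care

n_patients = n_physicians * patients_per_physician
deff = 1 + (patients_per_physician - 1) * icc
n_effective = n_patients / deff

se_naive = np.sqrt(p_hat * (1 - p_hat) / n_patients)     # ignores clustering
se_adjusted = np.sqrt(p_hat * (1 - p_hat) / n_effective)  # accounts for clustering
print(f"DEFF = {deff:.2f}, effective n = {n_effective:.0f}")
print(f"naive SE = {se_naive:.4f}, cluster-adjusted SE = {se_adjusted:.4f}")
```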
Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.
2015-01-01
Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799
Oldland, Alan R; Golightly, Larry K; May, Sondra K; Barber, Gerard R; Stolpman, Nancy M
2015-01-01
To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training.
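For illustration, the comparison of the last two periods reported above (27/19,708 vs. 13/26,200 technical errors) can be reproduced with a simple chi-square test on the 2x2 table; the abstract does not state which test the authors used, so this is only a plausible reconstruction.

```python
# Quick sketch of the kind of comparison reported above: technical error rates
# during the barcode-scanning period (27/19,708) and after relabeling plus
# training (13/26,200). The exact test used by the authors is not stated, so a
# chi-square test on the 2x2 table is shown purely for illustration.
from scipy.stats import chi2_contingency

table = [[27, 19708 - 27],    # errors vs correct fills, BC scanning period
         [13, 26200 - 13]]    # errors vs correct fills, after relabeling/training
chi2, p, dof, expected = chi2_contingency(table)
print(f"rate1 = {27 / 19708:.4%}, rate2 = {13 / 26200:.4%}, p = {p:.4f}")
```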
Random-effects meta-analysis: the number of studies matters.
Guolo, Annamaria; Varin, Cristiano
2017-06-01
This paper investigates the impact of the number of studies on meta-analysis and meta-regression within the random-effects model framework. It is frequently neglected that inference in random-effects models requires a substantial number of studies included in meta-analysis to guarantee reliable conclusions. Several authors warn about the risk of inaccurate results of the traditional DerSimonian and Laird approach especially in the common case of meta-analysis involving a limited number of studies. This paper presents a selection of likelihood and non-likelihood methods for inference in meta-analysis proposed to overcome the limitations of the DerSimonian and Laird procedure, with a focus on the effect of the number of studies. The applicability and the performance of the methods are investigated in terms of Type I error rates and empirical power to detect effects, according to scenarios of practical interest. Simulation studies and applications to real meta-analyses highlight that it is not possible to identify an approach uniformly superior to alternatives. The overall recommendation is to avoid the DerSimonian and Laird method when the number of meta-analysis studies is modest and prefer a more comprehensive procedure that compares alternative inferential approaches. R code for meta-analysis according to all of the inferential methods examined in the paper is provided.
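As a concrete reference point for the methods discussed, a minimal sketch of the DerSimonian and Laird estimator is shown below on synthetic study estimates; it is not an implementation of the alternative likelihood or higher-order methods the paper recommends comparing.

```python
# Minimal sketch of the DerSimonian and Laird (DL) random-effects estimator
# discussed above, to make the quantities concrete. Effect estimates and
# within-study variances below are synthetic.
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study effect estimates
v = np.array([0.02, 0.03, 0.05, 0.01, 0.04])   # within-study variances

w = 1 / v                                       # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)              # Cochran's Q
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (v + tau2)                         # random-effects weights
mu = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"tau^2 = {tau2:.4f}, pooled effect = {mu:.3f} (SE {se:.3f})")
```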
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
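The sketch below illustrates one of the "transformational" schemes named above, a Box-Cox transformation applied to observed and simulated streamflow so that residual errors become closer to homoscedastic; the data and the fixed transformation parameter are synthetic assumptions.

```python
# Illustrative sketch of one of the "transformational" schemes named above:
# apply a Box-Cox transformation to observed and simulated streamflow so that
# residual errors become closer to homoscedastic. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
q_sim = rng.gamma(shape=2.0, scale=5.0, size=1000)          # simulated flows
q_obs = q_sim * np.exp(rng.normal(0, 0.3, size=1000))       # multiplicative error

def box_cox(q, lam):
    """Box-Cox transform; lam = 0 reduces to log(q)."""
    return np.log(q) if lam == 0 else (q**lam - 1) / lam

lam = 0.2                                                    # fixed, for illustration
resid_raw = q_obs - q_sim
resid_bc = box_cox(q_obs, lam) - box_cox(q_sim, lam)

# Heteroscedasticity check: correlation of |residual| with flow magnitude.
corr_raw = np.corrcoef(np.abs(resid_raw), q_sim)[0, 1]
corr_bc = np.corrcoef(np.abs(resid_bc), q_sim)[0, 1]
print(f"raw |resid| vs flow correlation:     {corr_raw:.2f}")
print(f"Box-Cox |resid| vs flow correlation: {corr_bc:.2f}")
```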
INVOLVEMENT OF MULTIPLE MOLECULAR PATHWAYS IN THE GENETICS OF OCULAR REFRACTION AND MYOPIA.
Wojciechowski, Robert; Cheng, Ching-Yu
2018-01-01
The prevalence of myopia has increased dramatically worldwide within the last three decades. Recent studies have shown that refractive development is influenced by environmental, behavioral, and inherited factors. This review aims to analyze recent progress in the genetics of refractive error and myopia. A comprehensive literature search of PubMed and OMIM was conducted to identify relevant articles in the genetics of refractive error. Genome-wide association and sequencing studies have increased our understanding of the genetics involved in refractive error. These studies have identified interesting candidate genes. All genetic loci discovered to date indicate that refractive development is a heterogeneous process mediated by a number of overlapping biological processes. The exact mechanisms by which these biological networks regulate eye growth are poorly understood. Although several individual genes and/or molecular pathways have been investigated in animal models, a systematic network-based approach in modeling human refractive development is necessary to understand the complex interplay between genes and environment in refractive error. New biomedical technologies and better-designed studies will continue to refine our understanding of the genetics and molecular pathways of refractive error, and may lead to preventative and therapeutic measures to combat the myopia epidemic.
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.
Influence of Tooth Spacing Error on Gears With and Without Profile Modifications
NASA Technical Reports Server (NTRS)
Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.
2000-01-01
A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.
ERIC Educational Resources Information Center
Kalahar, Kory G.
2011-01-01
Student failure is a prominent issue in many comprehensive secondary schools nationwide. Researchers studying error, reliability, and performance in organizations have developed and employed a method known as critical incident technique (CIT) for investigating failure. Adopting an action research model, this study involved gathering and analyzing…
Untrained users derail your caboose? Learn to get on track
NASA Technical Reports Server (NTRS)
Bentley-Smith, M.
2000-01-01
You've implemented Oracle, and lo and behold, your users can't bear it. They're making costly errors, finding creative work-arounds, and being mavericks with your system, because user training wasn't on track. This paper discusses how to build a proactive, comprehensive training program to get your users back on track.
ERIC Educational Resources Information Center
Finesilver, Carla
2017-01-01
The move from additive to multiplicative thinking requires significant change in children's comprehension and manipulation of numerical relationships, involves various conceptual components, and can be a slow, multistage process for some. Unit arrays are a key visuospatial representation for supporting learning, but most research focuses on 2D…
ERIC Educational Resources Information Center
Hannah, C. Lynne; Shore, Bruce M.
1995-01-01
This study compared metacognitive performance of gifted, gifted learning-disabled, learning-disabled, and average males in grades 5 and 6 and grades 11 and 12. For metacognitive knowledge, skill on think-aloud error detection reading, and comprehension, the performance of gifted learning-disabled students resembled that of gifted students more…
ERIC Educational Resources Information Center
Landa, Katrina G.; Barbetta, Patricia M.
2017-01-01
A multiple probe across participants design was used to explore the effects of repeated readings on the reading fluency, errors, and comprehension of 4, third-to-fifth grade English language learners (ELLs) with specific learning disabilities (SLD). Also, generalization measures to untaught passages and maintenance data were collected. In…
ERIC Educational Resources Information Center
Ngafeeson, Madison N.
2013-01-01
The successful implementation of health information systems is expected to increase legibility, reduce medical errors, boost the quality of healthcare and shrink costs. Yet, evidence points to the fact that healthcare professionals resist the full use of these systems. Physicians and nurses have been reported to resist the system. Even though…
ERIC Educational Resources Information Center
Forbes, Bethany E.; Skinner, Christopher H.; Maurer, Kristin; Taylor, Emily; Schall, Megan; Cazzell, Samantha; Ciancio, Dennis; Conley, Matt; Conley, Elisha
2015-01-01
Working with middle-school students, we replicated and extended research on oral reading fluency (ORF) assessments and prompting students to read faster. Altering ORF administration procedures by instructing students to read fast caused statistically significant increases in their words correct per minute (WCPM) and errors, which was moderated by…
ERIC Educational Resources Information Center
Traynelis-Yurek, Elaine; Strong, Mary W.
2000-01-01
Examines the results of instruction in administering the Informal Reading Inventory (IRI) in three teacher training programs. Focuses on the examination of the scoring of the IRI in simulation exercises by preservice teachers after instruction in the administration and scoring of the IRI. Concludes that the preservice teachers did not accurately…
Contrasting Effects of Phonological Priming in Aphasic Word Production
ERIC Educational Resources Information Center
Wilshire, Carolyn E.; Saffran, Eleanor M.
2005-01-01
Two fluent aphasics, IG and GL, performed a phonological priming task in which they repeated an auditory prime then named a target picture. The two patients both had selective deficits in word production: they were at or near ceiling on lexical comprehension tasks, but were significantly impaired in picture naming. IG's naming errors included both…
ERIC Educational Resources Information Center
Heift, Trude; Schulze, Mathias
2012-01-01
This book provides the first comprehensive overview of theoretical issues, historical developments and current trends in ICALL (Intelligent Computer-Assisted Language Learning). It assumes a basic familiarity with Second Language Acquisition (SLA) theory and teaching, CALL and linguistics. It is of interest to upper undergraduate and/or graduate…
tPA Prescription and Administration Errors within a Regional Stroke System
Chung, Lee S; Tkach, Aleksander; Lingenfelter, Erin M; Dehoney, Sarah; Rollo, Jeannie; de Havenon, Adam; DeWitt, Lucy Dana; Grantz, Matthew Ryan; Wang, Haimei; Wold, Jana J; Hannon, Peter M; Weathered, Natalie R; Majersik, Jennifer J
2015-01-01
Background IV tPA utilization in acute ischemic stroke (AIS) requires weight-based dosing and a standardized infusion rate. In our regional network, we have tried to minimize tPA dosing errors. We describe the frequency and types of tPA administration errors made in our comprehensive stroke center (CSC) and at community hospitals (CHs) prior to transfer. Methods Using our stroke quality database, we extracted clinical and pharmacy information on all patients who received IV tPA from 2010–11 at the CSC or CH prior to transfer. All records were analyzed for the presence of inclusion/exclusion criteria deviations or tPA errors in prescription, reconstitution, dispensing, or administration, and analyzed for association with outcomes. Results We identified 131 AIS cases treated with IV tPA: 51% female; mean age 68; 32% treated at CSC, 68% at CH (including 26% by telestroke) from 22 CHs. tPA prescription and administration errors were present in 64% of all patients (41% CSC, 75% CH, p<0.001), the most common being incorrect dosage for body weight (19% CSC, 55% CH, p<0.001). Of the 27 overdoses, there were 3 deaths due to systemic hemorrhage or ICH. Nonetheless, outcomes (parenchymal hematoma, mortality, mRS) did not differ between CSC and CH patients nor between those with and without errors. Conclusion Despite focus on minimization of tPA administration errors in AIS patients, such errors were very common in our regional stroke system. Although an association between tPA errors and stroke outcomes was not demonstrated, quality assurance mechanisms are still necessary to reduce potentially dangerous, avoidable errors. PMID:26698642
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
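A sketch of a Sobol' variance-based sensitivity analysis over forcing-error characteristics, in the spirit of the study, using the SALib Python package on a toy snow response function; the toy model and error bounds are assumptions and stand in for the Utah Energy Balance model runs.

```python
# Sketch of a Sobol' sensitivity analysis over forcing-error characteristics,
# using the SALib package. The toy "snow model" and the error bounds are
# assumptions, not the Utah Energy Balance model or the study's configuration.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["precip_bias", "temp_bias", "sw_random_sd"],
    "bounds": [[-0.3, 0.3],      # fractional precipitation bias
               [-2.0, 2.0],      # air temperature bias (K)
               [0.0, 50.0]],     # shortwave random error SD (W m^-2)
}

def toy_swe(x):
    """Toy peak-SWE response to forcing errors (illustrative only)."""
    p_bias, t_bias, sw_sd = x
    return 500 * (1 + p_bias) - 30 * t_bias - 0.2 * sw_sd

X = saltelli.sample(problem, 1024)           # Saltelli sampling design
Y = np.array([toy_swe(x) for x in X])
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:14s} first-order = {s1:.2f}  total = {st:.2f}")
```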
The Influence of refractoriness upon comprehension of non-verbal auditory stimuli.
Crutch, Sebastian J; Warrington, Elizabeth K
2008-01-01
An investigation of non-verbal auditory comprehension in two patients with global aphasia following stroke is reported. The primary aim of the investigation was to establish whether refractory access disorders can affect non-verbal input modalities. All previous reports of refractoriness, a cognitive syndrome characterized by response inconsistency, sensitivity to temporal factors and insensitivity to item frequency, have involved comprehension tasks which have a verbal component. Two main experiments are described. The first consists of a novel sound-to-picture and sound-to-word matching task in which comprehension of environmental sounds is probed under conditions of semantic relatedness and semantic unrelatedness. In addition to the two stroke patients, the performance of a group of 10 control patients with non-vascular pathology is reported, along with evidence of semantic relatedness effects in sound comprehension. The second experiment examines environmental sound comprehension within a repetitive probing paradigm which affords assessment of the effects of semantic relatedness, response consistency and presentation rate. It is demonstrated that the two stroke patients show a significant increase in error rate across multiple probes of the same set of sound stimuli, indicating the presence of refractoriness within this non-verbal domain. The implications of the results are discussed with reference to our current understanding of the mechanisms of refractoriness.
Developing the Medication Reminder Mobile Application "Seeb".
Saghaeiannejad-Isfahani, Sakineh; Ehteshami, Asghar; Savari, Ebtesam; Samimi, Ali
2017-06-01
Today, the structure of comprehensive health care emphasizes self-care more than therapy. Medication therapy is one of the principal instruments of care delivered through the health system. Errors in medication administration produce a variety of problems and cost billions of dollars every year. Given the widespread use of mobile phones, we developed a local medication reminder mobile application called "Seeb" as a suitable solution for decreasing medication errors among Iranians. We conducted a mixed-methods study in three phases: 1) a comparative study of existing mobile applications; 2) development of the application's object-oriented model; and 3) development of the initial version of "Seeb", which was approved for production. The application was designed to support appropriate medication administration, including timing and dosages, by recording patient and medication data, scheduling patients' medications, and reporting on medication administration progress. "Seeb" has been designed in compliance with the requirements of Iranian health information technologists and pharmacists. It is expected to reduce medication errors and improve patient adherence to medical prescriptions.
Experimental Study on the Axis Line Deflection of Ti6Al4V Titanium Alloy in Gun-Drilling Process
NASA Astrophysics Data System (ADS)
Li, Liang; Xue, Hu; Wu, Peng
2018-01-01
Titanium alloy is widely used in the aerospace industry, but it is also a typical difficult-to-cut material. During deep hole drilling of the shaft parts of a certain large aircraft, there were problems with poor surface roughness, chip control and axis deviation, so experiments on gun-drilling of Ti6Al4V titanium alloy were carried out to measure the axis line deflection, diameter error and surface integrity, and the reasons for these errors were analyzed. Then, optimized process parameters were obtained for gun-drilling of Ti6Al4V titanium alloy with a deep hole diameter of 17 mm. Finally, deep-hole drilling to a depth of 860 mm was completed with a comprehensive error smaller than 0.2 mm and a surface roughness below 1.6 μm.
A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.
Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng
2016-05-01
In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances is proposed. By combining tracking errors with derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which does not require a priori system knowledge nor tuning input weights. Only output weights of the SLFN need to be updated by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, and thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
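A minimal sketch of the extreme-learning-machine idea underlying the ELC framework: hidden-layer weights are drawn at random and only the output weights are solved for. Here the output weights are obtained by a one-shot least-squares fit on synthetic data, whereas the paper updates them online with projection-based adaptive laws.

```python
# Minimal sketch of the extreme-learning-machine idea underlying the ELC
# framework: hidden-layer weights are random and fixed; only the output
# weights are solved for (here by least squares on synthetic data).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))               # inputs (e.g. transformed states)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2        # unknown nonlinearity to learn

n_hidden = 50
W = rng.normal(size=(2, n_hidden))                  # random input weights (fixed)
b = rng.normal(size=n_hidden)                       # random biases (fixed)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))              # sigmoid hidden-layer outputs

beta = np.linalg.pinv(H) @ y                        # output weights via pseudo-inverse
y_hat = H @ beta
print(f"training RMS error: {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")
```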
Liu, Yan; Salvendy, Gavriel
2009-05-01
This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanation contributions of the most important factors in factor analysis and depreciate the significance of discriminant function and discrimination abilities of individual variables in discrimination analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
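The attenuation effect described above can be illustrated with a short simulation: random measurement error in two variables shrinks their observed correlation by approximately the square root of the product of their reliabilities; the reliabilities used below are hypothetical.

```python
# Sketch of the attenuation effect described above: random measurement error
# in two variables shrinks their observed correlation roughly by the factor
# sqrt(reliability_x * reliability_y). Data are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_r = 0.6
x_true = rng.normal(size=n)
y_true = true_r * x_true + np.sqrt(1 - true_r**2) * rng.normal(size=n)

rel_x, rel_y = 0.7, 0.8                 # hypothetical scale reliabilities
x_obs = x_true + rng.normal(scale=np.sqrt(1 / rel_x - 1), size=n)
y_obs = y_true + rng.normal(scale=np.sqrt(1 / rel_y - 1), size=n)

r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
print(f"observed r  = {r_obs:.3f}")
print(f"predicted r = {true_r * np.sqrt(rel_x * rel_y):.3f}")
```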
The Impact of Normalization Methods on RNA-Seq Data Analysis
Zyprych-Walczak, J.; Szabelska, A.; Handschuh, L.; Górczak, K.; Klamecka, K.; Figlerowicz, M.; Siatkowski, I.
2015-01-01
High-throughput sequencing technologies, such as the Illumina Hi-seq, are powerful new tools for investigating a wide range of biological and medical problems. Massive and complex data sets produced by the sequencers create a need for development of statistical and computational methods that can tackle the analysis and management of data. The data normalization is one of the most crucial steps of data processing and this process must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied for the selection of the optimal normalization procedure for any particular data set. The described workflow includes calculation of the bias and variance values for the control genes, sensitivity and specificity of the methods, and classification errors as well as generation of the diagnostic plots. Combining the above information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably. PMID:26176014
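As a concrete example of the class of sequencing-depth normalizations compared in the study, the sketch below computes DESeq-style median-of-ratios size factors on synthetic counts; it is not implied that this is the method the authors recommend for any particular data set.

```python
# Concrete example of one widely used sequencing-depth normalization, the
# DESeq-style median-of-ratios size factor, as a representative of the class
# of methods compared in the study. Counts below are synthetic.
import numpy as np

rng = np.random.default_rng(7)
counts = rng.poisson(lam=rng.gamma(2, 50, size=(1000, 1)), size=(1000, 6))
counts[:, 3:] *= 2                      # give samples 4-6 double sequencing depth

def size_factors(counts):
    """DESeq-style median-of-ratios size factors."""
    keep = np.all(counts > 0, axis=1)                # genes expressed in every sample
    logc = np.log(counts[keep])
    log_ref = logc.mean(axis=1, keepdims=True)       # per-gene log geometric mean
    return np.exp(np.median(logc - log_ref, axis=0))

sf = size_factors(counts)
normalized = counts / sf                             # depth-normalized counts
print("size factors:", np.round(sf, 2))
```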
SimExTargId: A comprehensive package for real-time LC-MS data acquisition and analysis.
Edmands, William M B; Hayes, Josie; Rappaport, Stephen M
2018-05-22
Liquid chromatography mass spectrometry (LC-MS) is the favored method for untargeted metabolomic analysis of small molecules in biofluids. Here we present SimExTargId, an open-source R package for autonomous analysis of metabolomic data and real-time observation of experimental runs. This simultaneous, fully automated and multi-threaded (optional) package is a wrapper for vendor-independent format conversion (ProteoWizard), xcms- and CAMERA- based peak-picking, MetMSLine-based pre-processing and covariate-based statistical analysis. Users are notified of detrimental instrument drift or errors by email. Also included are two shiny applications, targetId for real-time MS2 target identification, and peakMonitor to monitor targeted metabolites. SimExTargId is publicly available under GNU LGPL v3.0 license at https://github.com/JosieLHayes/simExTargId, which includes a vignette with example data. SimExTargId should be installed on a dedicated data-processing workstation or server that is networked to the LC-MS platform to facilitate MS1 profiling of metabolomic data. josie.hayes@berkeley.edu. Supplementary data are available at Bioinformatics online.
NASA Technical Reports Server (NTRS)
Thompson, J. F.; Warsi, Z. U. A.; Mastin, C. W.
1982-01-01
A comprehensive review of methods of numerically generating curvilinear coordinate systems with coordinate lines coincident with all boundary segments is given. Some general mathematical framework and error analysis common to such coordinate systems is also included. The general categories of generating systems are those based on conformal mapping, orthogonal systems, nearly orthogonal systems, systems produced as the solution of elliptic and hyperbolic partial differential equations, and systems generated algebraically by interpolation among the boundaries. Also covered are the control of coordinate line spacing by functions embedded in the partial differential operators of the generating system and by subsequent stretching transformation. Dynamically adaptive coordinate systems, coupled with the physical solution, and time-dependent systems that follow moving boundaries are treated. References reporting experience using such coordinate systems are reviewed as well as those covering the system development.
Impact Assessment of GNSS Spoofing Attacks on INS/GNSS Integrated Navigation System.
Liu, Yang; Li, Sihai; Fu, Qiangwen; Liu, Zhenbo
2018-05-04
In the face of emerging Global Navigation Satellite System (GNSS) spoofing attacks, there is a need to give a comprehensive analysis on how the inertial navigation system (INS)/GNSS integrated navigation system responds to different kinds of spoofing attacks. A better understanding of the integrated navigation system’s behavior with spoofed GNSS measurements gives us valuable clues to develop effective spoofing defenses. This paper focuses on an impact assessment of GNSS spoofing attacks on the integrated navigation system Kalman filter’s error covariance, innovation sequence and inertial sensor bias estimation. A simple and straightforward measurement-level trajectory spoofing simulation framework is presented, serving as the basis for an impact assessment of both unsynchronized and synchronized spoofing attacks. Recommendations are given for spoofing detection and mitigation based on our findings in the impact assessment process.
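One standard building block for the kind of spoofing detection recommended in such assessments is a consistency check on the Kalman filter innovation sequence. The Python sketch below flags GNSS measurements whose normalized innovation squared exceeds a chi-square bound; the numbers and threshold are invented, and this generic monitor is offered as an illustration rather than the specific detector proposed by the authors.

    import numpy as np

    def nis_spoofing_test(innovation, S, threshold):
        """Normalized innovation squared (NIS) test: flag a GNSS measurement whose
        innovation is inconsistent with the filter's predicted innovation covariance S."""
        nu = np.atleast_1d(innovation)
        nis = float(nu @ np.linalg.solve(S, nu))   # nu^T S^-1 nu ~ chi2(dim(nu)) under H0
        return nis, nis > threshold

    # Toy example: 3-D position innovation; chi-square 0.999 quantile for 3 dof ~ 16.27.
    S = np.diag([4.0, 4.0, 9.0])                    # predicted innovation covariance (m^2)
    nu_nominal = np.array([1.0, -2.0, 2.5])
    nu_spoofed = np.array([15.0, -12.0, 20.0])      # position fix pulled off by a spoofer
    print(nis_spoofing_test(nu_nominal, S, 16.27))  # consistent, not flagged
    print(nis_spoofing_test(nu_spoofed, S, 16.27))  # inconsistent, flagged

Synchronized spoofers that drag the solution slowly may keep the NIS small, which is why the paper also examines error covariance and inertial sensor bias estimates.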
Airborne gamma radiation soil moisture measurements over short flight lines
NASA Technical Reports Server (NTRS)
Peck, Eugene L.; Carrol, Thomas R.; Lipinski, Daniel M.
1990-01-01
Results are presented on airborne gamma radiation measurements of soil moisture condition, carried out along short flight lines as part of the First International Satellite Land Surface Climatology Project Field Experiment (FIFE). Data were collected over an area in Kansas during the summers of 1987 and 1989. The airborne surveys, together with ground measurements, provide the most comprehensive set of airborne and ground truth data available in the U.S. for calibrating and evaluating airborne gamma flight lines. Analysis showed that, using standard National Weather Service weights for the K, Tl, and Gc radiation windows, the airborne soil moisture estimates for the FIFE lines had a root mean square error of no greater than 3.0 percent soil moisture. The soil moisture estimates for sections having acquisition time of at least 15 sec were found to be reliable.
Multispectral Remote Sensing of the Earth and Environment Using KHawk Unmanned Aircraft Systems
NASA Astrophysics Data System (ADS)
Gowravaram, Saket
This thesis focuses on the development and testing of the KHawk multispectral remote sensing system for environmental and agricultural applications. The KHawk Unmanned Aircraft System (UAS), a small and low-cost remote sensing platform, is used as the test bed for aerial video acquisition. An efficient image geotagging and photogrammetric procedure for aerial map generation is described, followed by a comprehensive error analysis of the generated maps. The developed procedure is also used for the generation of multispectral aerial maps, including red, near-infrared (NIR) and colored infrared (CIR) maps. A robust Normalized Difference Vegetation Index (NDVI) calibration procedure is proposed and validated through ground tests and a KHawk flight test. Finally, the generated aerial maps and their corresponding Digital Elevation Models (DEMs) are used for typical application scenarios, including prescribed fire monitoring, initial fire line estimation, and tree health monitoring.
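For reference, NDVI computed from the red and near-infrared maps mentioned above follows the standard definition NDVI = (NIR - Red)/(NIR + Red). A minimal Python sketch with toy reflectance values is given below; radiometric calibration of the bands (the subject of the thesis's calibration procedure) is assumed to have been done upstream.

    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Per-pixel Normalized Difference Vegetation Index from co-registered
        near-infrared and red reflectance maps."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)

    # Toy 2x2 reflectance maps: healthy vegetation (high NIR) vs bare soil.
    nir = np.array([[0.45, 0.50], [0.20, 0.18]])
    red = np.array([[0.08, 0.06], [0.15, 0.16]])
    print(ndvi(nir, red))   # vegetation pixels around 0.7-0.8, soil pixels near 0.1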
The Infinitesimal Jackknife with Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.
2012-01-01
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A. F.; Jacobs, C. S.
2011-01-01
The standard VLBI analysis models measurement noise as purely thermal errors described by uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify the intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for the clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set, R', T, and DVFref, forms a realistic truth set and therefore can be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, calculating and delineating the differences between DVFs, two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice, and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA was evaluated using the head and neck case. © The Author(s) 2014.
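A minimal sketch of the two analysis steps described above, a local error-magnitude map and a cumulative error histogram computed from the difference between a reference and a test DVF, is given below in Python. The array shapes, the toy phantom and the structure mask are assumptions for illustration, not the authors' phantoms.

    import numpy as np

    def dvf_error_analysis(dvf_ref, dvf_test, mask=None):
        """Compare a test DVF against a reference DVF.
        dvf_ref, dvf_test: arrays of shape (nz, ny, nx, 3) with displacements in mm;
        mask: optional boolean array selecting one anatomical structure.
        Returns the per-voxel error magnitude map (local analysis) and a cumulative
        error histogram (global analysis)."""
        err = np.linalg.norm(dvf_test - dvf_ref, axis=-1)    # local error magnitude (mm)
        voxels = err[mask] if mask is not None else err.ravel()
        bins = np.linspace(0.0, voxels.max() + 1e-6, 51)
        hist, edges = np.histogram(voxels, bins=bins)
        cumulative = np.cumsum(hist) / voxels.size           # P(error <= threshold)
        return err, edges[1:], cumulative

    # Toy example: a 4x4x4 phantom whose test DVF is offset by 2 mm in x inside a "structure".
    dvf_ref = np.zeros((4, 4, 4, 3))
    dvf_test = dvf_ref.copy()
    structure = np.zeros((4, 4, 4), dtype=bool)
    structure[1:3, 1:3, 1:3] = True
    dvf_test[structure] += np.array([2.0, 0.0, 0.0])
    err_map, thresholds, cdf = dvf_error_analysis(dvf_ref, dvf_test, mask=structure)
    print(err_map.max(), cdf[-1])   # 2.0 mm maximal error; the CDF reaches 1.0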
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry is the measurement of all parts of the human body surface, and the measured data form the basis for the analysis and study of the human body, the establishment and modification of garment sizes, and the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analysed by examining the error frequencies and applying the analysis-of-variance method from mathematical statistics. The paper also determines the accuracy of the measured data and the difficulty of measuring particular parts of the human body, further studies the causes of data errors, and summarises the key points for minimising errors as far as possible. By analysing the measured data on the basis of error frequency, the paper provides reference material that may help promote the development of the garment industry.
Error Analysis in Mathematics. Technical Report #1012
ERIC Educational Resources Information Center
Lai, Cheng-Fei
2012-01-01
Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters, from which the expectation and covariance matrix of the 3D point location are obtained; these constitute the uncertainty region of the point location. We then trace the error propagation of the primitive input errors through the stereo system and the whole analysis chain, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
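The paper's own derivation (five rotated input parameters plus the midpoint method) is not reproduced here, but the flavor of first-order error propagation in stereo can be seen in the simpler rectified two-camera case, where depth is Z = fB/d and disparity noise propagates as sigma_Z ~ (Z^2/(fB)) * sigma_d. The Python sketch below uses invented numbers and is a simplified illustration, not the authors' method.

    import numpy as np

    def depth_and_uncertainty(f_px, baseline_m, disparity_px, sigma_d_px):
        """First-order propagation of disparity noise to depth for a rectified
        stereo pair: Z = f*B/d, so sigma_Z ~= (Z**2 / (f*B)) * sigma_d."""
        Z = f_px * baseline_m / disparity_px
        sigma_Z = (Z ** 2) / (f_px * baseline_m) * sigma_d_px
        return Z, sigma_Z

    # Toy numbers: 1000 px focal length, 0.2 m baseline, 0.25 px matching noise.
    for d in (50.0, 10.0, 5.0):
        Z, sZ = depth_and_uncertainty(1000.0, 0.2, d, 0.25)
        print(f"disparity {d:5.1f} px -> Z = {Z:5.1f} m, sigma_Z = {sZ:5.2f} m")

The quadratic growth of sigma_Z with Z is precisely why loose bounds from the ideal intersection model matter most for distant points.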
Patient safety education to change medical students' attitudes and sense of responsibility.
Roh, Hyerin; Park, Seok Ju; Kim, Taekjoong
2015-01-01
This study examined changes in the perceptions and attitudes as well as the sense of individual and collective responsibility in medical students after they received patient safety education. A three-day patient safety curriculum was implemented for third-year medical students shortly before entering their clerkship. Before and after training, we administered a questionnaire, which was analysed quantitatively. Additionally, we asked students to answer questions about their expected behaviours in response to two case vignettes. Their answers were analysed qualitatively. There was improvement in students' concepts of patient safety after training. Before training, they showed good comprehension of the inevitability of error, but most students blamed individuals for errors and expressed a strong sense of individual responsibility. After training, students increasingly attributed errors to system dysfunction and reported more self-confidence in speaking up about colleagues' errors. However, due to the hierarchical culture, students still described difficulties communicating with senior doctors. Patient safety education effectively shifted students' attitudes towards systems-based thinking and increased their sense of collective responsibility. Strategies for improving superior-subordinate communication within a hierarchical culture should be added to the patient safety curriculum.
CPOE in Iran--a viable prospect? Physicians' opinions on using CPOE in an Iranian teaching hospital.
Kazemi, Alireza; Ellenius, Johan; Tofighi, Shahram; Salehi, Aref; Eghbalian, Fatemeh; Fors, Uno G
2009-03-01
In recent years, the theory that on-line clinical decision support systems can improve patients' safety among hospitalised individuals has gained greater acceptance. However, the feasibility of implementing such a system in a middle- or low-income country has rarely been studied. Understanding the current prescription process and a proper needs assessment of prescribers can act as the key to successful implementation. The aim of this study was to explore physicians' opinions on the current prescription process, and the expected benefits and perceived obstacles to employing Computerised Physician Order Entry in an Iranian teaching hospital. Initially, the interview guideline was developed through focus group discussions with eight experts. Then semi-structured interviews were held with 19 prescribers. After verbatim transcription, inductive thematic analysis was performed on the empirical data. Forty hours of onlooker observations were performed in different wards to explore the current prescription process. The current prescription process was identified as a physician-centred, top-down model, in which prescribers were found to rely mostly on their memories and to be overconfident. Some errors may occur during the various paper-based registrations, transcriptions and transfers. Physician opinions on Computerised Physician Order Entry were categorised into expected benefits and perceived obstacles. Confidentiality issues, reduction of medication errors and educational benefits were identified as three themes in the expected benefits category. High cost, social and cultural barriers, data entry time and problems with technical support emerged as four themes in the perceived obstacles category. The current prescription process has a high possibility of medication errors. Although there are different barriers confronting the implementation and continuation of Computerised Physician Order Entry in Iranian hospitals, physicians are willing to use such systems if they provide significant benefits. A pilot study in a limited setting and a comprehensive analysis of health outcomes and economic indicators should be performed to assess the merits of introducing Computerised Physician Order Entry with decision support capabilities in Iran.
Matharoo, Manmeet; Haycock, Adam; Sevdalis, Nick; Thomas-Gibson, Siwan
2014-12-14
To investigate whether novel, non-technical skills training for Bowel Cancer Screening (BCS) endoscopy teams enhanced patient safety knowledge and attitudes. A novel endoscopy team training intervention for BCS teams was developed and evaluated as a pre-post intervention study. Four multi-disciplinary BCS teams constituting BCS endoscopist(s), specialist screening practitioners, endoscopy nurses and administrative staff (A) from English BCS training centres participated. No patients were involved in this study. Expert multidisciplinary faculty delivered a single day's training utilising real clinical examples. Pre and post-course evaluation comprised participants' patient safety awareness, attitudes, and knowledge. Global course evaluations were also collected. Twenty-three participants attended and their patient safety knowledge improved significantly from 43%-55% (P ≤ 0.001) following the training intervention. 12/41 (29%) of the safety attitudes items significantly improved in the areas of perceived patient safety knowledge and awareness. The remaining safety attitude items: perceived influence on patient safety, attitudes towards error management, error management actions and personal views following an error were unchanged following training. Both qualitative and quantitative global course evaluations were positive: 21/23 (91%) participants strongly agreed/agreed that they were satisfied with the course. Qualitative evaluation included mandating such training for endoscopy teams outside BCS and incorporating team training within wider endoscopy training. Limitations of the study include no measure of increased patient safety in clinical practice following training. A novel comprehensive training package addressing patient safety, non-technical skills and adverse event analysis was successful in improving multi-disciplinary teams' knowledge and safety attitudes.
Matharoo, Manmeet; Haycock, Adam; Sevdalis, Nick; Thomas-Gibson, Siwan
2014-01-01
AIM: To investigate whether novel, non-technical skills training for Bowel Cancer Screening (BCS) endoscopy teams enhanced patient safety knowledge and attitudes. METHODS: A novel endoscopy team training intervention for BCS teams was developed and evaluated as a pre-post intervention study. Four multi-disciplinary BCS teams constituting BCS endoscopist(s), specialist screening practitioners, endoscopy nurses and administrative staff (A) from English BCS training centres participated. No patients were involved in this study. Expert multidisciplinary faculty delivered a single day’s training utilising real clinical examples. Pre and post-course evaluation comprised participants’ patient safety awareness, attitudes, and knowledge. Global course evaluations were also collected. RESULTS: Twenty-three participants attended and their patient safety knowledge improved significantly from 43%-55% (P ≤ 0.001) following the training intervention. 12/41 (29%) of the safety attitudes items significantly improved in the areas of perceived patient safety knowledge and awareness. The remaining safety attitude items: perceived influence on patient safety, attitudes towards error management, error management actions and personal views following an error were unchanged following training. Both qualitative and quantitative global course evaluations were positive: 21/23 (91%) participants strongly agreed/agreed that they were satisfied with the course. Qualitative evaluation included mandating such training for endoscopy teams outside BCS and incorporating team training within wider endoscopy training. Limitations of the study include no measure of increased patient safety in clinical practice following training. CONCLUSION: A novel comprehensive training package addressing patient safety, non-technical skills and adverse event analysis was successful in improving multi-disciplinary teams’ knowledge and safety attitudes. PMID:25516665
Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C
2017-02-15
Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
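A toy classifier along the lines of the four error categories described above is sketched below in Python. The decision rules, in particular the factor-of-ten test for an implied decimal point and the 1 ng/mL cutoff for a nonsignificant change, are simplifications assumed for illustration and are not the SEER review logic itself.

    def classify_psa_discrepancy(recorded, documented):
        """Rough classification of a PSA coding discrepancy (values in ng/mL).
        'Implied decimal' means the recorded value is 10x the documented value,
        e.g. 125 recorded for a documented 12.5."""
        if recorded == documented:
            return "correct"
        if abs(recorded - documented * 10) < 1e-9:
            return "implied decimal point error"
        if abs(recorded - documented) < 1.0:
            return "nonsignificant change (<1 ng/mL)"
        return "abstraction or coding error"

    print(classify_psa_discrepancy(6.5, 6.5))     # correct
    print(classify_psa_discrepancy(125.0, 12.5))  # implied decimal point error
    print(classify_psa_discrepancy(6.9, 6.5))     # nonsignificant change
    print(classify_psa_discrepancy(3.2, 9.7))     # abstraction or coding error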
Noel, Camille E; Gutti, Veerarajesh; Bosch, Walter; Mutic, Sasa; Ford, Eric; Terezakis, Stephanie; Santanam, Lakshmi
2014-04-01
This study aimed to quantify the potential impact of Integrating the Healthcare Enterprise-Radiation Oncology Quality Assurance with Plan Veto (QAPV) on the patient safety of external beam radiation therapy (RT) operations. An institutional database of events (errors and near-misses) was used to evaluate the ability of QAPV to prevent clinically observed events. We analyzed reported events that were related to Digital Imaging and Communications in Medicine RT plan parameter inconsistencies between the intended treatment (on the treatment planning system) and the delivered treatment (on the treatment machine). Critical Digital Imaging and Communications in Medicine RT plan parameters were identified. Each event was scored for importance using the Failure Mode and Effects Analysis methodology. Potential error occurrence (frequency) was derived from the collected event data, along with the potential event severity and the probability of detection with and without the theoretical implementation of the QAPV plan comparison check. Failure Mode and Effects Analysis Risk Priority Numbers (RPNs) with and without QAPV were compared to quantify the potential benefit of clinical implementation of QAPV. The implementation of QAPV could reduce the RPN values for 15 of the 22 (71%) evaluated parameters, with an overall average reduction in RPN of 68 (range, 0-216). For the 6 high-risk parameters (>200), the average reduction in RPN value was 163 (range, 108-216). The RPN value reduction for the intermediate-risk (200 > RPN > 100) parameters ranged from 0 to 140. With QAPV, the largest RPN value for "Beam Meterset" was reduced from 324 to 108. The maximum reduction in RPN value was for Beam Meterset (216, 66.7%), whereas the maximum percentage reduction was for Cumulative Meterset Weight (80, 88.9%). This analysis quantifies the value of the Integrating the Healthcare Enterprise-Radiation Oncology QAPV implementation in clinical workflow. We demonstrate that although QAPV does not provide a comprehensive solution for error prevention in RT, it can have a significant impact on a subset of the most severe clinically observed events. Copyright © 2014 Elsevier Inc. All rights reserved.
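For readers unfamiliar with the methodology, the Risk Priority Number is simply the product of severity, occurrence and detectability scores. The Python sketch below reproduces the reported Beam Meterset reduction from 324 to 108; the individual factor values are invented, only the products come from the abstract.

    def rpn(severity, occurrence, detectability):
        """Failure Mode and Effects Analysis Risk Priority Number (each factor
        typically scored on a 1-10 scale; higher means worse)."""
        return severity * occurrence * detectability

    # Hypothetical scoring of one DICOM-RT parameter before and after an automated
    # plan-comparison check improves the probability of detection.
    before = rpn(severity=9, occurrence=4, detectability=9)   # 324
    after = rpn(severity=9, occurrence=4, detectability=3)    # 108
    print(before, after, before - after)                       # reduction of 216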
Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses
NASA Astrophysics Data System (ADS)
Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong
2017-04-01
Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive, and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Due to the need of fitting only a single regression model for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km-horizontal resolution and 1 h-temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The performed SAMOS approach statistically combines the in-house developed high resolution analysis and ensemble prediction system. The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
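The core SAMOS transformation described above is a per-gridpoint standardization against the INCA-derived climatology. A minimal Python sketch with toy climatological values follows; the square-root or censoring treatments often applied to precipitation, and the regression fit itself, are omitted, so this is an illustration of the anomaly step only.

    import numpy as np

    def to_standardized_anomaly(x, clim_mean, clim_sd):
        """Transform a forecast or analysis value into a standardized anomaly:
        subtract the site-specific climatological mean and divide by the
        climatological standard deviation."""
        return (x - clim_mean) / clim_sd

    def from_standardized_anomaly(z, clim_mean, clim_sd):
        """Back-transform calibrated anomalies to physical units."""
        return z * clim_sd + clim_mean

    # Toy example: 6 h precipitation climatology at one grid point (values in mm).
    clim_mean, clim_sd = 1.8, 2.4
    raw_ensemble = np.array([0.0, 0.6, 1.2, 3.5, 7.0])
    anomalies = to_standardized_anomaly(raw_ensemble, clim_mean, clim_sd)
    print(anomalies)
    print(from_standardized_anomaly(anomalies, clim_mean, clim_sd))

Because a single regression is fitted in anomaly space for the whole domain, calibrated forecasts can then be produced for any grid point by back-transforming with that point's own climatology.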
Guerlain, Stephanie; Adams, Reid B; Turrentine, F Beth; Shin, Thomas; Guo, Hui; Collins, Stephen R; Calland, J Forrest
2005-01-01
The objective of this research was to develop a digital system to archive the complete operative environment along with the assessment tools for analysis of this data, allowing prospective studies of operative performance, intraoperative errors, team performance, and communication. Ability to study this environment will yield new insights, allowing design of systems to avoid preventable errors that contribute to perioperative complications. A multitrack, synchronized, digital audio-visual recording system (RATE tool) was developed to monitor intraoperative performance, including software to synchronize data and allow assignment of independent observational scores. Cases were scored for technical performance, participants' situational awareness (knowledge of critical information), and their comfort and satisfaction with the conduct of the procedure. Laparoscopic cholecystectomy (n = 10) was studied. Technical performance of the RATE tool was excellent. The RATE tool allowed real time, multitrack data collection of all aspects of the operative environment, while permitting digital recording of the objective assessment data in a time synchronized and annotated fashion during the procedure. The mean technical performance score was 73% +/- 28% of maximum (perfect) performance. Situational awareness varied widely among team members, with the attending surgeon typically the only team member having comprehensive knowledge of critical case information. The RATE tool allows prospective analysis of performance measures such as technical judgments, team performance, and communication patterns, offers the opportunity to conduct prospective intraoperative studies of human performance, and allows for postoperative discussion, review, and teaching. This study also suggests that gaps in situational awareness might be an underappreciated source of operative adverse events. Future uses of this system will aid teaching, failure or adverse event analysis, and intervention research.
Richman, Susan D; Fairley, Jennifer; Butler, Rachel; Deans, Zandra C
2017-12-01
Evidence strongly indicates that extended RAS testing should be undertaken in mCRC patients, prior to prescribing anti-EGFR therapies. With more laboratories implementing testing, the requirement for External Quality Assurance schemes increases, thus ensuring high standards of molecular analysis. Data was analysed from 15 United Kingdom National External Quality Assessment Service (UK NEQAS) for Molecular Genetics Colorectal cancer external quality assurance (EQA) schemes, delivered between 2009 and 2016. Laboratories were provided annually with nine colorectal tumour samples for genotyping. Information on methodology and extent of testing coverage was requested, and scores given for genotyping, interpretation and clerical accuracy. There has been a sixfold increase in laboratory participation (18 in 2009 to 108 in 2016). For RAS genotyping, fewer laboratories now use Roche cobas®, pyrosequencing and Sanger sequencing, with more moving to next generation sequencing (NGS). NGS is the most commonly employed technology for BRAF and PIK3CA mutation screening. KRAS genotyping errors were seen in ≤10% laboratories, until the 2014-2015 scheme, when there was an increase to 16.7%, corresponding to a large increase in scheme participants. NRAS genotyping errors peaked at 25.6% in the first 2015-2016 scheme but subsequently dropped to below 5%. Interpretation and clerical accuracy scores have been consistently good throughout. Within this EQA scheme, we have observed that the quality of molecular analysis for colorectal cancer has continued to improve, despite changes in the required targets, the volume of testing and the technologies employed. It is reassuring to know that laboratories clearly recognise the importance of participating in EQA schemes.
Eye laterality: a comprehensive analysis in refractive surgery candidates.
Linke, Stephan J; Druchkiv, Vasyl; Steinberg, Johannes; Richard, Gisbert; Katz, Toam
2013-08-01
To explore eye laterality (higher refractive error in one eye) and its association with refractive state, spherical/astigmatic anisometropia, age and sex in refractive surgery candidates. Medical records of 12 493 consecutive refractive surgery candidates were filtered. Refractive error (subjective and cycloplegic) was measured in each subject and correlated with eye laterality. Only subjects with corrected distance visual acuity (CDVA) of >20/22 in each eye were enrolled to exclude amblyopia. Associations between eye laterality and refractive state were analysed by means of t-test, chi-squared test, Spearman's correlation and multivariate logistic regression analysis, respectively. There was no statistically significant difference in spherical equivalent between right (-3.47 ± 2.76 D) and left eyes (-3.47 ± 2.76 D, p = 0.510; Pearson's r = 0.948, p < 0.001). Subgroup analysis revealed (I) right eye laterality for anisometropia >2.5 D in myopic (-5.64 ± 2.5 D versus -4.92 ± 2.6 D; p = 0.001) and in hyperopic (4.44 ± 1.69 D versus 3.04 ± 1.79 D; p = 0.025) subjects, (II) a tendency for left eye cylindrical laterality in myopic subjects, and (III) myopic male subjects had a higher prevalence of left eye laterality. (IV) Age did not show any significant impact on laterality. Over the full refractive spectrum, this study confirmed previously described strong interocular refractive correlation but revealed a statistically significant higher rate of right eye laterality for anisometropia >2.5 D. In general, our results support the use of data from one eye only in studies of ocular refraction. © 2013 The Authors. Acta Ophthalmologica © 2013 Acta Ophthalmologica Scandinavica Foundation.
Collins, N J; Prinsen, C A C; Christensen, R; Bartels, E M; Terwee, C B; Roos, E M
2016-08-01
To conduct a systematic review and meta-analysis to synthesize evidence regarding measurement properties of the Knee injury and Osteoarthritis Outcome Score (KOOS). A comprehensive literature search identified 37 eligible papers evaluating KOOS measurement properties in participants with knee injuries and/or osteoarthritis (OA). Methodological quality was evaluated using the COSMIN checklist. Where possible, meta-analysis of extracted data was conducted for all studies and stratified by age and knee condition; otherwise narrative synthesis was performed. KOOS has adequate internal consistency, test-retest reliability and construct validity in young and old adults with knee injuries and/or OA. The ADL subscale has better content validity for older patients and Sport/Rec for younger patients with knee injuries, while the Pain subscale is more relevant for painful knee conditions. The five-factor structure of the original KOOS is unclear. There is some evidence that the KOOS subscales demonstrate sufficient unidimensionality, but this requires confirmation. Although measurement error requires further evaluation, the minimal detectable change for KOOS subscales ranges from 14.3 to 19.6 for younger individuals, and ≥20 for older individuals. Evidence of responsiveness comes from larger effect sizes following surgical (especially total knee replacement) than non-surgical interventions. KOOS demonstrates adequate content validity, internal consistency, test-retest reliability, construct validity and responsiveness for age- and condition-relevant subscales. Structural validity, cross-cultural validity and measurement error require further evaluation, as well as construct validity of KOOS Physical function Short form. Suggested order of subscales for different knee conditions can be applied in hierarchical testing of endpoints in clinical trials. PROSPERO (CRD42011001603). Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Detailed modeling of the statistical uncertainty of Thomson scattering measurements
NASA Astrophysics Data System (ADS)
Morton, L. A.; Parke, E.; Den Hartog, D. J.
2013-11-01
The uncertainty of electron density and temperature fluctuation measurements is determined by statistical uncertainty introduced by multiple noise sources. In order to quantify these uncertainties precisely, a simple but comprehensive model was made of the noise sources in the MST Thomson scattering system and of the resulting variance in the integrated scattered signals. The model agrees well with experimental and simulated results. The signal uncertainties are then used by our existing Bayesian analysis routine to find the most likely electron temperature and density, with confidence intervals. In the model, photonic noise from scattered light and plasma background light is multiplied by the noise enhancement factor (F) of the avalanche photodiode (APD). Electronic noise from the amplifier and digitizer is added. The amplifier response function shapes the signal and induces correlation in the noise. The data analysis routine fits a characteristic pulse to the digitized signals from the amplifier, giving the integrated scattered signals. A finite digitization rate loses information and can cause numerical integration error. We find a formula for the variance of the scattered signals in terms of the background and pulse amplitudes, and three calibration constants. The constants are measured easily under operating conditions, resulting in accurate estimation of the scattered signals' uncertainty. We measure F ≈ 3 for our APDs, in agreement with other measurements for similar APDs. This value is wavelength-independent, simplifying analysis. The correlated noise we observe is reproduced well using a Gaussian response function. Numerical integration error can be made negligible by using an interpolated characteristic pulse, allowing digitization rates as low as the detector bandwidth. The effect of background noise is also determined.
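The paper's exact variance formula is not reproduced in the abstract; a plausible form consistent with the noise sources listed (photonic noise from scattered plus background light, amplified by the APD excess noise factor F, plus electronic noise) would be
\[
\mathrm{Var}(S) \;\approx\; c_{1}\, F\,\bigl(S + c_{2}\, B\bigr) \;+\; c_{3},
\]
where S is the integrated scattered-signal amplitude, B the background amplitude, and c1, c2, c3 the three calibration constants. This form is an assumption for illustration, not the authors' published expression.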
Raugei, Marco; Sgouridis, Sgouris; Murphy, David; ...
2017-01-01
A recent paper by Ferroni and Hopkirk (2016) asserts that the ERoEI (also referred to as EROI) of photovoltaic (PV) systems is so low that they actually act as net energy sinks, rather than delivering energy to society. Such a claim, if accurate, would call into question many energy investment decisions. In the same paper, a comparison is also drawn between PV and nuclear electricity. We have carefully analysed this paper, and found methodological inconsistencies and calculation errors that, in combination, render its conclusions not scientifically sound. Ferroni and Hopkirk adopt 'extended' boundaries for their analysis of PV without acknowledging that such a choice of boundaries makes their results incompatible with those for all other technologies that have been analysed using more conventional boundaries, including nuclear energy, with which the authors engage in multiple inconsistent comparisons. In addition, they use out-dated information, make invalid assumptions on PV specifications and other key parameters, and commit calculation errors, including double counting. Here, in this paper, we provide revised EROI calculations for PV electricity in Switzerland, adopting both conventional and 'extended' system boundaries, to contrast with their results, which points to an order-of-magnitude underestimate of the EROI of PV in Switzerland by Ferroni and Hopkirk.
A framework for reporting on human factor/usability studies of health information technologies.
Peute, Linda W; Driest, Keiko F; Marcilly, Romaric; Bras Da Costa, Sabrina; Beuscart-Zephir, Marie-Catherine; Jaspers, Monique W M
2013-01-01
Increasingly, studies are being published on the potential negative effects of introducing poorly designed Health Information Technology (HIT) into clinical settings, relating to technology-induced errors and adverse events. Academic research on HIT design and evaluation is an extremely important source of information in providing new insights into factors contributing to successful system (re)design efforts, system user-friendliness and usability issues, and safety-critical aspects of HIT design. However, these studies have been inconsistent and insufficiently comprehensive in their reporting, complicating the appraisal of outcomes, generalizability of study findings, meta-analysis and harmonization of the available evidence. To improve the identification of types of use errors and safety-related issues regarding the design and implementation of HIT, consensus on the issues to be reported in scientific publications is a necessary step forward. This study presents the first approach to a framework providing a set of principles to follow for comprehensive and unambiguous reporting of HIT design and usability evaluation studies, with the objective of reducing variation and improving the reporting quality and proper indexation of these studies. This framework may be helpful in expanding the knowledge base, not only concerning the application of Human Factors (HF)/Usability studies of HIT, but also concerning how to (re)design and implement effective, efficient and safe HIT.
Palta, Jatinder R; Liu, Chihray; Li, Jonathan G
2008-01-01
The traditional prescriptive quality assurance (QA) programs that attempt to ensure the safety and reliability of traditional external beam radiation therapy are limited in their applicability to such advanced radiation therapy techniques as three-dimensional conformal radiation therapy, intensity-modulated radiation therapy, inverse treatment planning, stereotactic radiosurgery/radiotherapy, and image-guided radiation therapy. The conventional QA paradigm, illustrated by the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 40 (TG-40) report, consists of developing a consensus menu of tests and device performance specifications from a generic process model that is assumed to apply to all clinical applications of the device. The complexity, variation in practice patterns, and level of automation of high-technology radiotherapy renders this "one-size-fits-all" prescriptive QA paradigm ineffective or cost prohibitive if the high-probability error pathways of all possible clinical applications of the device are to be covered. The current approaches to developing comprehensive prescriptive QA protocols can be prohibitively time consuming and cost ineffective and may sometimes fail to adequately safeguard patients. It therefore is important to evaluate more formal error mitigation and process analysis methods of industrial engineering to more optimally focus available QA resources on process components that have a significant likelihood of compromising patient safety or treatment outcomes.
Exome Sequence Analysis of 14 Families With High Myopia.
Kloss, Bethany A; Tompson, Stuart W; Whisenhunt, Kristina N; Quow, Krystina L; Huang, Samuel J; Pavelec, Derek M; Rosenberg, Thomas; Young, Terri L
2017-04-01
To identify causal gene mutations in 14 families with autosomal dominant (AD) high myopia using exome sequencing. Select individuals from 14 large Caucasian families with high myopia were exome sequenced. Gene variants were filtered to identify potential pathogenic changes. Sanger sequencing was used to confirm variants in original DNA, and to test for disease cosegregation in additional family members. Candidate genes and chromosomal loci previously associated with myopic refractive error and its endophenotypes were comprehensively screened. In 14 high myopia families, we identified 73 rare and 31 novel gene variants as candidates for pathogenicity. In seven of these families, two of the novel and eight of the rare variants were within known myopia loci. A total of 104 heterozygous nonsynonymous rare variants in 104 genes were identified in 10 out of 14 probands. Each variant cosegregated with affection status. No rare variants were identified in genes known to cause myopia or in genes closest to published genome-wide association study association signals for refractive error or its endophenotypes. Whole exome sequencing was performed to determine gene variants implicated in the pathogenesis of AD high myopia. This study provides new genes for consideration in the pathogenesis of high myopia, and may aid in the development of genetic profiling of those at greatest risk for attendant ocular morbidities of this disorder.
NASA Technical Reports Server (NTRS)
Langel, Robert A.; Sabaka, T. J.; Baldwin, R. T.
1991-01-01
Two suites of geomagnetic field models were generated at the request of Los Alamos National Laboratory in connection with Strategic Defense Initiative (SDI) research. The first is a progression of five models incorporating MAGSAT data and data from a sequence of batches as a priori information. The batch sequence is: post-1979.5 observatory data, post-1980 land survey and selected aeromagnetic and marine survey data, a special White Sands (NM) area survey by Project Magnet with some additional post-1980 marine survey data, and finally DE-2 satellite data. These models are of degree and order 13 in their main field terms, and degree and order 10 in their first-derivative temporal terms. The second suite consists of four models based solely upon post-1983.5 observatory and survey data. They are of degree and order 10 in the main field and 8 in a first-degree Taylor series. A comprehensive error analysis was applied to both series, which accounted for error sources such as the truncated core and crustal fields, and the neglected Sq and low-degree crustal fields. Comparison of the power spectrum of the MGST (10/81) model with those of this series shows good agreement.
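For context, models of this kind expand the internal geomagnetic potential in spherical harmonics, truncated here at degree and order 13 (main field) and 10 (secular variation):
\[
V(r,\theta,\phi) = a \sum_{n=1}^{N} \left(\frac{a}{r}\right)^{n+1} \sum_{m=0}^{n} \bigl(g_{n}^{m}\cos m\phi + h_{n}^{m}\sin m\phi\bigr)\, P_{n}^{m}(\cos\theta),
\qquad \mathbf{B} = -\nabla V,
\]
where a is the reference Earth radius and P_n^m are the Schmidt semi-normalized associated Legendre functions; secular variation enters through the time derivatives of the Gauss coefficients g_n^m and h_n^m. This is the standard internal-field expansion underlying such models, stated here only as background.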
Salem, Rany M; Wessel, Jennifer; Schork, Nicholas J
2005-03-01
Interest in the assignment and frequency analysis of haplotypes in samples of unrelated individuals has increased immeasurably as a result of the emphasis placed on haplotype analyses by, for example, the International HapMap Project and related initiatives. Although there are many available computer programs for haplotype analysis applicable to samples of unrelated individuals, many of these programs have limitations and/or very specific uses. In this paper, the key features of available haplotype analysis software for use with unrelated individuals, as well as pooled DNA samples from unrelated individuals, are summarised. Programs for haplotype analysis were identified through keyword searches on PUBMED and various internet search engines, a review of citations from retrieved papers and personal communications, up to June 2004. Priority was given to functioning computer programs, rather than theoretical models and methods. The available software was considered in light of a number of factors: the algorithm(s) used, algorithm accuracy, assumptions, the accommodation of genotyping error, implementation of hypothesis testing, handling of missing data, software characteristics and web-based implementations. Review papers comparing specific methods and programs are also summarised. Forty-six haplotyping programs were identified and reviewed. The programs were divided into two groups: those designed for individual genotype data (a total of 43 programs) and those designed for use with pooled DNA samples (a total of three programs). The accuracy of programs using various criteria are assessed and the programs are categorised and discussed in light of: algorithm and method, accuracy, assumptions, genotyping error, hypothesis testing, missing data, software characteristics and web implementation. Many available programs have limitations (eg some cannot accommodate missing data) and/or are designed with specific tasks in mind (eg estimating haplotype frequencies rather than assigning most likely haplotypes to individuals). It is concluded that the selection of an appropriate haplotyping program for analysis purposes should be guided by what is known about the accuracy of estimation, as well as by the limitations and assumptions built into a program.
Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Munoz, Cesar A.
2007-01-01
This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straight forward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be much easily evaluated as compared to the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
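The CF method referred to above rests on inverting the characteristic function of the decision statistic D (desired term plus MAI plus noise, averaged over the Nakagami fading). In its standard Gil-Pelaez form the bit error rate is
\[
P_{e} \;=\; \frac{1}{2} \;-\; \frac{1}{\pi}\int_{0}^{\infty} \frac{\operatorname{Im}\{\Phi_{D}(\omega)\}}{\omega}\, \mathrm{d}\omega ,
\qquad
\Phi_{D}(\omega) = \Phi_{\mathrm{signal}}(\omega)\,\Phi_{\mathrm{MAI}}(\omega)\,\Phi_{\mathrm{noise}}(\omega),
\]
where the factorization holds because the terms are independent. The paper's particular closed-form expression for the MAI characteristic function under an arbitrary pulse shape is not reproduced here.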
Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu
2009-02-01
The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate the LDSF can significantly improve the sensitivity of the result validation procedure.
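A minimal sketch of the LDSF idea, estimating a statistical mass error tolerance from high-confidence identifications after a deliberately wide database search and then filtering the remaining results, is shown below in Python. The +/- 4 SD window, the toy masses and the confidence labels are assumptions for illustration, and FTDR's actual recalibration formula is not reproduced.

    import numpy as np

    def ppm_errors(observed_mz, theoretical_mz):
        """Relative precursor mass errors in parts per million."""
        observed_mz = np.asarray(observed_mz, dtype=float)
        theoretical_mz = np.asarray(theoretical_mz, dtype=float)
        return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

    def ldsf_filter(errors_ppm, confident, n_sd=4.0):
        """LDSF-style filtration: estimate the statistical mass error tolerance from
        high-confidence identifications, then keep results inside mean +/- n_sd*SD."""
        mu = errors_ppm[confident].mean()
        sd = errors_ppm[confident].std(ddof=1)
        keep = np.abs(errors_ppm - mu) <= n_sd * sd
        return keep, mu, sd

    # Toy example: most identifications cluster near +2 ppm; one outlier sits at +40 ppm.
    theo = np.array([800.40, 1200.60, 1500.75, 900.45, 1100.55])
    obs = theo * (1 + np.array([2.1, 1.8, 2.3, 40.0, 1.9]) * 1e-6)
    err = ppm_errors(obs, theo)
    keep, mu, sd = ldsf_filter(err, confident=np.array([0, 1, 2, 4]))
    print(np.round(err, 2), keep, round(mu, 2), round(sd, 2))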
Error Analysis: Past, Present, and Future
ERIC Educational Resources Information Center
McCloskey, George
2017-01-01
This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
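The "dual-weighted" estimates mentioned above follow the usual adjoint-weighted residual pattern. Schematically, for an initial value problem u' = f(u) with numerical solution U and a quantity of interest psi . u(T), the adjoint problem and error representation read
\[
-\,\varphi' = f'(U)^{\mathsf T}\varphi, \quad \varphi(T) = \psi,
\qquad
\psi\cdot\bigl(u(T)-U(T)\bigr) \;\approx\; \int_{0}^{T} R(U)\cdot\varphi \,\mathrm{d}t,
\quad R(U) = f(U) - U',
\]
so the computable residual R(U) of the coarse (or fine) stage is weighted by the adjoint solution. This is only a schematic statement; the precise variational formulation, the two-stage bookkeeping and the treatment of error cancellation are developed in the paper and not reproduced here.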
NASA Technical Reports Server (NTRS)
Levy, G.; Brown, R. A.
1986-01-01
A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
Teaching and Learning "False Friends": A Review of Some Useful Resources
ERIC Educational Resources Information Center
Varela, Maria Luisa Roca
2011-01-01
False friends are words in two languages that are similar in form but different in meaning (e.g. English "library" "place for reading and borrowing books" vs Spanish "libreria" "bookshop"). From the point of view of EFL teaching and learning, false friends are important because they lead us to errors in L2 production and comprehension (e.g. "I am…
Error Monitoring in Speech Production: A Computational Test of the Perceptual Loop Theory.
ERIC Educational Resources Information Center
Hartsuiker, Robert J.; Kolk, Herman H. J.
2001-01-01
Tested whether an elaborated version of the perceptual loop theory (W. Levelt, 1983) and the main interruption rule was consistent with existing time course data (E. Blackmer and E. Mitton, 1991; C. Oomen and A. Postma, in press). The study suggests that including an inner loop through the speech comprehension system generates predictions that fit…
ERIC Educational Resources Information Center
Alarcon, Irma V.
2011-01-01
The present study explores knowledge of Spanish grammatical gender in both comprehension and production by heritage language speakers and second language (L2) learners, with native Spanish speakers as a baseline. Most L2 research has tended to interpret morphosyntactic variability in interlanguage production, such as errors in gender agreement, as…
ERIC Educational Resources Information Center
Wallace, Jennifer N.
2013-01-01
As education law evolves, educators are faced with difficult decisions regarding curriculum, prevention programs, and intervention strategies to use with their students. The use of evidence-based strategies for all academic skill areas, including reading, has become increasingly common in schools. Twenty-four 4th grade students participated in an…
Some personal observations on cultivating the Heliamphora
Robert R. Ziemer
1979-01-01
The following note is based on some 7 years' experience growing three species of Heliamphora - H. heterodoxa, H. nutans, and H. minor. This information is not intended to be a definitive or even a comprehensive guide to the cultivation of these species, but simply some observations on what I have found to work for me through trial and error. I have not conducted...
ERIC Educational Resources Information Center
Escarpio, Raul; Barbetta, Patricia M.
2016-01-01
This study used an alternating treatments design to compare the effects of three conditions on the reading fluency, errors, and comprehension of four, sixth-grade students with emotional and behavioral disorders (EBD) who were struggling readers. The conditions were (a) repeated readings in which participants read three times a passage of 100 or…
Reading Ability in Ethiopian Learners of Hebrew: How Important Is Phonemic Awareness?
ERIC Educational Resources Information Center
Abu-Rabia, Salim
2004-01-01
The study investigated reading errors made by Ethiopian learners of Hebrew (n=34). These newcomers to Israel, unlike other groups such as the Russian Jews, typically have low literacy skills in their first language (Amharic). Their ability to read Hebrew, as judged on a reading comprehension test, was still poor after living in Israel for seven…
ERIC Educational Resources Information Center
Gelman, Susan A.; Croft, William; Fu, Panfang; Clausner, Timothy; Gottfried, Gail
1998-01-01
Examined how object shape, taxonomic relatedness, and prior lexical knowledge influenced children's overextensions (e.g., referring to pomegranates as apples). Researchers presented items that disentangled the three factors and used a novel comprehension task where children could indicate negative exemplars. Error patterns differed by task and by…
Analytic study of the Tadoma method: background and preliminary results.
Norton, S J; Schultz, M C; Reed, C M; Braida, L D; Durlach, N I; Rabinowitz, W M; Chomsky, C
1977-09-01
Certain deaf-blind persons have been taught, through the Tadoma method of speechreading, to use vibrotactile cues from the face and neck to understand speech. This paper reports the results of preliminary tests of the speechreading ability of one adult Tadoma user. The tests were of four major types: (1) discrimination of speech stimuli; (2) recognition of words in isolation and in sentences; (3) interpretation of prosodic and syntactic features in sentences; and (4) comprehension of written (Braille) and oral speech. Words in highly contextual environments were much better perceived than were words in low-context environments. Many of the word errors involved phonemic substitutions which shared articulatory features with the target phonemes, with a higher error rate for vowels than consonants. Relative to performance on word-recognition tests, performance on some of the discrimination tests was worse than expected. Perception of sentences appeared to be mildly sensitive to rate of talking and to speaker differences. Results of the tests on perception of prosodic and syntactic features, while inconclusive, indicate that many of the features tested were not used in interpreting sentences. On an English comprehension test, a higher score was obtained for items administered in Braille than through oral presentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schimpe, Michael; von Kuepach, M. E.; Naumann, M.
2018-01-12
For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures, as well as the increased cycling degradation at high state of charge, are calculated separately. For parameterization, a lifetime test study is conducted including storage and cycle tests. Additionally, the model is validated through a dynamic current profile based on real-world application in a stationary energy storage system, revealing its accuracy. Tests for validation are continued for up to 114 days after the longest parametrization tests. In conclusion, the model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity at the end of testing, and the maximum relative model error is below 21%.
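To make the structure of such a semi-empirical model concrete, here is a hedged sketch of a generic capacity-fade form with separate calendar and cycle terms, an Arrhenius temperature factor, and a state-of-charge stress factor. The coefficients and functional forms are invented for illustration and are not the parameters identified in the paper.

```python
# Illustrative semi-empirical capacity-fade sketch (assumed, generic coefficients).
import numpy as np

def capacity_loss(t_days, fec, temp_K, soc):
    """Relative capacity loss from calendar time (days), full equivalent cycles,
    cell temperature (K), and average state of charge (0..1)."""
    k_cal = 1e-3 * np.exp(-3000.0 * (1.0 / temp_K - 1.0 / 298.15))  # Arrhenius calendar term
    k_cyc = 5e-5 * (1.0 + 0.5 * soc)                                 # SOC stress on cycling term
    return k_cal * np.sqrt(t_days) + k_cyc * np.sqrt(fec)

print(capacity_loss(t_days=365, fec=300, temp_K=313.15, soc=0.8))
```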
Effects of irrelevant sounds on phonological coding in reading comprehension and short-term memory.
Boyle, R; Coltheart, V
1996-05-01
The effects of irrelevant sounds on reading comprehension and short-term memory were studied in two experiments. In Experiment 1, adults judged the acceptability of written sentences during irrelevant speech, accompanied and unaccompanied singing, instrumental music, and in silence. Sentences varied in syntactic complexity: Simple sentences contained a right-branching relative clause (The applause pleased the woman that gave the speech) and syntactically complex sentences included a centre-embedded relative clause (The hay that the farmer stored fed the hungry animals). Unacceptable sentences either sounded acceptable (The dog chased the cat that eight up all his food) or did not (The man praised the child that sight up his spinach). Decision accuracy was impaired by syntactic complexity but not by irrelevant sounds. Phonological coding was indicated by increased errors on unacceptable sentences that sounded correct. These error rates were unaffected by irrelevant sounds. Experiment 2 examined effects of irrelevant sounds on ordered recall of phonologically similar and dissimilar word lists. Phonological similarity impaired recall. Irrelevant speech reduced recall but did not interact with phonological similarity. The results of these experiments question assumptions about the relationship between speech input and phonological coding in reading and the short-term store.
Freye, Chris E; Fitz, Brian D; Billingsley, Matthew C; Synovec, Robert E
2016-06-01
The chemical composition and several physical properties of RP-1 fuels were studied using comprehensive two-dimensional (2D) gas chromatography (GC×GC) coupled with flame ionization detection (FID). A "reversed column" GC×GC configuration was implemented with an RTX-wax column as the first dimension (¹D) and an RTX-1 column as the second dimension (²D). Modulation was achieved using a high-temperature diaphragm valve mounted directly in the oven. Using leave-one-out cross-validation (LOOCV), the summed GC×GC-FID signal of three compound-class-selective 2D regions (alkanes, cycloalkanes, and aromatics) was regressed against previously measured ASTM-derived values for these compound classes, yielding root mean square errors of cross-validation (RMSECV) of 0.855, 0.734, and 0.530 mass%, respectively. For comparison, using partial least squares (PLS) analysis with LOOCV, the GC×GC-FID signal of the entire 2D separations was regressed against the same ASTM values for the three compound classes (alkanes, cycloalkanes, and aromatics), yielding RMSECV values of 1.52, 2.76, and 0.945 mass%, respectively. Additionally, a more detailed PLS analysis was undertaken of the compound classes (n-alkanes, iso-alkanes, mono-, di-, and tri-cycloalkanes, and aromatics) and of physical properties previously determined by ASTM methods (such as net heat of combustion, hydrogen content, density, kinematic viscosity, sustained boiling temperature, and vapor rise temperature). Results from these PLS studies using the relatively simple-to-use and inexpensive GC×GC-FID instrumental platform are compared to previously reported results using the GC×GC-TOFMS instrumental platform. Copyright © 2016 Elsevier B.V. All rights reserved.
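A hedged sketch of the chemometric workflow described (PLS regression against a reference property with leave-one-out cross-validation and RMSECV as the figure of merit) is given below using scikit-learn; the data are synthetic and only the workflow mirrors the abstract.

```python
# PLS with leave-one-out cross-validation and RMSECV on synthetic data (illustrative only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 200))                        # e.g. unfolded GCxGC-FID signals
y = X[:, :20].sum(axis=1) + rng.normal(0, 0.1, 12)    # e.g. an aromatics mass% reference value

errors = []
for train, test in LeaveOneOut().split(X):
    model = PLSRegression(n_components=3).fit(X[train], y[train])
    errors.append(float(model.predict(X[test])[0, 0] - y[test][0]))
print("RMSECV:", np.sqrt(np.mean(np.square(errors))))
```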
Crown, Scott B; Kelleher, Joanne K; Rouf, Rosanne; Muoio, Deborah M; Antoniewicz, Maciek R
2016-10-01
In many forms of cardiomyopathy, alterations in energy substrate metabolism play a key role in disease pathogenesis. Stable isotope tracing in rodent heart perfusion systems can be used to determine cardiac metabolic fluxes, namely those relative fluxes that contribute to pyruvate, the acetyl-CoA pool, and pyruvate anaplerosis, which are critical to cardiac homeostasis. Methods have previously been developed to interrogate these relative fluxes using isotopomer enrichments of measured metabolites and algebraic equations to determine a predefined metabolic flux model. However, this approach is exquisitely sensitive to measurement error, thus precluding accurate relative flux parameter determination. In this study, we applied a novel mathematical approach to determine relative cardiac metabolic fluxes using 13C-metabolic flux analysis (13C-MFA) aided by multiple tracer experiments and integrated data analysis. Using 13C-MFA, we validated a metabolic network model to explain myocardial energy substrate metabolism. Four different 13C-labeled substrates were queried (i.e., glucose, lactate, pyruvate, and oleate) based on a previously published study. We integrated the analysis of the complete set of isotopomer data gathered from these mouse heart perfusion experiments into a single comprehensive network model that delineates substrate contributions to both pyruvate and acetyl-CoA pools at a greater resolution than that offered by traditional methods using algebraic equations. To our knowledge, this is the first rigorous application of 13C-MFA to interrogate data from multiple tracer experiments in the perfused heart. We anticipate that this approach can be used widely to study energy substrate metabolism in this and other similar biological systems. Copyright © 2016 the American Physiological Society.
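As a toy illustration of why fitting all tracer experiments simultaneously by least squares is more robust than inverting algebraic equations from a single experiment, the sketch below treats a product enrichment as a linear mix of substrate enrichments weighted by relative fluxes and fits all experiments at once. The three-substrate setup and all numbers are invented and are not the authors' network model.

```python
# Toy integrated-fit illustration: relative substrate contributions from several tracer experiments.
import numpy as np
from scipy.optimize import lsq_linear

# Rows = tracer experiments, columns = substrates (e.g. glucose, lactate, oleate);
# entries are the substrate label enrichments in each experiment (synthetic values).
A = np.array([[0.99, 0.00, 0.00],
              [0.00, 0.99, 0.00],
              [0.00, 0.00, 0.99],
              [0.50, 0.50, 0.00]])
true_f = np.array([0.3, 0.2, 0.5])                                  # "true" relative contributions
b = A @ true_f + np.random.default_rng(2).normal(0, 0.01, 4)        # noisy product enrichments

fit = lsq_linear(A, b, bounds=(0.0, 1.0))   # bounded least squares over all experiments at once
f = fit.x / fit.x.sum()                      # normalize to relative fractions
print("estimated relative contributions:", f)
```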
Monico, Carla G; Rossetti, Sandro; Schwanz, Heidi A; Olson, Julie B; Lundquist, Patrick A; Dawson, D Brian; Harris, Peter C; Milliner, Dawn S
2007-06-01
Mutations in AGXT, a locus mapped to 2q37.3, cause deficiency of liver-specific alanine:glyoxylate aminotransferase (AGT), the metabolic error in type 1 primary hyperoxaluria (PH1). Genetic analysis of 55 unrelated probands with PH1 from the Mayo Clinic Hyperoxaluria Center, to date the largest cohort with complete sequencing across the entire AGXT coding region and documented hepatic AGT deficiency, suggests that a molecular diagnosis (identification of two disease alleles) is feasible in 96% of patients. Unique to this PH1 population were the higher frequency of G170R, the most common AGXT mutation, accounting for 37% of alleles, and detection of a new 3' end deletion (Ex 11_3'UTR del). A described frameshift mutation (c.33_34insC) occurred with the next highest frequency (11%), followed by F152I and G156R (frequencies of 6.3 and 4.5%, respectively), both surpassing the frequency (2.7%) of I244T, the previously reported third most common pathogenic change. These sequencing data indicate that AGXT is even more variable than formerly believed, with 28 new variants (21 mutations and seven polymorphisms) detected, with the highest frequencies in exons 1, 4, and 7. When limited to these three exons, molecular analysis sensitivity was 77%, compared with 98% for whole-gene sequencing. These are the first data in support of comprehensive AGXT analysis for the diagnosis of PH1, obviating a liver biopsy in most well-characterized patients. Also reported here is previously unavailable evidence for the pathogenic basis of all AGXT missense variants, including evolutionary conservation data in a multisequence alignment and use of a normal control population.
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1997-01-01
We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a testbed for the use of the distortion representation of forecast errors, (2) act as one means of validating the GEOS data assimilation system and (3) help to describe the impact of the ERS 1 scatterometer data.
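A minimal, hedged sketch of the distortion idea follows: find the single rigid shift of the forecast field that best matches the verifying analysis (a crude stand-in for the phase error) and treat what remains after shifting as the residual error. The actual method uses a smoothly varying displacement field with amplification/bias correction; this only illustrates the decomposition on synthetic fields.

```python
# Crude phase-error / residual decomposition via a best rigid shift (synthetic fields).
import numpy as np

rng = np.random.default_rng(3)
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 64), np.linspace(0, 2 * np.pi, 64))
analysis = np.sin(x) * np.cos(y)                       # "truth" field (e.g. an SLP anomaly)
forecast = np.roll(analysis, shift=(3, -2), axis=(0, 1)) + 0.05 * rng.normal(size=x.shape)

# Scan small integer shifts; the minimizing shift is the estimated phase error.
best = min(
    ((np.mean((np.roll(forecast, (-i, -j), axis=(0, 1)) - analysis) ** 2), (i, j))
     for i in range(-5, 6) for j in range(-5, 6))
)
residual_mse, shift = best
print("estimated phase error (grid points):", shift)
print("total mse:", np.mean((forecast - analysis) ** 2), " residual mse:", residual_mse)
```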