A Framework for Performing V&V within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
The Application of V&V within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward
1996-01-01
Verification and Validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In reuse-based software engineering, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In order to bring the effectiveness of V&V to bear within reuse-based software engineering, V&V must be incorporated within the domain engineering process.
The Need for V&V in Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
V&V is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. In reuse-based software engineering, however, assets are built for an entire domain or product line rather than a single application. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
A Framework for Performing Verification and Validation in Reuse Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
NASA Technical Reports Server (NTRS)
Thome, K.
2016-01-01
Knowledge of uncertainties and errors is essential for comparisons of remote sensing data across time, space, and spectral domains. Vicarious radiometric calibration is used to demonstrate the need for uncertainty knowledge and to provide an example error budget. The sample error budget serves as an example of the questions and issues that need to be addressed by the calibration/validation community as accuracy requirements for imaging spectroscopy data will continue to become more stringent in the future. Error budgets will also be critical to ensure consistency between the range of imaging spectrometers expected to be launched in the next five years.
Phipps, Denham L; Tam, W Vanessa; Ashcroft, Darren M
2017-03-01
To explore the combined use of a critical incident database and work domain analysis to understand patient safety issues in a health-care setting. A retrospective review was conducted of incidents reported to the UK National Reporting and Learning System (NRLS) that involved community pharmacy between April 2005 and August 2010. A work domain analysis of community pharmacy was constructed using observational data from 5 community pharmacies, technical documentation, and a focus group with 6 pharmacists. Reports from the NRLS were mapped onto the model generated by the work domain analysis. A total of 14,709 incident reports meeting the selection criteria were retrieved from the NRLS. Descriptive statistical analysis of these reports found that almost all of the incidents involved medication and that the most frequently occurring error types were dose/strength errors, incorrect medication, and incorrect formulation. The work domain analysis identified 4 overall purposes for community pharmacy: business viability, health promotion and clinical services, provision of medication, and use of medication. These purposes were served by lower-order characteristics of the work system (such as the functions, processes and objects). The tasks most frequently implicated in the incident reports were those involving medication storage, assembly, or patient medication records. Combining the insights from different analytical methods improves understanding of patient safety problems. Incident reporting data can be used to identify general patterns, whereas the work domain analysis can generate information about the contextual factors that surround a critical task.
Cheng, Ching-Min; Hwang, Sheue-Ling
2015-03-01
This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Seelye, Adriana M.; Schmitter-Edgecombe, Maureen; Cook, Diane J.; Crandall, Aaron
2014-01-01
Older adults with mild cognitive impairment (MCI) often have difficulty performing complex instrumental activities of daily living (IADLs), which are critical to independent living. In this study, amnestic multi-domain MCI (N = 29), amnestic single-domain MCI (N = 18), and healthy older participants (N = 47) completed eight scripted IADLs (e.g., cook oatmeal on the stove) in a smart apartment testbed. We developed and experimented with a graded hierarchy of technology-based prompts to investigate both the amount of prompting and type of prompts required to assist individuals with MCI in completing the activities. When task errors occurred, progressive levels of assistance were provided, starting with the lowest level needed to adjust performance. Results showed that the multi-domain MCI group made more errors and required more prompts than the single-domain MCI and healthy older adult groups. Similar to the other two groups, the multi-domain MCI group responded well to the indirect prompts and did not need a higher level of prompting to get back on track successfully with the tasks. Need for prompting assistance was best predicted by verbal memory abilities in multi-domain amnestic MCI. Participants across groups indicated that they perceived the prompting technology to be very helpful. PMID:23351284
Reducing number entry errors: solving a widespread, serious problem.
Thimbleby, Harold; Cairns, Paul
2010-10-06
Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
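The error-management analysis above lends itself to a small illustration. Below is a minimal Python sketch of the two defences the paper argues for: blocking syntactically malformed entries outright, and flagging values that only become plausible after a factor-of-10 correction. The function names and the expected-range check are illustrative assumptions, not the authors' demonstration interface.

```python
import re

def parse_number_strict(s: str) -> float:
    """Block syntactically malformed entries instead of silently coercing them."""
    if not re.fullmatch(r"\d+(\.\d+)?", s.strip()):
        raise ValueError(f"blocked: {s!r} is not a plain decimal number")
    return float(s)

def classify_entry(value: float, low: float, high: float) -> str:
    """Flag values that become plausible only after a factor-of-10 correction."""
    if low <= value <= high:
        return "ok"
    if low <= value * 10 <= high or low <= value / 10 <= high:
        return "possible out-by-10 error"
    return "out of range"

# A 5.0 mg entry checked against an expected 0.5-2.0 mg dose range:
print(classify_entry(parse_number_strict("5.0"), 0.5, 2.0))  # possible out-by-10 error
```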
[Patient safety and errors in medicine: development, prevention and analyses of incidents].
Rall, M; Manser, T; Guggenberger, H; Gaba, D M; Unertl, K
2001-06-01
"Patient safety" and "errors in medicine" are issues gaining more and more prominence in the eyes of the public. According to newer studies, errors in medicine are among the ten major causes of death in association with the whole area of health care. A new era has begun incorporating attention to a "systems" approach to deal with errors and their causes in the health system. In other high-risk domains with a high demand for safety (such as the nuclear power industry and aviation) many strategies to enhance safety have been established. It is time to study these strategies, to adapt them if necessary and apply them to the field of medicine. These strategies include: to teach people how errors evolve in complex working domains and how types of errors are classified; the introduction of critical incident reporting systems that are free of negative consequences for the reporters; the promotion of continuous medical education; and the development of generic problem-solving skills incorporating the extensive use of realistic simulators wherever possible. Interestingly, the field of anesthesiology--within which realistic simulators were developed--is referred to as a model for the new patient safety movement. Despite this proud track record in recent times though, there is still much to be done even in the field of anesthesiology. Overall though, the most important strategy towards a long-term improvement in patient safety will be a change of "culture" throughout the entire health care system. The "culture of blame" focused on individuals should be replaced by a "safety culture", that sees errors and critical incidents as a problem of the whole organization. The acceptance of human fallability and an open-minded non-punitive analysis of errors in the sense of a "preventive and proactive safety culture" should lead to solutions at the systemic level. This change in culture can only be achieved with a strong commitment from the highest levels of an organization. Patient safety must have the highest priority in the goals of the institution: "Primum nihil nocere"--"First, do not harm".
Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction.
Shin, Jaeho; Ré, Christopher; Cafarella, Michael
2015-08-01
End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying the DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's errors as well as provide tooling for inspecting and labeling various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger.
Modeling And Detecting Anomalies In Scada Systems
NASA Astrophysics Data System (ADS)
Svendsen, Nils; Wolthusen, Stephen
The detection of attacks and intrusions based on anomalies is hampered by the limits of specificity underlying the detection techniques. However, in the case of many critical infrastructure systems, domain-specific knowledge and models can impose constraints that potentially reduce error rates. At the same time, attackers can use their knowledge of system behavior to mask their manipulations, causing adverse effects to be observed only after a significant period of time. This paper describes elementary statistical techniques that can be applied to detect anomalies in critical infrastructure networks. A SCADA system employed in liquefied natural gas (LNG) production is used as a case study.
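The paper's specific techniques are not reproduced here, but an elementary statistical detector of the kind it describes can be sketched: track a process variable with an exponentially weighted moving average and flag residuals that exceed a z-score threshold. Everything below (function name, parameters, synthetic signal) is an illustrative assumption.

```python
import numpy as np

def ewma_anomalies(x, alpha=0.1, z_thresh=4.0):
    """Flag samples whose residual against an EWMA baseline exceeds
    z_thresh estimated standard deviations."""
    mu, var = x[0], np.var(x[:20]) + 1e-9    # warm-start the baseline
    flags = np.zeros(len(x), dtype=bool)
    for t, xt in enumerate(x):
        resid = xt - mu
        flags[t] = abs(resid) > z_thresh * np.sqrt(var)
        mu = alpha * xt + (1 - alpha) * mu            # update mean estimate
        var = alpha * resid ** 2 + (1 - alpha) * var  # update variance estimate
    return flags

# Synthetic sensor trace: slow drift plus one injected manipulation
rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(0, 0.01, 1000)) + rng.normal(0, 0.1, 1000)
signal[700] += 2.0
print(700 in np.flatnonzero(ewma_anomalies(signal)))   # True: the spike is flagged
```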
Schmidt, Frank L; Le, Huy; Ilies, Remus
2003-06-01
On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
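One common procedure for estimating the coefficient of equivalence and stability, consistent with the description above, correlates different half-tests administered on different occasions and steps the result up to full test length with the Spearman-Brown formula. The sketch below is a hedged illustration; the authors' exact procedure may differ in detail, and the variable names are assumptions.

```python
import numpy as np

def ces(half1_t1, half2_t1, half1_t2, half2_t2):
    """Coefficient of equivalence and stability: correlate *different* halves
    across *different* occasions, so random-response, transient, and
    specific-factor error all lower the estimate, then step up to full length."""
    r_a = np.corrcoef(half1_t1, half2_t2)[0, 1]
    r_b = np.corrcoef(half2_t1, half1_t2)[0, 1]
    r_half = (r_a + r_b) / 2
    return 2 * r_half / (1 + r_half)   # Spearman-Brown step-up
```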
Ferraro, Jeffrey P; Daumé, Hal; Duvall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J
2013-01-01
Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. The evaluated POS taggers drop in accuracy by 8.5-15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3-91.0% on clinical texts. ClinAdapt reports 93.2-93.9%. ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks.
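The Easy Adapt baseline named above is simple enough to show concretely: every feature vector is replicated into a shared block plus a block for its own domain, so a linear tagger can learn which cues transfer from general English to clinical text and which are domain-specific. A minimal sketch (the dense-array framing is an assumption; taggers typically use sparse features):

```python
import numpy as np

def easy_adapt(X: np.ndarray, domains: list) -> np.ndarray:
    """Daume's 'frustratingly easy' feature augmentation: each feature vector
    gets a shared copy plus one copy in its own domain's block of columns."""
    doms = sorted(set(domains))
    n, d = X.shape
    out = np.zeros((n, d * (1 + len(doms))))
    out[:, :d] = X                                    # shared (general) copy
    for i, dom in enumerate(domains):
        j = doms.index(dom)
        out[i, d * (1 + j): d * (2 + j)] = X[i]       # domain-specific copy
    return out

# A downstream tagger is then trained on the augmented features, mixing
# general-English and clinical annotations in a single model.
```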
Expertise effects in the Moses illusion: detecting contradictions with stored knowledge.
Cantor, Allison D; Marsh, Elizabeth J
2017-02-01
People frequently miss contradictions with stored knowledge; for example, readers often fail to notice any problem with a reference to the Atlantic as the largest ocean. Critically, such effects occur even though participants later demonstrate knowing the Pacific is the largest ocean (the Moses Illusion) [Erickson, T. D., & Mattson, M. E. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning & Verbal Behavior, 20, 540-551]. We investigated whether such oversights disappear when erroneous references contradict information in one's expert domain, material which likely has been encountered many times and is particularly well-known. Biology and history graduate students monitored for errors while answering biology and history questions containing erroneous presuppositions ("In what US state were the forty-niners searching for oil?"). Expertise helped: participants were less susceptible to the illusion and less likely to later reproduce errors in their expert domain. However, expertise did not eliminate the illusion, even when errors were bolded and underlined, meaning that it was unlikely that people simply skipped over errors. The results support claims that people often use heuristics to judge truth, as opposed to directly retrieving information from memory, likely because such heuristics are adaptive and often lead to the correct answer. Even experts sometimes use such shortcuts, suggesting that overlearned and accessible knowledge does not guarantee retrieval of that information.
Miller, Ryan J
2010-11-01
In an era when social media sites like YouTube, Facebook, and Twitter dominate the popular press, many surgeons overlook the foundational tactics and strategies necessary for long-term practice development and lead generation on the Internet. This article analyzes common errors made by surgeons during the development and implementation of Web site projects, focusing on the areas of strategy development; domain name identification; site plan creation; design considerations; content development; vendor selection; and launch, promotion, and staff training. The article emphasizes that, because the Web site remains the foundation of a surgeon's branding message, minimizing errors during development and construction is critical, particularly in highly competitive or saturated markets, for today's facial plastic surgery practice. Copyright © 2010 Elsevier Inc. All rights reserved.
Human Factors in Financial Trading
Leaver, Meghan; Reader, Tom W.
2016-01-01
Objective This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Background Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors–related issues in operational trading incidents. Method In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Results Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors–related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. Conclusion We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. Application This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. PMID:27142394
Human Factors in Financial Trading: An Analysis of Trading Incidents.
Leaver, Meghan; Reader, Tom W
2016-09-01
This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors-related issues in operational trading incidents. In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors-related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. © 2016, Human Factors and Ergonomics Society.
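The kappa statistic used to establish the reliability of FINANS coding is standard; a minimal sketch of Cohen's kappa for two coders follows. The study may use a multi-rater variant for its expert group, and the toy labels are illustrative.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same incidents."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    return (p_obs - p_exp) / (1 - p_exp)

coder1 = ["slip", "teamwork", "slip", "SA", "slip"]
coder2 = ["slip", "teamwork", "SA",   "SA", "slip"]
print(round(cohens_kappa(coder1, coder2), 2))   # 0.69 for this toy example
```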
Practical robustness measures in multivariable control system analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.
1981-01-01
The robustness of the stability of multivariable linear time invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single input, single output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilized model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those robustness tests that do not. The robustness of linear quadratic Gaussian control systems is analyzed.
Guo, Shuaijun; Davis, Elise; Yu, Xiaoming; Naccarella, Lucio; Armstrong, Rebecca; Abel, Thomas; Browne, Geoffrey; Shi, Yanqin
2018-04-01
Health literacy is an increasingly important topic in the global context. In mainland China, health literacy measures mainly focus on health knowledge and practices or on the functional domain for adolescents. However, little is known about interactive and critical domains. This study aimed to adopt a skills-based and three-domain (functional, interactive and critical) instrument to measure health literacy in Chinese adolescents and to examine the status and determinants of each domain. Using a systematic review, the eight-item Health Literacy Assessment Tool (HLAT-8) was selected and translated from English to Chinese (c-HLAT-8). Following the translation process, a cross-sectional study was conducted in four secondary schools in Beijing, China. A total of 650 students in Years 7-9 were recruited to complete a self-administered questionnaire that assessed socio-demographics, self-efficacy, social support, school environment, community environment and health literacy. Results showed that the c-HLAT-8 had satisfactory reliability (Cronbach's α = 0.79; intra-class correlation coefficient = 0.72) and strong validity (translation validity index (TVI) ≥0.95; χ2/df = 3.388, p < 0.001; comparative fit index = 0.975, Tucker and Lewis's index of fit = 0.945, normed fit index = 0.965, root mean square error of approximation = 0.061; scores on the c-HLAT-8 were moderately correlated with the Health Literacy Study-Taiwan, but weakly with the Newest Vital Sign). Chinese students had an average score of 26.37 (±5.89) for the c-HLAT-8. When the determinants of each domain of health literacy were examined, social support was the strongest predictor of interactive and critical health literacy. On the contrary, self-efficacy and school environment played more dominant roles in predicting functional health literacy. The c-HLAT-8 was demonstrated to be a reliable, valid and feasible instrument for measuring functional, interactive and critical health literacy among Chinese students. The current findings indicate that increasing self-efficacy, social support and creating supportive environments are important for promoting health literacy in secondary school settings in China.
A topological hierarchy for functions on triangulated surfaces.
Bremer, Peer-Timo; Edelsbrunner, Herbert; Hamann, Bernd; Pascucci, Valerio
2004-01-01
We combine topological and geometric methods to construct a multiresolution representation for a function over a two-dimensional domain. In a preprocessing stage, we create the Morse-Smale complex of the function and progressively simplify its topology by cancelling pairs of critical points. Based on a simple notion of dependency among these cancellations, we construct a hierarchical data structure supporting traversal and reconstruction operations similarly to traditional geometry-based representations. We use this data structure to extract topologically valid approximations that satisfy error bounds provided at runtime.
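For piecewise-linear functions, the critical points that seed a Morse-Smale complex can be classified by counting sign changes of f(neighbor) - f(vertex) around each vertex link. The sketch below does this for interior vertices of a regular grid triangulated with one diagonal direction; it assumes no two adjacent values are equal and is an illustration of the classification step, not the authors' hierarchical data structure.

```python
import numpy as np

def classify_critical_points(f: np.ndarray) -> dict:
    """Classify interior vertices of a grid function (triangulated with
    (+1,+1) diagonals) by sign changes around the vertex link:
    0 changes -> extremum, 2 -> regular, 4 or more -> saddle."""
    link = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]  # cyclic order
    out = {}
    n, m = f.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            s = [np.sign(f[i + di, j + dj] - f[i, j]) for di, dj in link]
            changes = sum(s[k] != s[(k + 1) % 6] for k in range(6))
            if changes == 0:
                out[(i, j)] = "minimum" if s[0] > 0 else "maximum"
            elif changes >= 4:
                out[(i, j)] = "saddle"
    return out

x, y = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41), indexing="ij")
crit = classify_critical_points(np.sin(x + 0.05) * np.sin(y + 0.07))
print(sorted(set(crit.values())))   # extrema and saddles of the bump pattern
```

In the paper, pairs of these critical points are then cancelled in order of importance to progressively simplify the topology.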
Colen, Hadewig B; Neef, Cees; Schuring, Roel W
2003-06-01
Worldwide patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. In the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders and review of the existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish what drug distribution system, in respect of quality and cost-effectiveness, represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly reveal clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in a time domain demonstrate the least variance and allow for the highest predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
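As a loose illustration of the translation-based idea: if the bias is a low-order polynomial, the differences b(x + delta) - b(x) observed by translating the measurand constrain the higher-order coefficients, while the two standard values pin the absolute level (the constant term cancels in differences). The least-squares sketch below is a simplified reading of the method; all names and the synthetic check are assumptions.

```python
import numpy as np

def fit_polynomial_bias(x, dg, delta, standards, degree=3):
    """Recover b(x) = sum_k c_k x^k from measured differences
    dg[i] ~ b(x[i] + delta) - b(x[i]) plus known standards (s, b(s))."""
    rows, rhs = [], []
    for xi, di in zip(x, dg):
        rows.append([(xi + delta) ** k - xi ** k for k in range(degree + 1)])
        rhs.append(di)
    for s, bs in standards:                       # pin the absolute level
        rows.append([s ** k for k in range(degree + 1)])
        rhs.append(bs)
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return coef   # c_0 ... c_degree

# Synthetic check with true bias b(x) = 0.02 - 0.01*x + 0.003*x**2
true = lambda x: 0.02 - 0.01 * x + 0.003 * x ** 2
xs = np.linspace(0, 10, 25)
dg = true(xs + 0.5) - true(xs)
print(fit_polynomial_bias(xs, dg, 0.5,
                          [(0.0, true(0.0)), (10.0, true(10.0))], degree=2))
```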
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.
NASA Technical Reports Server (NTRS)
Rummel, R.
1975-01-01
Integral formulas in the parameter domain are used instead of a representation by spherical harmonics. The neglected regions will cause a truncation error. The application of the discrete form of the integral equations connecting the satellite observations with surface gravity anomalies is discussed in comparison with the least squares prediction method. One critical point of downward continuation is the proper choice of the boundary surface. Practical feasibilities are in conflict with theoretical considerations. The properties of different approaches for this question are analyzed.
Ribeliene, Janina; Blazeviciene, Aurelija; Nadisauskiene, Ruta Jolanta; Tameliene, Rasa; Kudreviciene, Ausrele; Nedzelskiene, Irena; Macijauskiene, Jurate
2018-04-22
Patients treated in health care facilities that provide services in the fields of obstetrics, gynecology, and neonatology are especially vulnerable. Large multidisciplinary teams of physicians, multiple invasive and noninvasive diagnostic and therapeutic procedures, and the use of advanced technologies increase the probability of adverse events. The evaluation of knowledge about patient safety culture among nurses and midwives working in such units and the identification of critical areas at a health care institution would reduce the number of adverse events and improve patient safety. The aim of the study was to evaluate the opinion of nurses and midwives working in clinical departments that provide services in the fields of obstetrics, gynecology, and neonatology about patient safety culture and to explore potential predictors for the overall perception of safety. We used the Hospital Survey on Patient Safety Culture (HSOPSC) to evaluate nurses' and midwives' opinion about patient safety issues. The overall response rate in the survey was 100% (n = 233). The analysis of the dimensions of safety on the unit level showed that the respondents' most positive evaluations were in the Organizational Learning - Continuous Improvement (73.2%) and Feedback and Communication about Error (66.8%) dimensions, and the most negative evaluations in the Non-punitive Response to Error (33.5%) and Staffing (44.6%) dimensions. On the hospital level, the evaluation of the safety dimensions ranged between 41.4 and 56.8%. The percentage of positive responses in the outcome dimension Frequency of Events Reported was 82.4%. We found a significant association between the outcome dimension Frequency of Events Reported and the Hospital Management Support for Patient Safety and Feedback and Communication about Error dimensions. On the hospital level, the critical domains in health care facilities that provide services in the fields of obstetrics, gynecology, and neonatology were Teamwork Across Hospital Units, and on the unit level - Communication Openness, Teamwork Within Units, Non-punitive Response to Error, and Staffing. The remaining domains were seen as having a potential for improvement.
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers have been published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large errors but reduces the computational resources required.
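The dispersion model the study recommends, Drude plus two critical points, is compact enough to state in code. The parameter values below are illustrative numbers for gold adapted from the FDTD literature; treat them as assumptions and refit against measured optical data before relying on them.

```python
import numpy as np

# Illustrative Drude + two-critical-points (CP) parameters for gold, in rad/s.
EPS_INF, OMEGA_P, GAMMA = 1.1431, 1.3202e16, 1.0805e14
CRITICAL_POINTS = [
    (0.26698, -1.2371, 3.8711e15, 4.4642e14),   # (A, phi, Omega, Gamma_cp)
    (3.0834,  -1.0968, 4.1684e15, 2.3555e15),
]

def eps_drude_2cp(omega):
    """Relative permittivity: Drude term plus two critical-points terms,
    the model the study finds necessary for gold at optical wavelengths."""
    eps = EPS_INF - OMEGA_P ** 2 / (omega ** 2 + 1j * GAMMA * omega)
    for A, phi, Om, G in CRITICAL_POINTS:
        eps += A * Om * (np.exp(1j * phi) / (Om - omega - 1j * G)
                         + np.exp(-1j * phi) / (Om + omega + 1j * G))
    return eps

wavelength_nm = 600.0
omega = 2 * np.pi * 2.99792458e8 / (wavelength_nm * 1e-9)
print(eps_drude_2cp(omega))   # complex permittivity of gold near 600 nm
```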
ERIC Educational Resources Information Center
Le, Nguyen-Thinh; Menzel, Wolfgang
2009-01-01
In this paper, we introduce logic programming as a domain that exhibits some characteristics of being ill-defined. In order to diagnose student errors in such a domain, we need a means to hypothesise the student's intention, that is the strategy underlying her solution. This is achieved by weighting constraints, so that hypotheses about solution…
NASA Technical Reports Server (NTRS)
Penny, Stephen G.; Akella, Santha; Buehner, Mark; Chevallier, Matthieu; Counillon, Francois; Draper, Clara; Frolov, Sergey; Fujii, Yosuke; Karspeck, Alicia; Kumar, Arun
2017-01-01
The purpose of this report is to identify fundamental issues for coupled data assimilation (CDA), such as gaps in science and limitations in forecasting systems, in order to provide guidance to the World Meteorological Organization (WMO) on how to facilitate more rapid progress internationally. Coupled Earth system modeling provides the opportunity to extend skillful atmospheric forecasts beyond the traditional two-week barrier by extracting skill from low-frequency state components such as the land, ocean, and sea ice. More generally, coupled models are needed to support seamless prediction systems that span timescales from weather, subseasonal to seasonal (S2S), multiyear, and decadal. Therefore, initialization methods are needed for coupled Earth system models, either applied to each individual component (called Weakly Coupled Data Assimilation - WCDA) or applied to the coupled Earth system model as a whole (called Strongly Coupled Data Assimilation - SCDA). Using CDA, in which model forecasts and potentially the state estimation are performed jointly, each model domain benefits from observations in other domains either directly using error covariance information known at the time of the analysis (SCDA), or indirectly through flux interactions at the model boundaries (WCDA). Because the non-atmospheric domains are generally under-observed compared to the atmosphere, CDA provides a significant advantage over single-domain analyses. Next, we provide a synopsis of goals, challenges, and recommendations to advance CDA: Goals: (a) Extend predictive skill beyond the current capability of NWP (e.g. as demonstrated by improving forecast skill scores), (b) produce physically consistent initial conditions for coupled numerical prediction systems and reanalyses (including consistent fluxes at the domain interfaces), (c) make best use of existing observations by allowing observations from each domain to influence and improve the full earth system analysis, (d) develop a robust observation-based identification and understanding of mechanisms that determine the variability of weather and climate, (e) identify critical weaknesses in coupled models and the earth observing system, (f) generate full-field estimates of unobserved or sparsely observed variables, (g) improve the estimation of the external forcings causing changes to climate, (h) transition successes from idealized CDA experiments to real-world applications. Challenges: (a) Modeling at the interfaces between interacting components of coupled Earth system models may be inadequate for estimating uncertainty or error covariances between domains, (b) current data assimilation methods may be insufficient to simultaneously analyze domains containing multiple spatiotemporal scales of interest, (c) there is no standardization of observation data or their delivery systems across domains, (d) the size and complexity of many large-scale coupled Earth system models makes it difficult to accurately represent uncertainty due to model parameters and coupling parameters, (e) model errors lead to local biases that can transfer between the different Earth system components and lead to coupled model biases and long-term model drift, (f) information propagation across model components with different spatiotemporal scales is extremely complicated, and must be improved in current coupled modeling frameworks, (g) there is insufficient knowledge on how to represent evolving errors in non-atmospheric model components (e.g., sea ice, land, and ocean) on the timescales of NWP.
Broadband CARS spectral phase retrieval using a time-domain Kramers–Kronig transform
Liu, Yuexin; Lee, Young Jong; Cicerone, Marcus T.
2014-01-01
We describe a closed-form approach for performing a Kramers–Kronig (KK) transform that can be used to rapidly and reliably retrieve the phase, and thus the resonant imaginary component, from a broadband coherent anti-Stokes Raman scattering (CARS) spectrum with a nonflat background. In this approach we transform the frequency-domain data to the time domain, perform an operation that ensures a causality criterion is met, then transform back to the frequency domain. The fact that this method handles causality in the time domain allows us to conveniently account for spectrally varying nonresonant background from CARS as a response function with a finite rise time. A phase error accompanies KK transform of data with finite frequency range. In examples shown here, that phase error leads to small (<1%) errors in the retrieved resonant spectra. PMID:19412273
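The described algorithm is compact enough to sketch with NumPy: move the log-amplitude to the time domain, zero the negative-time half (the causality step), and read the phase off the imaginary part on the way back. The sign of the retrieved phase depends on the Fourier-transform convention, so treat this as a hedged sketch rather than the authors' exact code.

```python
import numpy as np

def kk_phase(intensity: np.ndarray) -> np.ndarray:
    """Phase retrieval from a CARS spectrum (uniform frequency grid) via a
    time-domain Kramers-Kronig transform: the imaginary part returned is the
    Hilbert transform of the log-amplitude (flip sign if your FT convention
    requires it)."""
    log_amp = 0.5 * np.log(intensity)        # amplitude = sqrt(intensity)
    u = np.fft.ifft(log_amp)                 # frequency -> time domain
    n = len(u)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0                  # keep positive times, doubled
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.fft(u * h).imag            # back to the frequency domain

# The resonant imaginary component then follows from sqrt(I) * sin(phase),
# after subtracting the slowly varying nonresonant background phase.
```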
Factors contributing to registered nurse medication administration error: a narrative review.
Parry, Angela M; Barriball, K Louise; While, Alison E
2015-01-01
To explore the factors contributing to Registered Nurse medication administration error behaviour. A narrative review. Electronic databases (Cochrane, CINAHL, MEDLINE, BNI, EmBase, and PsycINFO) were searched from 1 January 1999 to 31 December 2012 in the English language. 1127 papers were identified and 26 papers were included in the review. Data were extracted by one reviewer and checked by a second reviewer. A thematic analysis and narrative synthesis of the factors contributing to Registered Nurses' medication administration behaviour. Bandura's (1986) theory of reciprocal determinism was used as an organising framework. This theory proposes that there is a reciprocal interplay between the environment, the person and their behaviour. Medication administration error is an outcome of RN behaviour. The 26 papers reported studies conducted in 4 continents across 11 countries predominantly in North America and Europe, with one multi-national study incorporating 27 countries. Within both the environment and person domain of the reciprocal determinism framework, a number of factors emerged as influencing Registered Nurse medication administration error behaviour. Within the environment domain, two key themes of clinical workload and work setting emerged, and within the person domain the Registered Nurses' characteristics and their lived experience of work emerged as themes. Overall, greater attention has been given to the contribution of the environment domain rather than the person domain as contributing to error, with the literature viewing an error as an event rather than the outcome of behaviour. The interplay between factors that influence behaviour were poorly accounted for within the selected studies. It is proposed that a shift away from error as an event to a focus on the relationships between the person, the environment and Registered Nurse medication administration behaviour is needed to better understand medication administration error. Copyright © 2014 Elsevier Ltd. All rights reserved.
A framework for discrete stochastic simulation on 3D moving boundary domains
Drawert, Brian; Hellander, Stefan; Trogdon, Michael; ...
2016-11-14
We have developed a method for modeling spatial stochastic biochemical reactions in complex, three-dimensional, and time-dependent domains using the reaction-diffusion master equation formalism. In particular, we look to address the fully coupled problems that arise in systems biology where the shape and mechanical properties of a cell are determined by the state of the biochemistry and vice versa. To validate our method and characterize the error involved, we compare our results for a carefully constructed test problem to those of a microscale implementation. Finally, we demonstrate the effectiveness of our method by simulating a model of polarization and shmoo formation during the mating of yeast. The method is generally applicable to problems in systems biology where biochemistry and mechanics are coupled, and spatial stochastic effects are critical.
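The fixed-domain kernel underneath such a method is the Gillespie stochastic simulation algorithm on a voxelized domain; the moving-boundary coupling is what the paper adds on top. A toy two-voxel RDME instance follows, with all rates and names as assumptions.

```python
import numpy as np

def ssa_two_voxels(a0=50, d=1.0, k_deg=0.1, t_end=10.0, seed=0):
    """Gillespie SSA for a minimal RDME instance: species A hops between two
    voxels at rate d per molecule and degrades in voxel 2 at rate k_deg."""
    rng = np.random.default_rng(seed)
    a = [a0, 0]
    t, traj = 0.0, [(0.0, a0, 0)]
    while t < t_end:
        props = [d * a[0], d * a[1], k_deg * a[1]]   # hop ->, hop <-, degrade
        total = sum(props)
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)            # time to next event
        r = rng.uniform(0.0, total)                  # which event fires
        if r < props[0]:
            a[0] -= 1; a[1] += 1
        elif r < props[0] + props[1]:
            a[1] -= 1; a[0] += 1
        else:
            a[1] -= 1
        traj.append((t, a[0], a[1]))
    return traj

print(ssa_two_voxels()[-1])   # final (time, copies in voxel 1, copies in voxel 2)
```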
Wiegmann, D A; Shappell, S A
2001-11-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.
Sequence-structure mapping errors in the PDB: OB-fold domains
Venclovas, Česlovas; Ginalski, Krzysztof; Kang, Chulhee
2004-01-01
The Protein Data Bank (PDB) is the single most important repository of structural data for proteins and other biologically relevant molecules. Therefore, it is critically important to keep the PDB data, as much as possible, error-free. In this study, we have analyzed PDB crystal structures possessing oligonucleotide/oligosaccharide binding (OB)-fold, one of the highly populated folds, for the presence of sequence-structure mapping errors. Using energy-based structure quality assessment coupled with sequence analyses, we have found that there are at least five OB-structures in the PDB that have regions where sequences have been incorrectly mapped onto the structure. We have demonstrated that the combination of these computation techniques is effective not only in detecting sequence-structure mapping errors, but also in providing guidance to correct them. Namely, we have used results of computational analysis to direct a revision of X-ray data for one of the PDB entries containing a fairly inconspicuous sequence-structure mapping error. The revised structure has been deposited with the PDB. We suggest use of computational energy assessment and sequence analysis techniques to facilitate structure determination when homologs having known structure are available to use as a reference. Such computational analysis may be useful in either guiding the sequence-structure assignment process or verifying the sequence mapping within poorly defined regions. PMID:15133161
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T. V.
1985-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace and Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.
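The boundary-error idea can be illustrated without a full CVBEM implementation: fit an approximation that is exactly harmonic in the interior (here the real part of a complex polynomial, standing in for the CVBEM approximation function) to the prescribed boundary values, then inspect the boundary mismatch as the error indicator. A hedged sketch, not the CVBEM itself:

```python
import numpy as np

def harmonic_fit_boundary_error(z_bnd, phi_bnd, degree=10):
    """Fit phi ~ Re(sum_k c_k z^k), automatically harmonic inside the domain,
    to Dirichlet boundary values; the returned residual along the boundary is
    the 'approximative boundary' style error indicator."""
    cols = []
    for k in range(degree + 1):
        zk = z_bnd ** k
        cols.append(zk.real)    # multiplies Re(c_k)
        cols.append(-zk.imag)   # multiplies Im(c_k)
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, phi_bnd, rcond=None)
    return phi_bnd - A @ coef

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
z = np.exp(1j * theta)                      # unit-circle boundary nodes
phi = np.exp(z.real) * np.cos(z.imag)       # boundary trace of Re(e^z), harmonic
print(np.abs(harmonic_fit_boundary_error(z, phi)).max())   # tiny mismatch
```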
Chen, Xin-Lin; Zhong, Liang-Huan; Wen, Yi; Liu, Tian-Wen; Li, Xiao-Ying; Hou, Zheng-Kun; Hu, Yue; Mo, Chuan-Wei; Liu, Feng-Bin
2017-09-15
This review aims to critically appraise and compare the measurement properties of inflammatory bowel disease (IBD)-specific health-related quality of life instruments. Medline, EMBASE and ISI Web of Knowledge were searched from their inception to May 2016. IBD-specific instruments for patients with Crohn's disease, ulcerative colitis or IBD were enrolled. The basic characteristics and domains of the instruments were collected. The methodological quality of measurement properties and measurement properties of the instruments were assessed. Fifteen IBD-specific instruments were included: twelve instruments for adult IBD patients and three for paediatric IBD patients. All of the instruments were developed in North American and European countries. The following common domains were identified: IBD-related symptoms, physical, emotional and social domain. The methodological quality was satisfactory for content validity; fair in internal consistency, reliability, structural validity, hypotheses testing and criterion validity; and poor in measurement error, cross-cultural validity and responsiveness. For adult IBD patients, the IBDQ-32 and its short version (SIBDQ) had good measurement properties and were the most widely used worldwide. For paediatric IBD patients, the IMPACT-III had good measurement properties and had more translated versions. Most methodological quality should be promoted, especially measurement error, cross-cultural validity and responsiveness. The IBDQ-32 was the most widely used instrument with good reliability and validity, followed by the SIBDQ and IMPACT-III. Further validation studies are necessary to support the use of other instruments.
Nikolic, Mark I; Sarter, Nadine B
2007-08-01
To examine operator strategies for diagnosing and recovering from errors and disturbances as well as the impact of automation design and time pressure on these processes. Considerable efforts have been directed at error prevention through training and design. However, because errors cannot be eliminated completely, their detection, diagnosis, and recovery must also be supported. Research has focused almost exclusively on error detection. Little is known about error diagnosis and recovery, especially in the context of event-driven tasks and domains. With a confederate pilot, 12 airline pilots flew a 1-hr simulator scenario that involved three challenging automation-related tasks and events that were likely to produce erroneous actions or assessments. Behavioral data were compared with a canonical path to examine pilots' error and disturbance management strategies. Debriefings were conducted to probe pilots' system knowledge. Pilots seldom followed the canonical path to cope with the scenario events. Detection of a disturbance was often delayed. Diagnostic episodes were rare because of pilots' knowledge gaps and time criticality. In many cases, generic inefficient recovery strategies were observed, and pilots relied on high levels of automation to manage the consequences of an error. Our findings describe and explain the nature and shortcomings of pilots' error management activities. They highlight the need for improved automation training and design to achieve more timely detection, accurate explanation, and effective recovery from errors and disturbances. Our findings can inform the design of tools and techniques that support disturbance management in various complex, event-driven environments.
Clinical review: Medication errors in critical care
Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas
2008-01-01
Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error with only a minority of error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
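The RAMP method itself is not reproduced here, but the systematic/random split it reports can be illustrated with a standard Willmott-style MSE decomposition at a single grid cell: the part of the error removable by a linear recalibration of the model is systematic, and the residual is random. A sketch under that assumption:

```python
import numpy as np

def systematic_random_split(mod, obs):
    """Willmott-style decomposition: MSE = systematic + unsystematic, where
    the systematic part is what a linear recalibration of the model explains."""
    slope, intercept = np.polyfit(obs, mod, 1)   # regress model on observations
    mod_hat = intercept + slope * obs
    mse = np.mean((mod - obs) ** 2)
    mse_systematic = np.mean((mod_hat - obs) ** 2)
    mse_random = np.mean((mod - mod_hat) ** 2)   # mse = systematic + random
    return mse, mse_systematic, mse_random
```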
Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J
2016-09-09
A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
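The reported comparison can be reproduced from the counts given in the abstract with a chi-square test; without continuity correction the p-value matches the stated P=0.0225.

```python
from scipy.stats import chi2_contingency

errors_before, n_before = 19, 4760
errors_after,  n_after  = 8, 5065
table = [[errors_before, n_before - errors_before],
         [errors_after,  n_after - errors_after]]
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # p ~= 0.0225, as reported
```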
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are compensated through a set of spatial post-filters, with the coarse-focused image segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer declining robustness when strong motion errors are present in the coarse-focused image: to capture the complete motion blurring function within each image block, both the block size and the overlap must be extended, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. The sub-aperture images are then fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By abandoning the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
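The chirp-Z transform mentioned above is available in SciPy (scipy.signal.czt, SciPy >= 1.8). The minimal sketch below is unrelated to the full FDFBPA pipeline; it only illustrates the CZT's role of evaluating a spectrum over a narrow, finely sampled band, which is what makes a sub-aperture integral efficient. The signal and band edges are arbitrary toy values.

    import numpy as np
    from scipy.signal import czt          # SciPy >= 1.8

    fs = 1000.0
    t = np.arange(1024) / fs
    x = np.exp(2j * np.pi * 123.4 * t)    # toy signal at 123.4 Hz

    # Evaluate the spectrum only on the 100-150 Hz band, with 512 bins.
    m, f0, f1 = 512, 100.0, 150.0
    a = np.exp(2j * np.pi * f0 / fs)                  # starting point on the unit circle
    w = np.exp(-2j * np.pi * (f1 - f0) / (fs * m))    # per-bin rotation
    X = czt(x, m=m, w=w, a=a)

    freqs = f0 + np.arange(m) * (f1 - f0) / m
    print("peak at %.1f Hz" % freqs[np.argmax(np.abs(X))])   # ~123.4 Hz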
Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them
ERIC Educational Resources Information Center
Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.
2011-01-01
Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…
Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong
2017-12-01
Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial one. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
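As a hedged illustration of the DE idea (not the authors' implementation), the sketch below uses scipy.optimize.differential_evolution to weight the wavelengths of synthetic two-component spectra so that a least-squares concentration estimate improves when a deliberately scattering-corrupted band is down-weighted; all spectra, names and parameters are invented.

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(1)
    n_wl, n_samples = 60, 40
    wl = np.linspace(0.0, 1.0, n_wl)
    # Toy pure-component absorption spectra (Gaussian bands).
    comp = np.vstack([np.exp(-(wl - 0.3) ** 2 / 0.01),
                      np.exp(-(wl - 0.7) ** 2 / 0.02)])
    conc = rng.uniform(0, 1, size=(n_samples, 2))                 # true concentrations
    noise = rng.normal(0, 0.02, size=(n_samples, n_wl))
    noise[:, 30:45] += rng.normal(0, 0.5, size=(n_samples, 15))   # scattering-corrupted band
    spectra = conc @ comp + noise

    def quant_error(weights):
        # Weighted least-squares concentration estimate, then mean squared error.
        sw = np.sqrt(np.clip(weights, 0.0, 1.0))
        a = (comp * sw).T                                   # (n_wl, 2)
        b = (spectra * sw).T                                # (n_wl, n_samples)
        est, *_ = np.linalg.lstsq(a, b, rcond=None)
        return np.mean((est.T - conc) ** 2)

    result = differential_evolution(quant_error, [(0.0, 1.0)] * n_wl,
                                    maxiter=40, popsize=10, seed=2, polish=False)
    print("MSE, all wavelengths  : %.5f" % quant_error(np.ones(n_wl)))
    print("MSE, DE-selected mask : %.5f" % result.fun)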
Kaldjian, Lauris C; Jones, Elizabeth W; Rosenthal, Gary E; Tripp-Reimer, Toni; Hillis, Stephen L
2006-01-01
BACKGROUND Physician disclosure of medical errors to institutions, patients, and colleagues is important for patient safety, patient care, and professional education. However, the variables that may facilitate or impede disclosure are diverse and lack conceptual organization. OBJECTIVE To develop an empirically derived, comprehensive taxonomy of factors that affect voluntary disclosure of errors by physicians. DESIGN A mixed-methods study using qualitative data collection (structured literature search and exploratory focus groups), quantitative data transformation (sorting and hierarchical cluster analysis), and validation procedures (confirmatory focus groups and expert review). RESULTS Full-text review of 316 articles identified 91 impeding or facilitating factors affecting physicians' willingness to disclose errors. Exploratory focus groups identified an additional 27 factors. Sorting and hierarchical cluster analysis organized the factors into 8 domains. Confirmatory focus groups and expert review relocated 6 factors, removed 2 factors, and modified 4 domain names. The final taxonomy contained 4 domains of facilitating factors (responsibility to patient, responsibility to self, responsibility to profession, responsibility to community) and 4 domains of impeding factors (attitudinal barriers, uncertainties, helplessness, fears and anxieties). CONCLUSIONS A taxonomy of facilitating and impeding factors provides a conceptual framework for a complex field of variables that affect physicians' willingness to disclose errors to institutions, patients, and colleagues. This taxonomy can be used to guide the design of studies to measure the impact of different factors on disclosure, to assist in the design of error-reporting systems, and to inform educational interventions to promote the disclosure of errors to patients. PMID:16918739
Characterization of Errors Inherent in System EMP Vulnerability Assessment Programs,
1980-10-01
Patriot system. * B-1 aircraft. * E-3A airborne warning and control system aircraft. * PRC-77 radio. * Lance missile system. * Safeguard ABM system...carefully or the offset will create large frequency-domain error. Frequency-tying, too, can improve f-domain data. Of the various recording systems studied
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts by locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examining their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand fifty-two presumptive errors were detected, with the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts focus on concepts with a high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future, similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
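Of the five error categories, the circular-assignment check is the easiest to make concrete. Below is a small self-contained sketch (toy concepts, not FMA data) that finds cycles in a part_of graph by depth-first search; the actual FMA algorithms are more elaborate.

    # Detect circular part_of assignments in a toy concept graph (hypothetical data).
    part_of = {
        "left lobe of liver": ["liver"],
        "liver": ["abdomen"],
        "abdomen": ["trunk"],
        "trunk": ["body"],
        "body": ["left lobe of liver"],   # deliberately wrong: closes a cycle
        "gallbladder": ["abdomen"],
    }

    def find_cycles(edges):
        cycles, state = [], {}            # state: 1 = on current path, 2 = done
        def dfs(node, path):
            state[node] = 1
            for parent in edges.get(node, ()):
                if state.get(parent) == 1:            # back-edge -> cycle found
                    cycles.append(path[path.index(parent):] + [parent])
                elif state.get(parent) != 2:
                    dfs(parent, path + [parent])
            state[node] = 2
        for n in edges:
            if state.get(n) != 2:
                dfs(n, [n])
        return cycles

    for cyc in find_cycles(part_of):
        print(" -> ".join(cyc))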
NASA Astrophysics Data System (ADS)
Dehaes, Mathieu; Grant, P. Ellen; Sliva, Danielle D.; Roche-Labarbe, Nadège; Pienaar, Rudolph; Boas, David A.; Franceschini, Maria Angela; Selb, Juliette
2011-03-01
NIRS is safe, non-invasive and offers the possibility to record local hemodynamic parameters at the bedside, avoiding the transportation of neonates and critically ill patients. In this work, we evaluate the accuracy of the frequency-domain multi-distance (FD-MD) method to retrieve brain optical properties from neonate to adult. Realistic measurements are simulated using 3D Monte Carlo modeling of light propagation. Eight different ages were investigated: a term newborn of 38 weeks gestational age, two infants of 6 and 12 months of age, a toddler of 2 years (yr.), two children of 5 and 10 years of age, a teenager of 14 yr., and an adult. Measurements are generated at multiple distances on the right parietal area of head models and fitted to a homogeneous FD-MD model to estimate the brain optical properties. In the newborn, infant, toddler and 5 yr. old child models, the error was dominated by the head curvature, while in the 10 yr. old child, teenager and adult heads it was dominated by the superficial layer. The influence of the CSF is also evaluated; in this case, the absorption coefficients suffer an additional error. In all cases, measurements at 5 mm provided the worst estimates because of the diffusion approximation.
Application of social domain of human mind in water management
NASA Astrophysics Data System (ADS)
Piirimäe, Kristjan
2010-05-01
Currently, researchers dispute whether a human reasons domain-generally or domain-specifically (Fiddick, 2004). The theory of several intuitive reasoning programmes in the human mind suggests that the main driver of increased problem-solving ability is the social domain (Byrne & Bates, 2009). This theory leads to the idea of applying the social domain in environmental management as well. More specifically, environmental problems might be presented through social aspects. Cosmides (1989) proposed that the most powerful programme in our social domain might be the 'cheater detection module', a genetically determined mental tool whose dedicated function is to unmask cheaters. She even suggested that only cheater detection can enable logical reasoning. Recently, this idea has found experimental proof and specifications (Buchner et al., 2009). From this perspective, a participatory environmental decision support system requires involvement of the representatives of social control, such as environmental agencies and NGOs. These evaluators might effectively discover legal and moral inconsistencies, logical errors and other weaknesses in proposals if they are encouraged to detect cheating. Thus, instead of just environmental concerns, the query of an artificial intelligence should emphasize cheating. Following the idea of Cosmides (1989), employment of cheater detectors in an EDSS might be the only way to achieve environmental management that applies correct logical reasoning as well as both legislative requirements and conservationist morals. According to our hypothesis, representatives of social control can well discover legal and moral inconsistencies, logical errors and other weaknesses in environmental management proposals if encouraged to detect cheating. In our social experiment, a draft plan of measures for sustainable management of the Lake Peipsi environment was proposed to representatives of social control, including the Ministry of Environment, other environmental authorities, and NGOs. These people were randomly divided into two working groups and asked to criticize the proposed plan. One group was encouraged to detect cheating behind the plan. Later, a group of independent experts evaluated the criticism of both groups and of each individual person. The resulting assessments rated the group of cheater detectors as significantly more adequate decision-supporters. The results confirmed that simulation of the 'cheater detection module' of the human mind might improve the performance of an EDSS. The study calls for the development of special methodologies for the stimulation and application of the social domain in water management. References: Buchner, A., Bell, R., Mehl, B., & Musch, J. (2009). No enhanced recognition memory, but better source memory for faces of cheaters. Evolution and Human Behaviour, 30(3), 212-224. Byrne, R., & Bates, L. (2009). Sociality, evolution and cognition. Current Biology, 17(16), R714-R723. Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31(3), 187-276. Fiddick, L. (2004). Domains of deontic reasoning: Resolving the discrepancy between the cognitive and moral reasoning literatures. The Quarterly Journal of Experimental Psychology, 57A(3), 447-474.
Automatic Identification of Critical Follow-Up Recommendation Sentences in Radiology Reports
Yetisgen-Yildiz, Meliha; Gunn, Martin L.; Xia, Fei; Payne, Thomas H.
2011-01-01
Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. When recommendations are not systematically identified and promptly communicated to referrers, poor patient outcomes can result. Using information technology can improve communication and improve patient safety. In this paper, we describe a text processing approach that uses natural language processing (NLP) and supervised text classification methods to automatically identify critical recommendation sentences in radiology reports. To increase the classification performance we enhanced the simple unigram token representation approach with lexical, semantic, knowledge-base, and structural features. We tested different combinations of those features with the Maximum Entropy (MaxEnt) classification algorithm. Classifiers were trained and tested with a gold standard corpus annotated by a domain expert. We applied 5-fold cross validation and our best performing classifier achieved 95.60% precision, 79.82% recall, 87.0% F-score, and 99.59% classification accuracy in identifying the critical recommendation sentences in radiology reports. PMID:22195225
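MaxEnt classification with unigram features is, in modern terms, L2-regularized logistic regression over token counts. A minimal scikit-learn sketch of that setup with 5-fold cross-validation follows; the sentences and labels are invented stand-ins for the annotated radiology corpus, and the richer lexical/semantic/structural features are omitted.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Toy labeled sentences (hypothetical); 1 = critical recommendation, 0 = other.
    sentences = [
        "Recommend follow-up chest CT in 3 months to assess the nodule.",
        "The lungs are clear without focal consolidation.",
        "Ultrasound is advised to further characterize the lesion.",
        "No acute intracranial abnormality.",
        "Suggest dedicated MRI for further evaluation of the finding.",
        "Degenerative changes of the lumbar spine.",
        "Repeat imaging is recommended after antibiotic therapy.",
        "Normal study.",
    ] * 5   # replicated so 5-fold CV has examples of both classes per fold
    labels = [1, 0, 1, 0, 1, 0, 1, 0] * 5

    # LogisticRegression is the standard MaxEnt formulation over unigram counts.
    clf = make_pipeline(CountVectorizer(ngram_range=(1, 1)),
                        LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, sentences, labels, cv=5, scoring="f1")
    print("5-fold F1: %.2f +/- %.2f" % (scores.mean(), scores.std()))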
2016-01-01
Time domain cyclic-selective mapping (TDC-SLM) reduces the peak-to-average power ratio (PAPR) in OFDM systems, but the amounts of the cyclic shifts are required to recover the transmitted signal in the receiver. One of the critical issues of the SLM scheme is sending this side information (SI), which reduces throughput in wireless OFDM systems. The proposed scheme implements delayed correlation and matched filtering (DC-MF) to estimate the amounts of the cyclic shifts in the receiver. In the proposed scheme, the DC-MF is placed after the frequency domain equalization (FDE) to improve the accuracy of cyclic shift estimation. The accuracy rate of the proposed scheme reaches 100% at Eb/N0 = 5 dB, and the bit error rate (BER) improves by 0.2 dB as compared with the conventional TDC-SLM. The BER performance of the proposed scheme is also better than that of the conventional TDC-SLM even when a nonlinear high power amplifier is assumed. PMID:27752539
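A toy numpy sketch of the underlying PAPR mechanics may help: candidate transmit signals are built from cyclically shifted time-domain sub-block signals and the lowest-PAPR candidate is kept, which is the selective-mapping idea the abstract builds on. This is a generic illustration under our own simplifications, not the proposed DC-MF estimator; the shift vector is exactly the side information the paper tries to avoid transmitting.

    import numpy as np

    rng = np.random.default_rng(0)
    N, n_blocks, n_cand = 256, 4, 16
    papr = lambda x: 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

    # One random QPSK OFDM symbol.
    X = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N) / np.sqrt(2)

    # Time-domain signal of each subcarrier block.
    blocks = []
    for b in range(n_blocks):
        Xb = np.zeros(N, complex)
        sl = slice(b * N // n_blocks, (b + 1) * N // n_blocks)
        Xb[sl] = X[sl]
        blocks.append(np.fft.ifft(Xb))

    # Search candidate cyclic-shift combinations; keep the lowest-PAPR signal.
    best = None
    for _ in range(n_cand):
        shifts = rng.integers(0, N, size=n_blocks)          # the side information (SI)
        x = sum(np.roll(blk, int(s)) for blk, s in zip(blocks, shifts))
        if best is None or papr(x) < papr(best):
            best = x

    print("PAPR, plain OFDM : %.2f dB" % papr(np.fft.ifft(X)))
    print("PAPR, best shift : %.2f dB" % papr(best))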
Beyond crisis resource management: new frontiers in human factors training for acute care medicine.
Petrosoniak, Andrew; Hicks, Christopher M
2013-12-01
Error is ubiquitous in medicine, particularly during critical events and resuscitation. A significant proportion of adverse events can be attributed to inadequate team-based skills such as communication, leadership, situation awareness and resource utilization. Aviation-based crisis resource management (CRM) training using high-fidelity simulation has been proposed as a strategy to improve team behaviours. This review will address key considerations in CRM training and outline recommendations for the future of human factors education in healthcare. A critical examination of the current literature yields several important considerations to guide the development and implementation of effective simulation-based CRM training. These include defining a priori domain-specific objectives, creating an immersive environment that encourages deliberate practice and transfer-appropriate processing, and the importance of effective team debriefing. Building on research from high-risk industry, we suggest that traditional CRM training may be augmented with new training techniques that promote the development of shared mental models for team and task processes, address the effect of acute stress on team performance, and integrate strategies to improve clinical reasoning and the detection of cognitive errors. The evolution of CRM training involves a 'Triple Threat' approach that integrates mental model theory for team and task processes, training for stressful situations and metacognition and error theory towards a more comprehensive training paradigm, with roots in high-risk industry and cognitive psychology. Further research is required to evaluate the impact of this approach on patient-oriented outcomes.
Hird, Megan A; Vesely, Kristin A; Fischer, Corinne E; Graham, Simon J; Naglie, Gary; Schweizer, Tom A
2017-01-01
The areas of driving impairment characteristic of mild cognitive impairment (MCI) remain unclear. This study compared the simulated driving performance of 24 individuals with MCI, including amnestic single-domain (sd-MCI, n = 11) and amnestic multiple-domain MCI (md-MCI, n = 13), and 20 age-matched controls. Individuals with MCI committed over twice as many driving errors (20.0 versus 9.9), demonstrated difficulty with lane maintenance, and committed more errors during left turns with traffic compared to healthy controls. Individuals with md-MCI showed greater driving difficulty relative to healthy controls than did those with sd-MCI. Differentiating between subtypes of MCI may be important when evaluating driving safety.
Errorless Learning in Cognitive Rehabilitation: A Critical Review
Middleton, Erica L.; Schwartz, Myrna F.
2012-01-01
Cognitive rehabilitation research is increasingly exploring errorless learning interventions, which prioritize the avoidance of errors during treatment. The errorless learning approach was originally developed for patients with severe anterograde amnesia, who were deemed to be at particular risk for error learning. Errorless learning has since been investigated in other memory-impaired populations (e.g., Alzheimer's disease) and acquired aphasia. In typical errorless training, target information is presented to the participant for study or immediate reproduction, a method that prevents participants from attempting to retrieve target information from long-term memory (i.e., retrieval practice). However, assuring error elimination by preventing difficult (and error-permitting) retrieval practice is a potential major drawback of the errorless approach. This review begins with discussion of research in the psychology of learning and memory that demonstrates the importance of difficult (and potentially errorful) retrieval practice for robust learning and prolonged performance gains. We then review treatment research comparing errorless and errorful methods in amnesia and aphasia, where only the latter provides (difficult) retrieval practice opportunities. In each clinical domain we find the advantage of the errorless approach is limited and may be offset by the therapeutic potential of retrieval practice. Gaps in current knowledge are identified that preclude strong conclusions regarding a preference for errorless treatments over methods that prioritize difficult retrieval practice. We offer recommendations for future research aimed at a strong test of errorless learning treatments, which involves direct comparison with methods where retrieval practice effects are maximized for long-term gains. PMID:22247957
A prospective audit of nurse independent prescribing within critical care.
Carberry, Martin; Connelly, Sarah; Murphy, Jennifer
2013-05-01
To determine the prescribing activity of different staff groups within the intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups; and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has provided an opportunity for the ANPCC working in critical care to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011, in three ICUs. All drug entries made on the ICU prescription by the three groups, trainee medical staff, ANPCCs and consultant anaesthetists, were audited once for errors. Data were collected by reviewing all drug entries for errors in patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data were later entered into a Microsoft Access database. A total of 1418 drug entries were audited from 77 patient prescription Cardexes. Error rates were reported as 40 errors in 1418 prescriptions (2·8%): ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates were significantly different between prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). Having the lowest error rate, the nurse practitioners were at least as effective as the other prescribing groups within this audit, in terms of errors only, in prescribing diligence. National data are required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.
NASA Astrophysics Data System (ADS)
Horstmann, Jan Tobias; Le Garrec, Thomas; Mincu, Daniel-Ciprian; Lévêque, Emmanuel
2017-11-01
Despite the efficiency and low dissipation of the stream-collide scheme of the discrete-velocity Boltzmann equation, which is nowadays implemented in many lattice Boltzmann solvers, a major drawback exists relative to alternative discretization schemes such as finite-volume or finite-difference: the limitation to uniform Cartesian grids. In this paper, an algorithm is presented that combines the positive features of each scheme in a hybrid lattice Boltzmann method. In particular, the node-based streaming of the distribution functions is coupled with a second-order finite-volume discretization of the advection term of the Boltzmann equation under the Bhatnagar-Gross-Krook (BGK) approximation. The algorithm is established on a multi-domain configuration, with the individual schemes solved on separate sub-domains and connected by an overlapping interface of at least 2 grid cells. A critical parameter in the coupling is the CFL number, which the stream-collide algorithm fixes at unity. Nevertheless, a semi-implicit treatment of the collision term in the finite-volume formulation allows a stable solution under this condition. The algorithm is validated in the scope of three different test cases on a 2D periodic mesh. It is shown that the accuracy of the combined discretization schemes agrees with the order of each separate scheme involved. The overall numerical error of the hybrid algorithm in the macroscopic quantities lies between the errors of the two individual algorithms. Finally, we demonstrate how such a coupling can be used to adapt to anisotropic flows with gradual mesh refinement in the FV domain.
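For readers unfamiliar with the stream-collide scheme referred to above, the following is a minimal D2Q9 BGK sketch on a periodic grid (numpy, toy parameters). It illustrates only the node-based scheme and its implicit CFL = 1 constraint (each distribution hops exactly one lattice link per step); the paper's finite-volume coupling is not reproduced.

    import numpy as np

    # D2Q9 lattice velocities and weights.
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
    nx, ny, tau = 64, 64, 0.8

    def feq(rho, u):
        # BGK equilibrium distribution, second order in the velocity u.
        eu = np.einsum('qd,xyd->xyq', e, u)
        u2 = np.sum(u ** 2, axis=-1, keepdims=True)
        return w * rho[..., None] * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * u2)

    rho = np.ones((nx, ny)); rho[28:36, 28:36] = 1.05      # small density bump
    u = np.zeros((nx, ny, 2))
    f = feq(rho, u)
    for step in range(100):
        rho = f.sum(-1)
        u = np.einsum('xyq,qd->xyd', f, e) / rho[..., None]
        f += (feq(rho, u) - f) / tau                       # BGK collision
        for q in range(9):                                 # streaming: one link per step (CFL = 1)
            f[:, :, q] = np.roll(f[:, :, q], tuple(e[q]), axis=(0, 1))
    print("mass conserved:", np.isclose(f.sum(), nx * ny + 8 * 8 * 0.05))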
Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.
Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu
2015-10-01
Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.
Daud-Gallotti, Renata Mahfuz; Morinaga, Christian Valle; Arlindo-Rodrigues, Marcelo; Velasco, Irineu Tadeu; Arruda Martins, Milton; Tiberio, Iolanda Calvo
2011-01-01
INTRODUCTION: Patient safety is seldom assessed using objective evaluations during undergraduate medical education. OBJECTIVE: To evaluate the performance of fifth-year medical students using an objective structured clinical examination focused on patient safety after implementation of an interactive program based on adverse events recognition and disclosure. METHODS: In 2007, a patient safety program was implemented in the internal medicine clerkship of our hospital. The program focused on human error theory, epidemiology of incidents, adverse events, and disclosure. Upon completion of the program, students completed an objective structured clinical examination with five stations and standardized patients. One station focused on patient safety issues, including medical error recognition/disclosure, the patient-physician relationship and humanism issues. A standardized checklist was completed by each standardized patient to assess the performance of each student. The student's global performance at each station and performance in the domains of medical error, the patient-physician relationship and humanism were determined. The correlations between the student performances in these three domains were calculated. RESULTS: A total of 95 students participated in the objective structured clinical examination. The mean global score at the patient safety station was 87.59±1.24 points. Students' performance in the medical error domain was significantly lower than their performance on patient-physician relationship and humanistic issues. Less than 60% of students (n = 54) offered the simulated patient an apology after a medical error occurred. A significant correlation was found between scores obtained in the medical error domains and scores related to both the patient-physician relationship and humanistic domains. CONCLUSIONS: An objective structured clinical examination is a useful tool to evaluate patient safety competencies during the medical student clerkship. PMID:21876976
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
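The role of the step size can be illustrated with a single-channel normalized LMS equalizer (a simple relative of the MMSE TDE above): a large initial step annealed over time converges in fewer symbols than a small fixed step, for a similar final error. This sketch uses an invented toy channel, not the experimental 3-mode dual-polarization setup.

    import numpy as np

    rng = np.random.default_rng(3)
    n, taps = 20000, 7
    channel = np.array([0.2, 1.0, 0.3])                    # toy dispersive channel
    sym = rng.choice([-1.0, 1.0], size=n)                  # BPSK training symbols
    rx = np.convolve(sym, channel)[:n] + 0.05 * rng.normal(size=n)

    def nlms(mu0, anneal=1.0):
        wgt, mu, err2 = np.zeros(taps), mu0, []
        for k in range(taps, n):
            x = rx[k - taps:k][::-1]
            e = sym[k - 4] - wgt @ x                       # target delayed to the center tap
            wgt += mu * e * x / (x @ x + 1e-9)             # normalized LMS update
            mu *= anneal                                   # annealed (adaptive) step size
            err2.append(e * e)
        return np.array(err2)

    fixed = nlms(0.05)                                     # small fixed step
    adaptive = nlms(0.5, anneal=0.9995)                    # large step, shrunk over time
    smooth = lambda e: np.convolve(e, np.ones(200) / 200, mode='valid')
    for name, err in (("fixed   ", fixed), ("adaptive", adaptive)):
        s = smooth(err)
        cond = s < 2 * s[-1]                               # within 2x of the final error level
        steps = int(np.argmax(cond)) if cond.any() else -1
        print("%s final MSE %.4f, symbols to converge: %d" % (name, s[-1], steps))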
A method on error analysis for large-aperture optical telescope control system
NASA Astrophysics Data System (ADS)
Su, Yanrui; Wang, Qiang; Yan, Fabao; Liu, Xiang; Huang, Yongmei
2016-10-01
In large-aperture optical telescopes, the elevation axis exhibits arc second-level jitters at different working speeds, especially at low speed, during acquisition, tracking and pointing, whereas the azimuth axis does not. The jitters are closely related to the elevation working speed and reduce the accuracy and low-speed stability of the telescope. Using a large set of measured elevation data, we analyzed the jitters in the time, frequency and space domains. Relating the jitter points to the elevation drive speed and the corresponding space angle shows that the jitters behave as a periodic disturbance in the space domain, with a spatial period of approximately 79.1″. We then simulated and compared the influence of candidate disturbance sources on the elevation axis, including PWM power-stage output disturbance, torque (acceleration) disturbance, speed-feedback disturbance and position-feedback disturbance, and found that the spatially periodic disturbance persisted, leading us to infer that the problem lies in the angle measurement unit. The telescope employs a 24-bit photoelectric encoder, whose grating angular period, the angle spanned by one period of the subdivision signal, works out to 79.1016″. This value is approximately equal to the spatial period of the jitters. Therefore, the elevation axis is affected by subdivision errors whose period is identical to the encoder grating angular period. Comprehensive mathematical analysis determined that the DC component of the subdivision error causes the jitters, which was verified in practical engineering. Analyzing error sources in the time, frequency and space domains in this way provides useful guidance for locating disturbance sources in large-aperture optical telescopes.
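The encoder arithmetic is easy to verify. The assumption below that the quoted 79.1016″ corresponds to a 2^14 = 16384-line grating, interpolated electronically to 24 bits, is our inference, but it is consistent with the quoted value.

    rev_arcsec = 360 * 3600                  # one revolution = 1,296,000 arcseconds
    print("24-bit resolution : %.4f arcsec" % (rev_arcsec / 2 ** 24))   # ~0.0773
    print("grating period    : %.4f arcsec" % (rev_arcsec / 2 ** 14))   # 79.1016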
Making Sense of Missense in the Lynch Syndrome: The Clinical Perspective
Lynch, Henry T.; Jascur, Thomas; Lanspa, Stephen; Boland, C. Richard
2010-01-01
The DNA mismatch repair system provides critical genetic housekeeping, and its failure is associated with tumorigenesis. Through distinct domains on the DNA mismatch repair proteins, the system recognizes and repairs errors occurring during DNA synthesis, but signals apoptosis when the DNA damage cannot be repaired. Certain missense mutations in the mismatch repair genes can selectively alter just one of these functions. This impacts the clinical features of tumors associated with defective DNA mismatch repair activity. New work reported by Xie et al. in this issue of the journal (beginning on page XXX) adds to the understanding of DNA mismatch repair. PMID:20978117
Composable Framework Support for Software-FMEA Through Model Execution
NASA Astrophysics Data System (ADS)
Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco
2016-08-01
Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.
NASA Astrophysics Data System (ADS)
Hwang, Jin Hwan; Pham, Van Sy
2017-04-01
The Big-Brother Experiment (BBE) evaluates the effect of domain size on ocean regional circulation models (ORCMs) when downscaling and nesting from ocean global circulation models (OGCMs). The BBE first establishes mimic ocean global circulation model (M-OGCM) data by running an ORCM over a highly resolved large domain. The M-OGCM results are then filtered to remove short scales and used as boundary and initial conditions for nested ORCMs that have the same resolution as the M-OGCM. Domains of various sizes were embedded in the M-OGCM and simulated to assess the effect of domain size, with extra buffering distance, on the ORCM results. The diagnostic variables of the nested domain, including temperature, salinity and vorticity, are then compared with those of the M-OGCM before filtering. Differences between them isolate the errors associated with domain size, since they cannot be attributed to model errors or observational errors. The results showed that domain size significantly impacts the ORCM results. As the ORCM domain becomes larger, the distance between the area of interest and the updated LBCs increases, and the ORCM results correlate more highly with the M-OGCM. There is, however, an optimal domain size, roughly 2 to 10 times the nested ORCM's domain of interest, depending on the computational cost. Key words: domain size, error, ocean regional circulation model, Big-Brother Experiment. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Development of integrated estuarine management system" and a National Research Foundation of Korea (NRF) Grant (No. 2015R1A5A 7037372) funded by MSIP of Korea. The authors thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.
Error behaviors associated with loss of competency in Alzheimer's disease.
Marson, D C; Annis, S M; McInturff, B; Bartolucci, A; Harrell, L E
1999-12-10
To investigate qualitative behavioral changes associated with declining medical decision-making capacity (competency) in patients with AD. Qualitative measures can yield clinical information about functional changes in neurologic disease not available through quantitative measures. Normal older controls (n = 21) and patients with mild and moderate probable AD (n = 72) were compared using a standardized competency measure and neuropsychological measures. A system of 16 qualitative error scores representing conceptual domains of language, executive dysfunction, affective dysfunction, and compensatory responses was used to analyze errors produced on the competency measure. Patterns of errors were examined across groups. Relationships between error behaviors and competency performance were determined, and neurocognitive correlates of specific error behaviors were identified. AD patients demonstrated more miscomprehension, factual confusion, intrusions, incoherent responses, nonresponsive answers, loss of task, and delegation than controls. Errors in the executive domain (loss of task, nonresponsive answer, and loss of detachment) were key predictors of declining competency performance by AD patients. Neuropsychological analyses in the AD group generally confirmed the conceptual domain assignments of the qualitative scores. Loss of task, nonresponsive answers, and loss of detachment were key behavioral changes associated with declining competency of AD patients and with neurocognitive measures of executive dysfunction. These findings support the growing linkage between executive dysfunction and competency loss.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.
1987-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.
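A minimal sketch of the error-analysis idea, under our own simplifications rather than Hromadka's CVBEM formulation, is to approximate a harmonic solution by real and imaginary parts of complex monomials (all exactly harmonic), fit the prescribed boundary values by least squares, and inspect the residual boundary mismatch, i.e. a modeling error distribution on the boundary.

    import numpy as np

    # Boundary of the unit square, traversed counter-clockwise (400 points).
    t = np.linspace(0, 1, 101)[:-1]
    z = np.concatenate([t, 1 + 1j * t, 1 + 1j - t, 1j * (1 - t)])

    # Prescribed Dirichlet data from an exact harmonic function, phi = Re(exp(z)).
    phi = np.real(np.exp(z))

    # Approximate solution: harmonic basis Re/Im of z^k, truncated at N.
    N = 6
    cols = [np.ones(z.size)] + [f(z ** k) for k in range(1, N + 1)
                                for f in (np.real, np.imag)]
    basis = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(basis, phi, rcond=None)

    mismatch = basis @ coef - phi        # modeling error distribution on the boundary
    print("max boundary mismatch: %.2e" % np.abs(mismatch).max())

Because every basis function is exactly harmonic in the interior, the only error is the boundary mismatch, which is what the CVBEM error analysis exploits.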
NASA Astrophysics Data System (ADS)
Zakeri, Zeinab; Azadi, Majid; Ghader, Sarmad
2018-01-01
Satellite radiances and in-situ observations are assimilated through the Weather Research and Forecasting Data Assimilation (WRFDA) system into the Advanced Research WRF (ARW) model over Iran and its neighboring area. A domain-specific background error based on the x and y components of wind speed (UV) as control variables is calculated for the WRFDA system, and sensitivity experiments are carried out to compare the impact of the global background error and the domain-specific background error on precipitation and 2-m temperature forecasts over Iran. Three precipitation events that occurred over the country during January, September and October 2014 are simulated in three different experiments, and the results for precipitation and 2-m temperature are verified against surface observations. Results show that using the domain-specific background error consistently improves 2-m temperature and 24-h accumulated precipitation forecasts, while the global background error may even degrade the forecasts compared to experiments without data assimilation. The improvement in 2-m temperature is most evident during the first forecast hours and decreases significantly as the forecast length increases.
An audit on the reporting of critical results in a tertiary institute.
Rensburg, Megan A; Nutt, Louise; Zemlin, Annalise E; Erasmus, Rajiv T
2009-03-01
Critical result reporting is a requirement for accreditation by accreditation bodies worldwide. Accurate, prompt communication of results to the clinician by the laboratory is of extreme importance. Repeating of the critical result by the recipient has been used as a means to improve the accuracy of notification. Our objective was to assess the accuracy of notification of critical chemical pathology laboratory results telephoned out to clinicians/clinical areas. We hypothesize that read-back of telephoned critical laboratory results by the recipient may improve the accuracy of the notification. This was a prospective study, where all critical results telephoned by chemical pathologists and registrars at Tygerberg Hospital were monitored for one month. The recipient was required to repeat the result (patient name, folder number and test results). Any error, as well as the designation of the recipient was logged. Of 472 outgoing telephone calls, 51 errors were detected (error rate 10.8%). Most errors were made when recording the folder number (64.7%), with incorrect patient name being the lowest (5.9%). Calls to the clinicians had the highest error rate (20%), most of them being the omission of recording folder numbers. Our audit highlights the potential errors during the post-analytical phase of laboratory testing. The importance of critical result reporting is still poorly recognized in South Africa. Implementation of a uniform accredited practice for communication of critical results can reduce error and improve patient safety.
A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.
1988-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
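The key-equation step can be shown concretely. The toy below works over GF(7) with one error and t = 1: extended Euclid is run on x^(2t) and the syndrome polynomial until the remainder degree drops below t, yielding, up to a scale factor, the locator and evaluator polynomials. This is a sketch of the Sugiyama variant of the algorithm the abstract refers to, not the paper's VLSI architecture; real RS codes work over GF(2^m).

    # Toy key-equation solver over GF(p), p = 7, for a single-error example (t = 1).
    p, t = 7, 1
    inv = lambda a: pow(a, p - 2, p)                 # Fermat inverse

    def deg(f):
        d = len(f) - 1
        while d > 0 and f[d] == 0: d -= 1
        return d if any(f) else -1

    def polysub(f, g):
        n = max(len(f), len(g))
        return [((f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)) % p
                for i in range(n)]

    def polymul(f, g):
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] = (out[i + j] + a * b) % p
        return out

    def polydivmod(f, g):
        f, q = f[:], [0] * max(1, len(f) - len(g) + 1)
        while deg(f) >= deg(g) >= 0:
            shift = deg(f) - deg(g)
            c = f[deg(f)] * inv(g[deg(g)]) % p
            q[shift] = c
            f = polysub(f, polymul([0] * shift + [c], g))
        return q, f

    # One error of magnitude e = 3 at locator X = 2 gives syndromes S_j = e * X**j.
    e, X = 3, 2
    S = [(e * X ** j) % p for j in (1, 2)]           # S(x) = 6 + 5x
    r_prev, r = [0, 0, 1], S                         # x^(2t) and S(x)
    v_prev, v = [0], [1]
    while deg(r) >= t:                               # Sugiyama stopping rule
        q, rem = polydivmod(r_prev, r)
        r_prev, r = r, rem
        v_prev, v = v, polysub(v_prev, polymul(q, v))
    print("locator  Lambda(x):", v)                  # proportional to 1 - X*x
    print("evaluator Omega(x):", r)                  # proportional to S1

    # Check: the root of Lambda is X^{-1}, so its inverse recovers the locator.
    root = next(x for x in range(1, p)
                if sum(c * x ** i for i, c in enumerate(v)) % p == 0)
    print("recovered locator X =", inv(root))        # prints 2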
In search of periodic signatures in IGS REPRO1 solution
NASA Astrophysics Data System (ADS)
Mtamakaya, J. D.; Santos, M. C.; Craymer, M. R.
2010-12-01
We have been looking for periodic signatures in the REPRO1 solution recently released by the IGS. At this stage, a selected sub-set of IGS station time series in the position and residual domains is undergoing harmonic analysis. We can learn different things from this analysis: from the position domain, more about actual station motions; from the residual domain, more about mis-modelled or un-modelled errors. As far as error sources are concerned, we have investigated effects that may be due to tides, atmospheric loading, the definition of the position of the figure axis, and GPS constellation geometry. This poster presents our findings and discusses insights into errors that need to be modelled or whose models need improvement.
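One standard tool for this kind of search on unevenly sampled station series is the Lomb-Scargle periodogram. A hedged synthetic sketch follows, with invented amplitudes and noise and with annual plus semi-annual signatures of the kind commonly sought in GPS position series; it is not the IGS data or the authors' exact procedure.

    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(4)
    # Unevenly sampled daily position series over ~10 years, in mm.
    tdays = np.sort(rng.choice(np.arange(3650), size=2800, replace=False)).astype(float)
    annual, semiannual = 365.25, 182.63
    y = (2.0 * np.sin(2 * np.pi * tdays / annual) +
         1.0 * np.sin(2 * np.pi * tdays / semiannual + 0.7) +
         rng.normal(0, 1.5, size=tdays.size))

    periods = np.linspace(100, 500, 2000)
    power = lombscargle(tdays, y - y.mean(), 2 * np.pi / periods)   # angular frequencies
    print("strongest period: %.1f days" % periods[np.argmax(power)])   # ~365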
Astigmatism error modification for absolute shape reconstruction using Fourier transform method
NASA Astrophysics Data System (ADS)
He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun
2014-12-01
A method is proposed to correct astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements yield the absolute shapes by exploiting the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel after the translations, a tilt error exists in the differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to correct the astigmatism errors, a rotation measurement is added. Because the form of a Zernike polynomial in a circular domain is invariant under rotation, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotation differential data, and the error-bearing astigmatism terms are then corrected. Computer simulation proves the validity of the proposed method.
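Numerically, the rotation-invariance argument reduces to a 2-by-2 linear solve: the (cos 2phi, sin 2phi) astigmatism coefficients rotate through twice the rotation angle, so the astigmatism content of the rotation-difference map determines them. The sketch below uses our own r^2 cos 2phi / r^2 sin 2phi parameterization and toy coefficients, not necessarily the paper's Zernike normalization.

    import numpy as np

    theta = np.deg2rad(30)                       # rotation used for the extra measurement
    a_true = np.array([0.8, -0.3])               # (cos, sin) astigmatism coefficients
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    R = np.array([[c2, -s2], [s2, c2]])          # coefficients rotate through 2*theta

    # Synthetic maps on a unit pupil: W = a_c r^2 cos(2phi) + a_s r^2 sin(2phi).
    yy, xx = np.mgrid[-1:1:101j, -1:1:101j]
    r2, phi = xx ** 2 + yy ** 2, np.arctan2(yy, xx)
    mask = r2 <= 1
    zc, zs = (r2 * np.cos(2 * phi))[mask], (r2 * np.sin(2 * phi))[mask]
    W = a_true[0] * zc + a_true[1] * zs
    a_rot = R @ a_true
    W_rot = a_rot[0] * zc + a_rot[1] * zs

    # Fit the astigmatism content of the difference map, then solve (R - I) a = d.
    d, *_ = np.linalg.lstsq(np.column_stack([zc, zs]), W_rot - W, rcond=None)
    print("recovered coefficients:", np.linalg.solve(R - np.eye(2), d))   # ~ [0.8, -0.3]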
Lost in Translation: the Case for Integrated Testing
NASA Technical Reports Server (NTRS)
Young, Aaron
2017-01-01
The building of a spacecraft is complex and often involves multiple suppliers and companies with their own designs and processes. Standards have been developed across the industries to reduce the chances of critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost pressure to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that would otherwise go undetected, discuss HRA methods used to identify opportunities for human error, and review lessons learned and challenges over ownership of testing.
Cognitive and behavioral knowledge about insulin-dependent diabetes among children and parents.
Johnson, S B; Pollak, R T; Silverstein, J H; Rosenbloom, A L; Spillar, R; McCallum, M; Harkavy, J
1982-06-01
Youngsters' knowledge about insulin-dependent diabetes was assessed across three domains: (1) general information; (2) problem solving; and (3) skill at urine testing and self-injection. These youngsters' parents completed the general information and problem-solving components of the assessment battery. All test instruments showed good reliability. The test of problem solving was more difficult than the test of general information for both parents and patients. Mothers were more knowledgeable than fathers and children. Girls performed more accurately than boys, and older children obtained better scores than did younger children. Nevertheless, more than 80% of the youngsters made significant errors on urine testing and almost 40% made serious errors in self-injection. A number of other knowledge deficits were also noted. Duration of diabetes was not related to any of the knowledge measures. Intercorrelations between scores on the assessment instruments indicated that skill at urine testing or self-injection was not highly related to other types of knowledge about diabetes. Furthermore, knowledge in one content area was not usually predictive of knowledge in another content area. The results of this study emphasize the importance of measuring knowledge across several different domains. Patient variables such as sex and age need to be given further consideration in the development and use of patient educational programs. Regular assessment of patients' and parents' knowledge of all critical aspects of diabetes home management seems essential.
Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.
ERIC Educational Resources Information Center
Hoppe, H. Ulrich
1994-01-01
Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)
Personal protective equipment for the Ebola virus disease: A comparison of 2 training programs.
Casalino, Enrique; Astocondor, Eugenio; Sanchez, Juan Carlos; Díaz-Santana, David Enrique; Del Aguila, Carlos; Carrillo, Juan Pablo
2015-12-01
Personal protective equipment (PPE) for preventing Ebola virus disease (EVD) includes basic PPE (B-PPE) and enhanced PPE (E-PPE). Our aim was to compare conventional training programs (CTPs) and reinforced training programs (RTPs) on the use of B-PPE and E-PPE. Four groups were created, designated CTP-B, CTP-E, RTP-B, and RTP-E. All groups received the same theoretical training, followed by 3 practical training sessions. A total of 120 students were included (30 per group). In all 4 groups, the frequency and number of total errors and critical errors decreased significantly over the course of the training sessions (P < .01). The RTP was associated with a greater reduction in the number of total errors and critical errors (P < .0001). During the third training session, we noted an error frequency of 7%-43%, a critical error frequency of 3%-40%, 0.3-1.5 total errors, and 0.1-0.8 critical errors per student. The B-PPE groups had the fewest errors and critical errors (P < .0001). Our results indicate that both training methods improved the student's proficiency, that B-PPE appears to be easier to use than E-PPE, that the RTP achieved better proficiency for both PPE types, and that a number of students are still potentially at risk for EVD contamination despite the improvements observed during the training. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
A comparison of methods for DPLL loop filter design
NASA Technical Reports Server (NTRS)
Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.
1986-01-01
Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes in discrete time a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.
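The first methodology, mapping an optimum analog filter into the digital domain, is commonly done with the bilinear transform. Below is a hedged sketch using scipy.signal.bilinear on an assumed second-order PLL loop filter F(s) = (1 + s*tau2)/(s*tau1); the parameter values are arbitrary and the article's specific filter is not reproduced.

    import numpy as np
    from scipy.signal import bilinear, lfilter

    tau1, tau2 = 0.01, 0.002                 # assumed analog filter time constants
    fs = 1000.0                              # assumed loop update rate, Hz
    b_dig, a_dig = bilinear([tau2, 1.0], [tau1, 0.0], fs)   # F(s) = (1 + s*tau2)/(s*tau1)
    print("digital numerator  :", b_dig)
    print("digital denominator:", a_dig)

    # The mapped filter keeps the integrator: its step response ramps without bound.
    y = lfilter(b_dig, a_dig, np.ones(50))
    print("step response tail:", y[-3:])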
Black, Anne C; Serowik, Kristin L; Ablondi, Karen M; Rosen, Marc I
2013-01-01
The need for accurate and reliable information about income and resources available to individuals with psychiatric disabilities is critical for the assessment of need and evaluation of programs designed to alleviate financial hardship or affect finance allocation. Measurement of finances is ubiquitous in studies of economics, poverty, and social services. However, evidence has demonstrated that these measures often contain error. We compare the 1-week test-retest reliability of income and finance data from 24 adult psychiatric outpatients using assessment-as-usual (AAU) and a new instrument, the Timeline Historical Review of Income and Financial Transactions (THRIFT). Reliability estimates obtained with the THRIFT for Income (0.77), Expenses (0.91), and Debt (0.99) domains were significantly better than those obtained with AAU. Reliability estimates for Balance did not differ. THRIFT reduced measurement error and provided more reliable information than AAU for assessment of personal finances in psychiatric patients receiving Social Security benefits. The instrument also may be useful with other low-income groups.
A post-processing algorithm for time domain pitch trackers
NASA Astrophysics Data System (ADS)
Specker, P.
1983-01-01
This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
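A minimal sketch of the second-pass idea follows: flag pitch values that are outliers with respect to the local distribution in a sliding 80 ms window. The median rule and the replacement step are our simplifications for illustration; the paper instead uses the surviving pulses as anchor points and reconstructs the pitch train from the waveform.

    import numpy as np

    fs_frames = 100                                # frames per second (10 ms hop, assumed)
    win = 8                                        # 80 ms window, in frames
    rng = np.random.default_rng(5)

    t = np.arange(300) / fs_frames
    f0 = 120 + 20 * np.sin(2 * np.pi * 0.7 * t)    # smooth "true" contour, Hz
    track = f0.copy()
    bad = rng.choice(t.size, size=30, replace=False)
    track[bad] *= rng.choice([0.5, 2.0], size=30)  # octave errors, typical tracker failures

    cleaned = track.copy()
    for i in range(t.size):
        lo, hi = max(0, i - win // 2), min(t.size, i + win // 2 + 1)
        med = np.median(track[lo:hi])
        if abs(track[i] - med) > 0.2 * med:        # outlier vs the local distribution
            cleaned[i] = med                       # replace; a real pass would re-track here
    gross = lambda x: np.mean(np.abs(x - f0) / f0 > 0.05)
    print("gross error rate before/after: %.1f%% / %.1f%%"
          % (100 * gross(track), 100 * gross(cleaned)))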
Accuracy of image-guided surgical navigation using near infrared (NIR) optical tracking
NASA Astrophysics Data System (ADS)
Jakubovic, Raphael; Farooq, Hamza; Alarcon, Joseph; Yang, Victor X. D.
2015-03-01
Spinal surgery is particularly challenging for surgeons, requiring a high level of expertise and precision without being able to see beyond the surface of the bone. Accurate insertion of pedicle screws is critical because perforation of the pedicle can result in profound clinical consequences, including spinal cord, nerve root, or arterial injury, neurological deficits, chronic pain, and/or failed back syndrome. Various navigation systems have been designed to guide pedicle screw fixation. Computed tomography (CT)-based image-guided navigation systems increase the accuracy of screw placement by allowing 3-dimensional visualization of the spinal anatomy. Current localization techniques require extensive preparation and introduce spatial deviations. Use of near infrared (NIR) optical tracking allows for real-time navigation of the surgery by utilizing spectral domain multiplexing of light, greatly enhancing the surgeon's situation awareness in the operating room. While the incidence of pedicle screw perforation and complications has been significantly reduced with the introduction of modern navigational technologies, some error remains. Several error parameters have been suggested, including fiducial localization and registration error, target registration error, and angular deviation. However, many of these techniques quantify error using the pre-operative CT and an intra-operative screenshot without assessing the true screw trajectory. In this study we quantified in-vivo error by comparing the true screw trajectory to the intra-operative trajectory. Pre- and post-operative CT scans as well as intra-operative screenshots were obtained for a cohort of patients undergoing spinal surgery. We quantified entry-point error and angular deviation in the axial and sagittal planes.
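The angular-deviation comparison reduces to vector geometry between the planned and true screw axes. A short sketch with hypothetical entry/tip coordinates follows; the projections mimic the axial and sagittal measures mentioned above, with the plane assignments chosen for illustration.

    import numpy as np

    def angle_deg(u, v):
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Entry and tip points (mm) in a common frame (hypothetical coordinates).
    planned = np.array([[10.0, 5.0, 0.0], [14.0, 35.0, 4.0]])
    actual  = np.array([[11.0, 5.5, 0.2], [17.0, 34.0, 6.0]])
    u = planned[1] - planned[0]
    v = actual[1] - actual[0]

    print("3D angular deviation : %.1f deg" % angle_deg(u, v))
    print("axial-plane deviation: %.1f deg" % angle_deg(u[[0, 1]], v[[0, 1]]))
    print("sagittal deviation   : %.1f deg" % angle_deg(u[[1, 2]], v[[1, 2]]))
    print("entry-point error    : %.1f mm" % np.linalg.norm(planned[0] - actual[0]))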
A comparison of serial order short-term memory effects across verbal and musical domains.
Gorin, Simon; Mengal, Pierre; Majerus, Steve
2018-04-01
Recent studies suggest that the mechanisms involved in the short-term retention of serial order information may be shared across short-term memory (STM) domains such as verbal and visuospatial STM. Given the intrinsic sequential organization of musical material, the study of STM for musical information may be particularly informative about serial order retention processes and their domain-generality. The present experiment examined serial order STM for verbal and musical sequences in participants with no advanced musical expertise and experienced musicians. Serial order STM for verbal information was assessed via a serial order reconstruction task for digit sequences. In the musical domain, serial order STM was assessed using a novel melodic sequence reconstruction task maximizing the retention of tone order information. We observed that performance for the verbal and musical tasks was characterized by sequence length as well as primacy and recency effects. Serial order errors in both tasks were characterized by similar transposition gradients and ratios of fill-in:infill errors. These effects were observed for both participant groups, although the transposition gradients and ratios of fill-in:infill errors showed additional specificities for musician participants in the musical task. The data support domain-general serial order STM effects but also suggest the existence of additional domain-specific effects. Implications for models of serial order STM in verbal and musical domains are discussed.
TADtool: visual parameter identification for TAD-calling algorithms.
Kruse, Kai; Hug, Clemens B; Hernández-Rodríguez, Benjamín; Vaquerizas, Juan M
2016-10-15
Eukaryotic genomes are hierarchically organized into topologically associating domains (TADs). The computational identification of these domains and their associated properties critically depends on the choice of suitable parameters of TAD-calling algorithms. To reduce the element of trial-and-error in parameter selection, we have developed TADtool: an interactive plot to find robust TAD-calling parameters with immediate visual feedback. TADtool allows the direct export of TADs called with a chosen set of parameters for two of the most common TAD-calling algorithms: the directionality index and the insulation index. It can be used as an intuitive, standalone application or as a Python package for maximum flexibility. TADtool is available as a Python package from GitHub (https://github.com/vaquerizaslab/tadtool) or can be installed directly via PyPI, the Python package index (tadtool). Contact: kai.kruse@mpi-muenster.mpg.de, jmv@mpi-muenster.mpg.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
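For readers unfamiliar with the second of these algorithms, a minimal numpy sketch of an insulation-index computation follows. This illustrates the general idea, not TADtool's implementation: the score at each bin is the mean contact frequency in a square window sliding along the matrix diagonal, and local minima suggest TAD boundaries.

```python
import numpy as np

def insulation_index(matrix, window):
    """Mean contact frequency in a square window sliding along the diagonal.

    `matrix` is a symmetric Hi-C contact matrix; `window` is in bins.
    Low scores indicate insulation (candidate TAD boundaries).
    """
    n = matrix.shape[0]
    scores = np.full(n, np.nan)
    for i in range(window, n - window):
        # Contacts between the `window` bins upstream and downstream of bin i.
        scores[i] = matrix[i - window:i, i + 1:i + window + 1].mean()
    return scores

rng = np.random.default_rng(0)
hic = rng.poisson(5, (200, 200)).astype(float)
hic = (hic + hic.T) / 2          # enforce symmetry for the toy matrix
print(insulation_index(hic, window=10)[:15])
```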
Incidence of speech recognition errors in the emergency department.
Goss, Foster R; Zhou, Li; Weiner, Scott G
2016-09-01
Physician use of computerized speech recognition (SR) technology has risen in recent years due to its ease of use and efficiency at the point of care. However, error rates between 10 and 23% have been observed, raising concern about the number of errors being entered into the permanent medical record, their impact on quality of care, and the medical liability that may arise. Our aim was to determine the incidence and types of SR errors introduced by this technology in the emergency department (ED). The setting was a Level 1 emergency department with 42,000 visits/year in a tertiary academic teaching hospital. A random sample of 100 notes dictated by attending emergency physicians (EPs) using SR software was collected from the ED electronic health record between January and June 2012. Two board-certified EPs annotated the notes and conducted error analysis independently. An existing classification schema was adopted to classify errors into eight error types. Critical errors deemed to potentially impact patient care were identified. There were 128 errors in total, or 1.3 errors per note, and 14.8% (n=19) of errors were judged to be critical. 71% of notes contained errors, and 15% contained one or more critical errors. Annunciation errors were the most frequent at 53.9% (n=69), followed by deletions at 18.0% (n=23) and added words at 11.7% (n=15). Nonsense errors, homonyms and spelling errors were present in 10.9% (n=14), 4.7% (n=6), and 0.8% (n=1) of notes, respectively. There were no suffix or dictionary errors. Inter-annotator agreement was 97.8%. This is the first study to classify speech recognition errors in dictated emergency department notes. Speech recognition errors occur commonly, with annunciation errors being the most frequent. Error rates were comparable to, if not lower than, those in previous studies. Nearly 15% of errors were deemed critical, potentially leading to miscommunication that could affect patient care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
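The equidistribution idea can be illustrated in 1D with a short sketch. This is not the paper's algorithm, only a minimal demonstration under standard assumptions: the local error of linear interpolation scales as h^2 |f''|, so placing nodes with density proportional to |f''|^(1/2) equidistributes that error.

```python
import numpy as np

def equidistribute(f, x, n_new):
    """Redistribute 1D grid points to equidistribute linear-interpolation error.

    Since the elementwise error scales with h^2 |f''|, a node density
    proportional to |f''|^(1/2) equidistributes it; we invert the
    cumulative integral of that monitor function to place the new nodes.
    """
    fxx = np.gradient(np.gradient(f(x), x), x)      # approximate f''
    monitor = np.sqrt(np.abs(fxx)) + 1e-8           # avoid zero density
    cumulative = np.concatenate([[0.0], np.cumsum(
        0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cumulative[-1], n_new)
    return np.interp(targets, cumulative, x)

f = lambda x: np.tanh(20 * (x - 0.5))               # sharp interior layer
x_new = equidistribute(f, np.linspace(0, 1, 400), 40)
print(np.round(x_new, 3))  # nodes cluster near the layer at x = 0.5
```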
Hou, Tingjun; Zhang, Wei; Case, David A; Wang, Wei
2008-02-29
Many important protein-protein interactions are mediated by peptide recognition modular domains, such as the Src homology 3 (SH3), SH2, PDZ, and WW domains. Characterizing the interaction interface of domain-peptide complexes and predicting binding specificity for modular domains are critical for deciphering protein-protein interaction networks. Here, we propose the use of an energetic decomposition analysis to characterize domain-peptide interactions and the molecular interaction energy components (MIECs), including van der Waals, electrostatic, and desolvation energy between residue pairs on the binding interface. We show a proof-of-concept study on the amphiphysin-1 SH3 domain interacting with its peptide ligands. The structures of the human amphiphysin-1 SH3 domain complexed with 884 peptides were first modeled using virtual mutagenesis and optimized by molecular mechanics (MM) minimization. Next, the MIECs between domain and peptide residues were computed using the MM/generalized Born decomposition analysis. We conducted two types of statistical analyses on the MIECs to demonstrate their usefulness for predicting binding affinities of peptides and for classifying peptides into binder and non-binder categories. First, combining partial least squares analysis and genetic algorithm, we fitted linear regression models between the MIECs and the peptide binding affinities on the training data set. These models were then used to predict binding affinities for peptides in the test data set; the predicted values have a correlation coefficient of 0.81 and an unsigned mean error of 0.39 compared with the experimentally measured ones. The partial least squares-genetic algorithm analysis on the MIECs revealed the critical interactions for the binding specificity of the amphiphysin-1 SH3 domain. Next, a support vector machine (SVM) was employed to build classification models based on the MIECs of peptides in the training set. A rigorous training-validation procedure was used to assess the performances of different kernel functions in SVM and different combinations of the MIECs. The best SVM classifier gave satisfactory predictions for the test set, indicated by average prediction accuracy rates of 78% and 91% for the binding and non-binding peptides, respectively. We also showed that the performance of our approach on both binding affinity prediction and binder/non-binder classification was superior to the performances of the conventional MM/Poisson-Boltzmann solvent-accessible surface area and MM/generalized Born solvent-accessible surface area calculations. Our study demonstrates that the analysis of the MIECs between peptides and the SH3 domain can successfully characterize the binding interface, and it provides a framework to derive integrated prediction models for different domain-peptide systems.
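The classification step can be sketched generically with scikit-learn. Everything below is illustrative: the MIEC feature matrix is random stand-in data (three energy terms per interface residue pair, flattened per peptide), and the kernel and hyperparameters are defaults rather than the ones selected by the paper's training-validation procedure.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_peptides, n_pairs = 400, 30
# Hypothetical MIEC matrix: vdW, electrostatic, and desolvation terms
# for each interface residue pair, flattened into one row per peptide.
X = rng.normal(size=(n_peptides, n_pairs * 3))
y = (X[:, :5].sum(axis=1) < 0).astype(int)   # toy binder/non-binder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())
```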
Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.
Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan
2018-04-01
In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
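The paper's exact error transformation is not reproduced here, but a common construction in this literature maps a constrained error into an unconstrained variable that diverges at the envelope boundary. A minimal sketch, assuming an artanh-type transformation and an exponentially shrinking envelope (both illustrative choices, not the authors'):

```python
import numpy as np

def transformed_error(e, rho):
    """Map a constrained tracking error into an unconstrained variable.

    For |e(t)| < rho(t), z = artanh(e / rho) is finite; z diverges as the
    error approaches the boundary, so keeping z bounded keeps e inside
    the time-varying envelope +/- rho(t).
    """
    ratio = np.clip(e / rho, -0.999999, 0.999999)  # numerical safety
    return np.arctanh(ratio)

t = np.linspace(0.0, 5.0, 6)
rho = (1.0 - 0.8) * np.exp(-1.0 * t) + 0.8   # shrinking error envelope
e = 0.5 * np.exp(-0.5 * t)                   # example error trajectory
print(np.round(transformed_error(e, rho), 3))
```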
Driving errors of learner teens: frequency, nature and their association with practice.
Durbin, Dennis R; Mirman, Jessica H; Curry, Allison E; Wang, Wenli; Fisher Thiel, Megan C; Schultheis, Maria; Winston, Flaura K
2014-11-01
Despite demonstrating basic vehicle operations skills sufficient to pass a state licensing test, novice teen drivers demonstrate several deficits in tactical driving skills during the first several months of independent driving. Improving our knowledge of the types of errors made by teen permit holders early in the learning process would assist in the development of novel approaches to driver training and resources for parent supervision. The purpose of the current analysis was to describe driving performance errors made by teens during the permit period, and to determine if there were differences in the frequency and type of errors made by teens: (1) in comparison to licensed, safe, and experienced adult drivers; (2) by teen and parent-supervisor characteristics; and (3) by teen-reported quantity of practice driving. Data for this analysis were combined from two studies: (1) the control group of teens in a randomized clinical trial evaluating an intervention to improve parent-supervised practice driving (n=89 parent-teen dyads) and (2) a sample of 37 adult drivers (mean age 44.2 years), recruited and screened as an experienced and competent reference standard in a validation study of an on-road driving assessment for teens (tODA). Three measures of performance were evaluated at the approximate mid-point (12 weeks) and end (24 weeks) of the learner phase: drive termination (i.e., discontinuation of the assessment for safety reasons), safety-relevant critical errors, and vehicle operation errors. Differences in driver performance were compared using the Wilcoxon rank sum test for continuous variables and Pearson's Chi-square test for categorical variables. Overall, 10.4% of teens had their early assessment terminated for safety reasons and 15.4% had their late assessment terminated, compared to no adults. These teens reported substantially fewer behind-the-wheel practice hours compared with teens who did not have their assessments terminated: tODAearly (9.0 vs. 20.0, p<0.001) and tODAlate (19.0 vs. 58.3, p<0.001). With respect to critical driving errors, 55% of teens committed a total of 85 critical errors (range of 1-5 errors per driver) on the early tODA; by comparison, only one adult committed a critical error (p<0.001). On the late tODA, 54% of teens committed 67 critical errors (range of 1-8 errors per driver) compared with only one adult (p<0.001). No differences in teen or parent gender, parent/teen relationship type or parent prior experience teaching a teen to drive were observed between teens who committed a critical error on either route and teens who committed no critical errors. A borderline association between median teen-reported practice quantity and critical error commission was observed for the late tODA. The overall median proportion of vehicle operation errors for teens was higher than that of adults on both assessments, though median error proportions were less than 10% for both teens and adults. In comparison to a group of experienced adult drivers, a substantially higher proportion of learner teens committed safety-relevant critical driving errors at both time points of assessment. These findings, as well as the associations between practice quantity and the driving performance outcomes studied, suggest that further research is needed to better understand how teens might effectively learn skills necessary for safe independent driving while they are still under supervised conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection that is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
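The filter-error formulation itself places a state estimator in the loop, which is beyond a short sketch, but the output-error idea it relaxes can be illustrated compactly: choose model parameters to minimize the residual between measured and simulated outputs. The model, signals, and noise level below are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, t, u, x0=0.0):
    """Hypothetical first-order model xdot = a*x + b*u, forward Euler."""
    a, b = theta
    dt = t[1] - t[0]
    x = np.empty_like(t)
    x[0] = x0
    for k in range(len(t) - 1):      # small dt assumed for Euler accuracy
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    return x

t = np.linspace(0, 10, 501)
u = np.sin(t)
truth = simulate([-0.8, 2.0], t, u)
z = truth + 0.05 * np.random.default_rng(1).normal(size=t.size)  # noisy output

# Output-error fit: minimize measured-minus-simulated residuals.
fit = least_squares(lambda th: simulate(th, t, u) - z, x0=[-0.1, 1.0])
print(fit.x)  # estimates close to the true [-0.8, 2.0]
```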
Medicine and aviation: a review of the comparison.
Randell, R
2003-01-01
This paper aims to understand the nature of medical error in highly technological environments and argues that a comparison with aviation can blur its real understanding. It compares the notion of error in health care with that in aviation, drawing on the author's own ethnographic study in intensive care units and on findings from the research literature on errors in aviation. Failures in the use of medical technology are common. In attempts to understand the area of medical error, much attention has focused on how we can learn from aviation. This paper argues that such a comparison is not always useful, on the basis that (i) the type of work and technology is very different in the two domains; (ii) different issues are involved in training and procurement; and (iii) attitudes to error vary between the domains. Therefore, it is necessary to look closely at the subject of medical error and resolve those questions left unanswered by the lessons of aviation.
The use of source memory to identify one's own episodic confusion errors.
Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R
2001-03-01
In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.
V & V Within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and validation (V&V) is used to increase the level of assurance of critical software, particularly that of safety-critical and mission critical software. This paper describes the working group's success in identifying V&V tasks that could be performed in the domain engineering and transition levels of reuse-based software engineering. The primary motivation for V&V at the domain level is to provide assurance that the domain requirements are correct and that the domain artifacts correctly implement the domain requirements. A secondary motivation is the possible elimination of redundant V&V activities at the application level. The group also considered the criteria and motivation for performing V&V in domain engineering.
Leach, Julia M; Mancini, Martina; Peterka, Robert J; Hayes, Tamara L; Horak, Fay B
2014-09-29
The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBBs, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable.
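The calibration step can be illustrated with a minimal sketch. The numbers below are synthetic stand-ins (the actual WBB distortion is amplitude- and direction-dependent, as described above); the point is only the mechanics of fitting an affine correction by least squares and checking the RMSE reduction.

```python
import numpy as np

rng = np.random.default_rng(7)
cop_afp = rng.uniform(-60, 60, 500)                        # reference CoP (mm)
cop_wbb = 1.08 * cop_afp + 3.0 + rng.normal(0, 1.5, 500)   # distorted WBB CoP

# Fit an affine correction cop_afp ~ a * cop_wbb + b by least squares.
A = np.column_stack([cop_wbb, np.ones_like(cop_wbb)])
(a, b), *_ = np.linalg.lstsq(A, cop_afp, rcond=None)
cop_cal = a * cop_wbb + b

rmse = lambda x, y: np.sqrt(np.mean((x - y) ** 2))
print(rmse(cop_wbb, cop_afp), rmse(cop_cal, cop_afp))  # error drops after calibration
```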
Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1993-01-01
This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.
ERIC Educational Resources Information Center
Tiruneh, Dawit Tibebu; Weldeslassie, Ataklti G.; Kassa, Abrham; Tefera, Zinaye; De Cock, Mieke; Elen, Jan
2016-01-01
Identifying effective instructional approaches that stimulate students' critical thinking (CT) has been the focus of a large body of empirical research. However, there is little agreement on the instructional principles and procedures that are theoretically sound and empirically valid to developing both domain-specific and domain-general CT…
Elhanan, Gai; Ochs, Christopher; Mejino, Jose L V; Liu, Hao; Mungall, Christopher J; Perl, Yehoshua
2017-06-01
To examine whether disjoint partial-area taxonomy, a semantically-based evaluation methodology that has been successfully tested in SNOMED CT, will perform with similar effectiveness on Uberon, an anatomical ontology that belongs to a structurally similar family of ontologies as SNOMED CT. A disjoint partial-area taxonomy was generated for Uberon. One hundred randomly selected test concepts that overlap between partial-areas were matched to a same-size control sample of non-overlapping concepts. The samples were blindly inspected for non-critical issues and presumptive errors, first by a general domain expert whose results were then confirmed or rejected by a highly experienced anatomical ontology domain expert. Reported issues were subsequently reviewed by Uberon's curators. Overlapping concepts in Uberon's disjoint partial-area taxonomy exhibited a significantly higher rate of all issues. Clear-cut presumptive errors trended similarly but did not reach statistical significance. A sub-analysis of overlapping concepts with three or more relationship types indicated a much higher rate of issues. Overlapping concepts from Uberon's disjoint abstraction network are quite likely (up to 28.9%) to exhibit issues. The results suggest that the methodology transfers well between ontologies of the same family. Although Uberon exhibited relatively few overlapping concepts, the methodology can be combined with other semantic indicators to expand the process to other concepts within the ontology that will generate high yields of discovered issues. Copyright © 2017 Elsevier B.V. All rights reserved.
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error-reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences in pre- and post-test incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% to 15.7%. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "management of documents", "environmental management") in the experimental group, while it decreased less in the control group, to which the ordinary error-reporting method was applied. An error-reporting system makes it possible to share errors and learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk-management efforts across the whole health care system.
Meurier, C E
2000-07-01
Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.
ERIC Educational Resources Information Center
Tiruneh, Dawit Tibebu; De Cock, Mieke; Weldeslassie, Ataklti G.; Elen, Jan; Janssen, Rianne
2017-01-01
Although the development of critical thinking (CT) is a major goal of science education, adequate emphasis has not been given to the measurement of CT skills in specific science domains such as physics. Recognizing that adequately assessing CT implies the assessment of both domain-specific and domain-general CT skills, this study reports on the…
Superfluid in a shaken optical lattice: quantum critical dynamics and topological defect engineering
NASA Astrophysics Data System (ADS)
Gaj, Anita; Feng, Lei; Clark, Logan W.; Chin, Cheng
2017-04-01
We present our recent studies of non-equilibrium dynamics in Bose-Einstein condensates using the shaken optical lattice. By increasing the shaking amplitude we observe a quantum phase transition from an ordinary superfluid to an effectively ferromagnetic superfluid composed of discrete domains with different quasi-momentum. We investigate the critical dynamics during which the domain structure and domain walls emerge. We demonstrate the use of a digital micromirror device to deterministically create desired domain structure. Using this technique we develop a clearer picture of the quantum critical dynamics at early times and its impact on the domain structure long after the transition.
Error Sources Affecting the Results of a One-Way Nested Ocean Regional Circulation Model
NASA Astrophysics Data System (ADS)
Pham, S. V.
2016-02-01
The Ocean Regional Circulation Model (ORCM) is an essential tool for resolving regional scales by dynamically downscaling the results of coarsely resolved global models. However, when downscaling from the coarse resolution of a global model or observations to a small scale, errors are generated due to the difference in resolution and the lateral updating frequency. This research evaluated the effect of four main error sources on the results of ocean regional circulation models (ORCMs) when downscaling and nesting output data from ocean global circulation models (OGCMs): the formulation of the lateral boundary conditions (LBCs), the difference in spatial resolution between driving and driven data, the frequency of updating the LBCs, and the domain size. The errors contributed by each source to the results of the ORCMs were investigated separately by applying the Big-Brother Experiment (BBE). With a 3 km grid resolution for the ORCM in the BBE framework, the results clearly show that ORCM simulations depend significantly on the domain size and especially on the spatial and temporal resolution of the LBCs. The ratio of spatial resolution between the driving data and the driven model can be up to 3, and the updating frequency of the LBCs can be as coarse as every 6 hours. The optimal domain size of the ORCM can be around 2 to 10 times smaller than the OGCM domain. Key words: ORCMs, error source, lateral boundary conditions, domain size. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Developing total management system for the Keum river estuary and coast" and "Development of Technology for CO2 Marine Geological Storage". We also thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.
Specification-based software sizing: An empirical investigation of function metrics
NASA Technical Reports Server (NTRS)
Jeffery, Ross; Stathis, John
1993-01-01
For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.
Dysfunctional error-related processing in female psychopathy
Steele, Vaughn R.; Edwards, Bethany G.; Bernat, Edward M.; Calhoun, Vince D.; Kiehl, Kent A.
2016-01-01
Neurocognitive studies of psychopathy have predominantly focused on male samples. Studies have shown that female psychopaths exhibit similar affective deficits as their male counterparts, but results are less consistent across cognitive domains including response modulation. As such, there may be potential gender differences in error-related processing in psychopathic personality. Here we investigate response-locked event-related potential (ERP) components [the error-related negativity (ERN/Ne) related to early error-detection processes and the error-related positivity (Pe) involved in later post-error processing] in a sample of incarcerated adult female offenders (n = 121) who performed a response inhibition Go/NoGo task. Psychopathy was assessed using the Hare Psychopathy Checklist-Revised (PCL-R). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Consistent with previous research performed in psychopathic males, female psychopaths exhibited specific deficiencies in the neural correlates of post-error processing (as indexed by reduced Pe amplitude) but not in error monitoring (as indexed by intact ERN/Ne amplitude). Specifically, psychopathic traits reflecting interpersonal and affective dysfunction remained significant predictors of both time-domain and PCA measures reflecting reduced Pe mean amplitude. This is the first evidence to suggest that incarcerated female psychopaths exhibit similar dysfunctional post-error processing as male psychopaths. PMID:26060326
ERIC Educational Resources Information Center
O'Connell, Ann Aileen
The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…
Fail Better: Toward a Taxonomy of E-Learning Error
ERIC Educational Resources Information Center
Priem, Jason
2010-01-01
The study of student error, important across many fields of educational research, has begun to attract interest in the field of e-learning, particularly in relation to usability. However, it remains unclear when errors should be avoided (as usability failures) or embraced (as learning opportunities). Many domains have benefited from taxonomies of…
NASA Astrophysics Data System (ADS)
Eppenhof, Koen A. J.; Pluim, Josien P. W.
2017-02-01
Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
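A minimal PyTorch sketch of this kind of patch-based error regressor follows. The architecture, patch size, and data are illustrative assumptions, not the network described in the paper: the input is a two-channel patch (fixed image and registered moving image) and the target is the ground-truth residual deformation norm at the patch center, which is available by construction for artificially deformed training images.

```python
import torch
import torch.nn as nn

class ErrorNet(nn.Module):
    """Regress the registration error (residual deformation norm, in
    pixels) at the center of a 2-channel patch pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):              # x: (batch, 2, 32, 32)
        return self.head(self.features(x)).squeeze(-1)

model = ErrorNet()
patches = torch.randn(4, 2, 32, 32)    # fixed + registered moving patches
target = torch.rand(4) * 8.0           # ground-truth error, 0-8 pixels
loss = nn.functional.mse_loss(model(patches), target)
loss.backward()                        # one illustrative training step
print(float(loss))
```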
NASA Astrophysics Data System (ADS)
Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
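The MMSE-FDE step mentioned here has a standard one-tap-per-subcarrier form, sketched below with synthetic QPSK data. The weight w(k) = H*(k) / (|H(k)|^2 + sigma^2/Es) is the textbook expression; the channel model and parameters are illustrative, not those of the paper's simulations.

```python
import numpy as np

def mmse_fde(received_freq, channel, noise_var, symbol_energy=1.0):
    """One-tap MMSE frequency-domain equalization per frequency bin.

    Reduces to zero-forcing (1/H) as noise_var -> 0.
    """
    w = np.conj(channel) / (np.abs(channel) ** 2 + noise_var / symbol_energy)
    return w * received_freq

rng = np.random.default_rng(3)
n = 64
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n) / np.sqrt(2)
channel = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
received = channel * symbols + noise   # flat per-bin model of selective fading
equalized = mmse_fde(received, channel, noise_var=0.01)
print(np.mean(np.sign(equalized.real) != np.sign(symbols.real)))  # BER proxy
```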
Vincenti, H.; Vay, J. -L.
2015-11-22
Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with domain decomposition techniques that may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.
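As a much simpler instance of this kind of discretization-error analysis, the discrete dispersion relation of the standard second-order Yee scheme can be evaluated in closed form to quantify phase-velocity error at any resolution and propagation angle. The sketch below is illustrative only and does not reproduce the authors' arbitrary-order, arbitrary-stencil model.

```python
import numpy as np

def phase_velocity_error(ppw, theta, courant=0.5):
    """Relative phase-velocity error of a plane wave in the 2D Yee FDTD
    scheme, from the discrete dispersion relation
    (sin(w*dt/2)/(c*dt))^2 = (sin(kx*dx/2)/dx)^2 + (sin(ky*dy/2)/dy)^2,
    for `ppw` grid points per wavelength at propagation angle `theta`."""
    c, dx, dy = 1.0, 1.0, 1.0
    dt = courant * dx / c              # stable for courant <= 1/sqrt(2) in 2D
    k = 2 * np.pi / (ppw * dx)
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    rhs = (np.sin(kx * dx / 2) / dx) ** 2 + (np.sin(ky * dy / 2) / dy) ** 2
    omega = (2.0 / dt) * np.arcsin(c * dt * np.sqrt(rhs))
    return omega / (c * k) - 1.0       # 0 would mean dispersion-free

for theta in [0.0, np.pi / 8, np.pi / 4]:
    print(f"{np.degrees(theta):5.1f} deg: {phase_velocity_error(20, theta):+.2e}")
```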
Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G
2018-01-01
The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown preparation made by a faculty member on a dentoform against modified preparations. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
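The paper's combined testing procedure is specific, but one standard building block, testing residuals for remaining serial correlation, can be sketched with a Ljung-Box whiteness test. The data below are synthetic: white noise with and without an added sinusoidal, multipath-like component.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
n = 2000
white = rng.normal(0, 0.003, n)                     # pure white noise (m)
t = np.arange(n)
multipath = 0.002 * np.sin(2 * np.pi * t / 300.0)   # correlated unmodeled error
residuals = {"clean": white, "dirty": white + multipath}

for name, resid in residuals.items():
    lb = acorr_ljungbox(resid, lags=[20], return_df=True)
    print(name, "Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))
# A small p-value rejects whiteness, flagging significant correlated
# (unmodeled) errors in the residual series.
```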
A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2009-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and show demonstrably better results.
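A toy version of the core idea, scoring single attribute=value constraints by how strongly they shift the class distribution toward a preferred class, can be written in a few lines. This is a naive stand-in for treatment learners such as TAR3, not the benchmarked implementations; the data are invented.

```python
from collections import Counter

def best_treatment(rows, classes, good="pass"):
    """Find the single attribute=value constraint that most enriches the
    preferred class when the data are restricted to it (a toy stand-in
    for minimal contrast-set / treatment learning)."""
    baseline = Counter(classes)[good] / len(classes)
    best, best_lift = None, 1.0
    for col in rows[0]:
        for val in {r[col] for r in rows}:
            keep = [c for r, c in zip(rows, classes) if r[col] == val]
            lift = (Counter(keep)[good] / len(keep)) / baseline
            if lift > best_lift:
                best, best_lift = (col, val), lift
    return best, best_lift

rows = [{"gain": "high", "mode": "a"}, {"gain": "high", "mode": "b"},
        {"gain": "low", "mode": "a"}, {"gain": "low", "mode": "b"},
        {"gain": "low", "mode": "a"}, {"gain": "high", "mode": "a"}]
classes = ["fail", "fail", "pass", "pass", "pass", "fail"]
print(best_treatment(rows, classes))  # (('gain', 'low'), 2.0)
```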
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
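The frequency-domain approach summarized above is commonly formulated as a quasi-static transfer-matrix (T-matrix) update on the vibration harmonics. A minimal sketch with an invented 4-harmonic linear model (the report's actual plant models and identification schemes are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 4))        # transfer matrix: control harmonics -> vibration
u = np.zeros(4)                    # harmonic control amplitudes
z0 = rng.normal(size=4) * 5.0      # open-loop vibration harmonics

for _ in range(10):
    z = z0 + T @ u                 # vibration harmonics measured this cycle
    u = u - np.linalg.pinv(T) @ z  # quasi-static frequency-domain update

print(np.linalg.norm(z0 + T @ u))  # residual vibration is driven toward zero
```

In practice T must be identified from data, which is where the system identification and robustness questions compared in the report arise.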
Fluency and belief bias in deductive reasoning: new indices for old effects
Trippas, Dries; Handley, Simon J.; Verde, Michael F.
2014-01-01
Models based on signal detection theory (SDT) have occupied a prominent role in domains such as perception, categorization, and memory. Recent work by Dube et al. (2010) suggests that the framework may also offer important insights in the domain of deductive reasoning. Belief bias in reasoning has traditionally been examined using indices based on raw endorsement rates, indices that critics have claimed are highly problematic. We discuss a new set of SDT indices fit for the investigation of belief bias and apply them to new data examining the effect of perceptual disfluency on belief bias in syllogisms. In contrast to the traditional approach, the SDT indices do not violate important statistical assumptions, resulting in a decreased Type 1 error rate. Based on analyses using these novel indices we demonstrate that perceptual disfluency leads to decreased reasoning accuracy, contrary to predictions. Disfluency also appears to eliminate the typical link found between cognitive ability and the effect of beliefs on accuracy. Finally, replicating previous work, we demonstrate that cognitive ability leads to an increase in reasoning accuracy and a decrease in the response bias component of belief bias. PMID:25009515
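The classic SDT indices that such new measures build on are easy to state concretely. A minimal sketch, treating endorsements of valid syllogisms as hits and endorsements of invalid ones as false alarms (a common convention in this literature; the counts are invented):

```python
from scipy.stats import norm

def sdt_indices(hits, misses, fas, crs):
    """Standard SDT sensitivity (d') and response bias (c), with a
    log-linear correction to avoid infinite z-scores at rates 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (fas + 0.5) / (fas + crs + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# Hypothetical counts: valid syllogisms endorsed/rejected, invalid endorsed/rejected.
print(sdt_indices(hits=35, misses=13, fas=20, crs=28))
```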
Forecasting volcanic air pollution in Hawaii: Tests of time series models
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2012-12-01
Volcanic air pollution, known as vog (volcanic smog) has recently become a major issue in the Hawaiian islands. Vog is caused when volcanic gases react with oxygen and water vapor. It consists of a mixture of gases and aerosols, which include sulfur dioxide and other sulfates. The source of the volcanic gases is the continuing eruption of Mount Kilauea. This paper studies predicting vog using statistical methods. The data sets include time series for SO2 and SO4, over locations spanning the west, south and southeast coasts of Hawaii, and the city of Hilo. The forecasting models include regressions and neural networks, and a frequency domain algorithm. The most typical pattern for the SO2 data is for the frequency domain method to yield the most accurate forecasts over the first few hours, and at the 24 h horizon. The neural net places second. For the SO4 data, the results are less consistent. At two sites, the neural net generally yields the most accurate forecasts, except at the 1 and 24 h horizons, where the frequency domain technique wins narrowly. At one site, the neural net and the frequency domain algorithm yield comparable errors over the first 5 h, after which the neural net dominates. At the remaining site, the frequency domain method is more accurate over the first 4 h, after which the neural net achieves smaller errors. For all the series, the average errors are well within one standard deviation of the actual data at all the horizons. However, the errors also show irregular outliers. In essence, the models capture the central tendency of the data, but are less effective in predicting the extreme events.
Prediction of final error level in learning and repetitive control
NASA Astrophysics Data System (ADS)
Levoci, Peter A.
Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, for example to isolate fine-pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC), which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real-world plant and measurement noise, and quantization noise (from analog-to-digital and digital-to-analog converters), are acted on by these control methods as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first-order ILC, of higher-order ILC including current-cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time-domain approach that involves large matrices. The time-domain approach was previously developed for ILC and handles a certain class of ILC methods. Here, methods are created to include zero-phase filtering, which is very important in creating practical designs. Also, time-domain methods are developed for higher-order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if in fact zero error is reached at the sample times. This is independent of the ILC law used, and is purely a property of the physical system. Methods are developed to address this issue.
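The noise-amplification point can be reproduced with a toy simulation: a simple ILC update on a unit-gain plant drives the tracking error down across iterations, but because each trial's measurement noise is treated as a repeating error to cancel, the RMS error floors above zero. All parameters below are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_iters, phi = 50, 60, 0.5   # horizon, trials, learning gain
g = 1.0                               # toy unit-gain plant
u = np.zeros(n_steps)
y_des = np.sin(np.linspace(0, 2 * np.pi, n_steps))

rms = []
for _ in range(n_iters):
    y = g * u + 0.02 * rng.normal(size=n_steps)  # output + measurement noise
    e = y_des - y
    u = u + phi * e                              # ILC update: learn from last trial
    rms.append(np.sqrt(np.mean(e ** 2)))

# Error drops quickly, then floors above the deterministic zero-error level:
# each noise realization is (wrongly) treated as a repeating error to cancel.
print(rms[0], min(rms), rms[-1])
```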
Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.
Liu, Siwei; Molenaar, Peter
2016-01-01
This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
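The core surrogate-construction step, randomizing Fourier phases while preserving the amplitude spectrum, can be sketched for a single series as follows. This is the generic phase-randomization recipe, not the article's full inference procedure for frequency-domain Granger measures.

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate with the same amplitude spectrum (hence autocovariance)
    as x but uniformly random Fourier phases."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spectrum))
    phases[0] = 0.0                     # keep the mean (DC term) real
    if n % 2 == 0:
        phases[-1] = 0.0                # Nyquist bin must stay real
    surrogate = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(surrogate, n)

rng = np.random.default_rng(11)
x = np.cumsum(rng.normal(size=512))     # example autocorrelated series
s = phase_randomize(x, rng)
# Amplitude spectra match, while temporal (and any cross-series,
# hence causal) alignment is destroyed.
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))
```

Repeating this over many surrogates yields a null distribution against which an observed Granger-causality statistic can be compared.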
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-12-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars can represent the actual prediction error.
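The Gaussian Process variant of this idea, using the predictive standard deviation as a per-compound error bar and domain-of-applicability indicator, can be sketched with scikit-learn. The descriptors, targets, and kernel below are invented stand-ins, not the models fitted in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))    # hypothetical molecular descriptors
y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + 0.1 * rng.normal(size=200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

X_test = np.vstack([rng.normal(size=(5, 10)),         # inside training domain
                    rng.normal(size=(5, 10)) + 6.0])  # far outside it
mean, std = gp.predict(X_test, return_std=True)
# Larger predictive std flags compounds outside the domain of applicability.
print(np.round(std[:5], 2), np.round(std[5:], 2))
```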
Debono, Deborah; Taylor, Natalie; Lipworth, Wendy; Greenfield, David; Travaglia, Joanne; Black, Deborah; Braithwaite, Jeffrey
2017-03-27
Medication errors harm hospitalised patients and increase health care costs. Electronic Medication Management Systems (EMMS) have been shown to reduce medication errors. However, nurses do not always use EMMS as intended, largely because implementation of such patient safety strategies requires clinicians to change their existing practices, routines and behaviour. This study uses the Theoretical Domains Framework (TDF) to identify barriers and targeted interventions to enhance nurses' appropriate use of EMMS in two Australian hospitals. This qualitative study draws on in-depth interviews with 19 acute care nurses who used EMMS. A convenience sampling approach was used. Nurses working on the study units (N = 6) in two hospitals were invited to participate if available during the data collection period. Interviews inductively explored nurses' experiences of using EMMS (step 1). Data were analysed using the TDF to identify theory-derived barriers to nurses' appropriate use of EMMS (step 2). Relevant behaviour change techniques (BCTs) were identified to overcome key barriers to using EMMS (step 3), followed by the identification of potential literature-informed targeted intervention strategies to operationalise the identified BCTs (step 4). Barriers to nurses' use of EMMS in acute care were represented by nine domains of the TDF. Two closely linked domains emerged as major barriers to EMMS use: Environmental Context and Resources (availability and properties of computers on wheels (COWs); technology characteristics; specific contexts; competing demands and time pressure) and Social/Professional Role and Identity (conflict between using EMMS appropriately and executing behaviours critical to nurses' professional role and identity). The study identified three potential BCTs to address the Environmental Context and Resources domain barrier: adding objects to the environment; restructuring the physical environment; and prompts and cues. Seven BCTs to address Social/Professional Role and Identity were identified: social processes of encouragement, pressure or support; information about others' approval; incompatible beliefs; identification of self as role model; framing/reframing; social comparison; and demonstration of behaviour. The study proposes several targeted interventions to deliver these BCTs. The TDF provides a useful approach to identify barriers to nurses' prescribed use of EMMS, and can inform the design of targeted theory-based interventions to improve EMMS implementation.
Test-Enhanced Learning in Competence-Based Predoctoral Orthodontics: A Four-Year Study.
Freda, Nicolas M; Lipp, Mitchell J
2016-03-01
Dental educators intend to promote integration of knowledge, skills, and values toward professional competence. Studies report that retrieval, in the form of testing, results in better learning with retention than traditional studying. The aim of this study was to evaluate test-enhanced experiences on demonstrations of competence in diagnosis and management of malocclusion and skeletal problems. The study participants were all third-year dental students (2011 N=88, 2012 N=74, 2013 N=91, 2014 N=85) at New York University College of Dentistry. The 2013 and 2014 groups received the test-enhanced method emphasizing formative assessments with written and dialogic delayed feedback, while the 2011 and 2012 groups received the traditional approach emphasizing lectures and classroom exercises. The students received six two-hour sessions, spaced one week apart. At the final session, a summative assessment consisting of the same four cases was administered. Students constructed a problem list, treatment objectives, and a treatment plan for each case, scored according to the same criteria. Grades were based on the number of cases without critical errors: A=0 critical errors on four cases, A-=0 critical errors on three cases, B+=0 critical errors on two cases, B=0 critical errors on one case, F=critical errors on all four cases. Performance grades were categorized as high quality (B+, A-, A) and low quality (F, B). The results showed that the test-enhanced groups demonstrated statistically significant benefits at the 95% confidence level compared to the traditional groups when comparing low- and high-quality grades. These performance trends support the continued use of the test-enhanced approach.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
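A hedged sketch of a frequency-domain equation-error fit of this kind, assuming the measured states and inputs have already been transformed to a set of analysis frequencies (the published method obtains these with a recursive Fourier transform); a scalar toy system stands in for the aircraft dynamics:

    import numpy as np

    def eq_error_fit(f, X, U):
        # Equation error at each frequency line: jw*X = A*X + B*U.
        # Solve for [A B] by linear least squares, stacking real and
        # imaginary parts so the unknown parameters stay real.
        jw = 2j * np.pi * f[:, None]
        lhs = jw * X                                # m x n
        reg = np.hstack([X, U])                     # m x (n+p)
        theta, *_ = np.linalg.lstsq(
            np.vstack([reg.real, reg.imag]),
            np.vstack([lhs.real, lhs.imag]), rcond=None)
        return theta.T                              # n x (n+p) = [A B]

    # Toy check: x_dot = -2x + u, i.e. A=-2, B=1, at 20 frequency lines.
    f = np.linspace(0.1, 2.0, 20)
    U = np.ones((20, 1), dtype=complex)
    X = U / (2j * np.pi * f[:, None] + 2.0)         # exact frequency response
    print(eq_error_fit(f, X, U))                    # ~ [[-2.  1.]]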
Roth, Dan
2013-01-01
Objective: This paper presents a coreference resolution system for clinical narratives. Coreference resolution aims at clustering all mentions in a single document into coherent entities. Materials and methods: A knowledge-intensive approach for coreference resolution is employed. The domain knowledge used includes several domain-specific lists, knowledge-intensive mention parsing, and a task-informed discourse model. Mention parsing allows us to abstract over the surface form of the mention and represent each mention using a higher-level representation, which we call the mention's semantic representation (SR). SR reduces the mention to a standard form and hence provides better support for comparing and matching. Existing coreference resolution systems tend to ignore discourse aspects and rely heavily on lexical and structural cues in the text. The authors break from this tradition and present a discourse model for “person” type mentions in clinical narratives, which greatly simplifies the coreference resolution. Results: This system was evaluated on four different datasets which were made available in the 2011 i2b2/VA coreference challenge. The unweighted average of F1 scores (over B-cubed, MUC and CEAF) varied from 84.2% to 88.1%. These experiments show that domain knowledge is effective for different mention types across all the datasets. Discussion: Error analysis shows that most of the recall errors made by the system can be handled by further addition of domain knowledge. The precision errors, on the other hand, are more subtle and indicate the need to understand the relations in which mentions participate for building a robust coreference system. Conclusion: This paper presents an approach that makes extensive use of domain knowledge to significantly improve coreference resolution. The authors state that their system and the knowledge sources developed will be made publicly available. PMID:22781192
Challenge and Error: Critical Events and Attention-Related Errors
ERIC Educational Resources Information Center
Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel
2011-01-01
Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…
Canadian and Japanese Teachers' Conceptions of Critical Thinking: A Comparative Study
ERIC Educational Resources Information Center
Howe, Edward R.
2004-01-01
Canadian and Japanese secondary teachers' conceptions of critical thinking were compared and contrasted. Significant cross-cultural differences were found. While Canadian teachers tended to relate critical thinking to the cognitive domain, Japanese teachers emphasized the affective domain. The quantitative data, effectively reduced through factor…
Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; M.A. Pope; R.M. Ferrer
2010-10-01
The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine the INL's current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 fuel column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC) and Discrete Ordinates (Sn). A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with the Monte Carlo methods, but a consistent bias of 2–3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
NASA Technical Reports Server (NTRS)
Lansing, Faiza S.; Rascoe, Daniel L.
1993-01-01
This paper presents a modified Finite-Difference Time-Domain (FDTD) technique using a generalized conformed orthogonal grid. The use of the generalized conformed-orthogonal-grid FDTD (GFDTD) technique enables the designer to match all the circuit dimensions, hence eliminating a major source of error in the analysis.
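For orientation, the basic FDTD leapfrog that any such conformal-grid variant builds on looks as follows in one dimension (normalized units; this is the textbook Yee scheme, not the GFDTD formulation itself):

    import numpy as np

    nz, nt, c = 200, 400, 0.5        # grid size, time steps, Courant number
    ez = np.zeros(nz)                # E field on integer nodes
    hy = np.zeros(nz - 1)            # H field on staggered half-step nodes
    for n in range(nt):
        hy += c * (ez[1:] - ez[:-1])            # H update from curl of E
        ez[1:-1] += c * (hy[1:] - hy[:-1])      # E update from curl of H
        ez[nz // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source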
Binns-Calvey, Amy E; Malhiot, Alex; Kostovich, Carol T; LaVela, Sherri L; Stroupe, Kevin; Gerber, Ben S; Burkhart, Lisa; Weiner, Saul J; Weaver, Frances M
2017-09-01
"Patient context" indicates patient circumstances and characteristics or states that are essential to address when planning patient care. Specific patient "contextual factors," if overlooked, result in an inappropriate plan of care, a medical error termed a "contextual error." The myriad contextual factors that constitute patient context have been grouped into broad domains to create a taxonomy of challenges to consider when planning care. This study sought to validate a previously identified list of contextual domains. This qualitative study used directed content analysis. In 2014, 19 Department of Veterans Affairs (VA) providers (84% female) and 49 patients (86% male) from two VA medical centers and four outpatient clinics in the Chicago area participated in semistructured interviews and focus groups. Topics included patient-specific, community, and resource-related factors that affect patients' abilities to manage their care. Transcripts were analyzed with a previously identified list of contextual domains as a framework. Analysis of responses revealed that patients and providers identified the same 10 domains previously published, plus 3 additional ones. Based on comments made by patients and providers, the authors created a revised list of 12 domains from themes that emerged. Six pertain to patient circumstances such as access to care and financial situation, and 6 to patient characteristics/states including skills, abilities, and knowledge. Contextual factors in patients' lives may be essential to address for effective care planning. The rubric developed can serve as a "contextual differential" for clinicians to consider when addressing challenges patients face when planning their care.
Analysis of limiting information characteristics of quantum-cryptography protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sych, D V; Grishanin, Boris A; Zadkov, Viktor N
2005-01-31
The problem of increasing the critical error rate of quantum-cryptography protocols by varying a set of letters in a quantum alphabet for a space of fixed dimensionality is studied. Quantum alphabets forming regular polyhedra on the Bloch sphere and the continual alphabet equally including all the quantum states are considered. It is shown that, in the absence of basis reconciliation, a protocol with the tetrahedral alphabet has the highest critical error rate among the protocols considered, while after basis reconciliation, a protocol with the continual alphabet possesses the highest critical error rate. (quantum optics and quantum computation)
Conical-Domain Model for Estimating GPS Ionospheric Delays
NASA Technical Reports Server (NTRS)
Sparks, Lawrence; Komjathy, Attila; Mannucci, Anthony
2009-01-01
The conical-domain model is a computational model, now undergoing development, for estimating ionospheric delays of Global Positioning System (GPS) signals. Relative to the standard ionospheric delay model described below, the conical-domain model offers improved accuracy. In the absence of selective availability, the ionosphere is the largest source of error for single-frequency users of GPS. Because ionospheric signal delays contribute to errors in GPS position and time measurements, satellite-based augmentation systems (SBASs) have been designed to estimate these delays and broadcast corrections. Several national and international SBASs are currently in various stages of development to enhance the integrity and accuracy of GPS measurements for airline navigation. In the Wide Area Augmentation System (WAAS) of the United States, slant ionospheric delay errors and confidence bounds are derived from estimates of vertical ionospheric delay modeled on a grid at regularly spaced intervals of latitude and longitude. The estimate of vertical delay at each ionospheric grid point (IGP) is calculated from a planar fit of neighboring slant delay measurements, projected to vertical using a standard, thin-shell model of the ionosphere. Interpolation on the WAAS grid enables estimation of the vertical delay at the ionospheric pierce point (IPP) corresponding to any arbitrary measurement of a user. (The IPP of a given user's measurement is the point where the GPS signal ray path intersects a reference ionospheric height.) The product of the interpolated value and the user's thin-shell obliquity factor provides an estimate of the user's ionospheric slant delay. Two types of error that restrict the accuracy of the thin-shell model are absent in the conical domain model: (1) error due to the implicit assumption that the electron density is independent of the azimuthal angle at the IPP and (2) error arising from the slant-to-vertical conversion. At low latitudes or at mid-latitudes under disturbed conditions, the accuracy of SBAS systems based upon the thin-shell model suffers due to the presence of complex ionospheric structure, high delay values, and large electron density gradients. Interpolation on the vertical delay grid serves as an additional source of delay error. The conical-domain model permits direct computation of the user's slant delay estimate without the intervening use of a vertical delay grid. The key is to restrict each fit of GPS measurements to a spatial domain encompassing signals from only one satellite. The conical domain model is so named because each fit involves a group of GPS receivers that all receive signals from the same GPS satellite (see figure); the receiver and satellite positions define a cone, the satellite position being the vertex. A user within a given cone evaluates the delay to the satellite directly, using (1) the IPP coordinates of the line of sight to the satellite and (2) broadcast fit parameters associated with the cone. The conical-domain model partly resembles the thin-shell model in that both models reduce an inherently four-dimensional problem to two dimensions. However, unlike the thin-shell model, the conical domain model does not involve any potentially erroneous simplifying assumptions about the structure of the ionosphere.
In the conical domain model, the initially four-dimensional problem becomes truly two-dimensional in the sense that once a satellite location has been specified, any signal path emanating from a satellite can be identified by only two coordinates; for example, the IPP coordinates. As a consequence, a user's slant-delay estimate converges to the correct value in the limit that the receivers converge to the user's location (or, equivalently, in the limit that the measurement IPPs converge to the user's IPP).
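The slant-to-vertical conversion that the conical-domain model avoids is easy to state in code; the sketch below implements the standard thin-shell obliquity factor, with an assumed 350 km shell height:

    import numpy as np

    RE, H = 6371.0, 350.0            # Earth radius and assumed shell height, km

    def thin_shell_obliquity(elev_deg):
        # Maps vertical delay at the IPP to slant delay along the ray.
        e = np.radians(elev_deg)
        return 1.0 / np.sqrt(1.0 - (RE * np.cos(e) / (RE + H)) ** 2)

    vertical_delay_m = 3.0           # interpolated grid estimate, metres
    print(vertical_delay_m * thin_shell_obliquity(15.0))   # slant delay, metres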
Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A
2018-05-01
The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation sixfold compared to the other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to the intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
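The final step from (relative electron density, I-value) to SPR can be sketched directly from the Bethe equation; the constants and example values below are illustrative assumptions, and shell and density corrections are omitted:

    import numpy as np

    ME_C2 = 0.511e6        # electron rest energy, eV
    I_WATER = 75.0         # mean excitation energy of water, eV (a common choice)

    def spr(rho_e_rel, I_eV, beta):
        # SPR = rho_e,rel * L(I) / L(I_water), with the Bethe stopping number
        # L(I) = ln(2*me*c^2*beta^2 / (I*(1 - beta^2))) - beta^2.
        L = lambda I: np.log(2 * ME_C2 * beta**2 / (I * (1 - beta**2))) - beta**2
        return rho_e_rel * L(I_eV) / L(I_WATER)

    print(spr(1.05, 70.0, beta=0.43))   # soft-tissue-like example near 100 MeV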
Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle
2016-01-01
With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) considering the motion errors for OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency improvement. PMID:27845757
Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.
Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei
2014-02-01
Near-duplicate retrieval (NDR) in merchandize images is of great importance to a lot of online applications on e-Commerce websites. In those applications where the requirement of response time is critical, however, the conventional techniques developed for a general purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compromise the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced because of the quantization process where the visual words are considered individually, which has ignored the contextual relations among words. We propose a "spelling or phrase correction" like process for NDR, which extends the concept of collocations to visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing vocabulary size by 1000% times, and under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by color-moment feature, which reduces the time cost by 9202% while maintaining comparable performance to the state-of-the-art methods.
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
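In the same spirit as REPTool (though not its actual code), Latin Hypercube Sampling of input-raster errors through a toy cell-wise model takes only a few lines with SciPy; the rasters, errors, and coefficients below are assumptions:

    import numpy as np
    from scipy.stats import norm, qmc

    # Two input rasters with user-specified (spatially invariant) errors,
    # pushed through a toy cell-wise model y = 0.7*r1 + 1.3*r2.
    u = qmc.LatinHypercube(d=2, seed=1).random(1024)
    r1 = norm.ppf(u[:, 0], loc=10.0, scale=2.0)
    r2 = norm.ppf(u[:, 1], loc=5.0, scale=1.0)
    y = 0.7 * r1 + 1.3 * r2
    print(y.mean(), y.std())           # output uncertainty for this cell

    # Relative Variance Contribution of each input (linear-model case).
    v1, v2 = (0.7 * 2.0) ** 2, (1.3 * 1.0) ** 2
    print(v1 / (v1 + v2), v2 / (v1 + v2))

For this linear toy model the sampled output variance can be checked against the analytical contributions in the last two lines.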
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Wang, Chenyu; Li, Mingjie
2018-01-31
In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices for modeling, such as the mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling-error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling-error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling-error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable estimate of the modeling-error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
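A one-dimensional illustration of the PDF-shaping index (the paper shapes a two-dimensional PDF over time and space; the error sample, bandwidth, and target width below are assumptions):

    import numpy as np
    from scipy.stats import gaussian_kde, norm

    rng = np.random.default_rng(2)
    errors = rng.standard_t(df=5, size=500)      # stand-in modeling errors
    kde = gaussian_kde(errors)                   # data-driven error-PDF estimate

    grid = np.linspace(-6.0, 6.0, 241)
    target = norm.pdf(grid, loc=0.0, scale=0.5)  # "high and narrow" target PDF
    J = np.trapz((kde(grid) - target) ** 2, grid)
    print("quadratic PDF-shaping index:", J)     # minimized w.r.t. model weights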
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
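The correction amounts to replacing the usual tip likelihood with one that marginalizes over sequencing error; a minimal per-site sketch, assuming a uniform error rate eps spread over the three wrong bases:

    def tip_likelihood(obs, true_probs, eps=0.01):
        # P(observed base | true base): 1-eps on a match, eps/3 otherwise,
        # marginalized over candidate true bases at this tip and site.
        return sum(p * ((1 - eps) if b == obs else eps / 3)
                   for b, p in zip("ACGT", true_probs))

    print(tip_likelihood("A", [0.97, 0.01, 0.01, 0.01], eps=0.02))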
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators could minimise these errors. This study evaluated the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints it as a valid prescription chart. The electronic infusion calculator was implemented from April 2015 in the Paediatric Critical Care Unit. A prospective study, five months before and five months after implementation of the electronic infusion calculator, was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to electronic infusion calculator implementation and 119 electronic infusion calculator prescriptions were reviewed after implementation. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%), with p < 0.001. Electronic infusion calculator prescriptions had no errors in dose, volume and rate calculation as compared to handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
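The arithmetic such a calculator automates is simple; what the electronic version removes is the per-prescription chance of slipping it. A generic weight-based sketch (the hospital's actual formulary logic is not published here, so the drug and concentration are illustrative):

    def infusion_rate_ml_h(dose_ug_kg_min, weight_kg, conc_ug_ml):
        # rate [mL/h] = dose [ug/kg/min] * weight [kg] * 60 [min/h] / conc [ug/mL]
        return dose_ug_kg_min * weight_kg * 60.0 / conc_ug_ml

    # e.g. 5 ug/kg/min for a 12 kg child from a 1600 ug/mL preparation:
    print(infusion_rate_ml_h(5, 12, 1600))   # -> 2.25 mL/h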
A frequency-domain estimator for use in adaptive control systems
NASA Technical Reports Server (NTRS)
Lamaire, Richard O.; Valavani, Lena; Athans, Michael; Stein, Gunter
1991-01-01
This paper presents a frequency-domain estimator that can identify both a parametrized nominal model of a plant as well as a frequency-domain bounding function on the modeling error associated with this nominal model. This estimator, which we call a robust estimator, can be used in conjunction with a robust control-law redesign algorithm to form a robust adaptive controller.
Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S
2009-11-01
Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. The frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.
Formulation of a strategy for monitoring control integrity in critical digital control systems
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1991-01-01
Advanced aircraft will require flight-critical computer systems for stability augmentation as well as guidance and control that must perform reliably in adverse, as well as nominal, operating environments. Digital system upset is a functional error mode that can occur in electromagnetically harsh environments, involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. A strategy is presented for dynamic upset detection to be used in the evaluation of critical digital controllers during the design and/or validation phases of development. Critical controllers must be able to operate in adverse environments that result from disturbances caused by electromagnetic sources such as lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The upset detection strategy presented provides dynamic monitoring of a given control computer for degraded functional integrity that can result from redundancy management errors and control command calculation errors that could occur in an electromagnetically harsh operating environment. The use of Kalman filtering, data fusion, and decision theory in monitoring a given digital controller for control calculation errors, redundancy management errors, and control effectiveness is discussed.
Modeling and analysis of pinhole occulter experiment
NASA Technical Reports Server (NTRS)
Ring, J. R.
1986-01-01
The objectives were to improve pointing control system implementation by converting the dynamic compensator from a continuous-domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (the space station, for example).
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio
2016-02-01
The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been attempted successfully at given seismic input, taken as base excitation, including both strong-motion data and single and multiple input ground motions. Rather than following previous attempts that investigate the role of seismic response signals in the time domain, this paper considers the identification analysis in the frequency domain. Results turn out very consistent with the target values, with quite limited errors in the modal estimates, including for the damping ratios, which range from the order of 1% to 10%. Seismic excitation and high damping values, which become critical even for well-spaced modes, do not fulfill traditional FDD assumptions: this demonstrates the consistency of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames at seismic input is feasible, also at concomitant high damping.
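For reference, the core of a basic (non-refined) FDD pass is compact: build the cross-power-spectral-density matrix of the output channels and track its first singular value across frequency. The sketch below uses SciPy and assumes Y holds one response signal per row:

    import numpy as np
    from scipy.signal import csd

    def fdd_singular_values(Y, fs, nperseg=1024):
        # Classic FDD: cross-PSD matrix G(f) of all response channels, then
        # an SVD per frequency line; peaks of the first singular value mark
        # modal frequencies, and the first singular vector the mode shape.
        n = Y.shape[0]
        f, _ = csd(Y[0], Y[0], fs=fs, nperseg=nperseg)
        G = np.empty((len(f), n, n), dtype=complex)
        for i in range(n):
            for j in range(n):
                _, G[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=nperseg)
        s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
        return f, s1

    # Usage: f, s1 = fdd_singular_values(np.random.randn(3, 8192), fs=100.0)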
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
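A damped Newton-Raphson step of the kind described, updating all variables simultaneously and halving the damping coefficient whenever the residual fails to decrease, can be sketched generically (the toy two-equation system below merely stands in for the criticality conditions):

    import numpy as np

    def damped_newton(F, J, x0, tol=1e-10, max_iter=100):
        # All unknowns are updated simultaneously each step; the damping
        # coefficient lam is halved when the residual does not decrease.
        x = np.asarray(x0, float)
        lam, r = 1.0, np.linalg.norm(F(x))
        for _ in range(max_iter):
            step = np.linalg.solve(J(x), F(x))
            while lam > 1e-8 and np.linalg.norm(F(x - lam * step)) >= r:
                lam *= 0.5
            x -= lam * step
            r = np.linalg.norm(F(x))
            if r < tol:
                break
            lam = min(1.0, 2.0 * lam)
        return x

    # Toy system standing in for the criticality conditions:
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
    J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
    print(damped_newton(F, J, [2.0, 0.3]))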
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
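The trade-off the calibration algorithm formalizes can be seen in a deliberately simple stand-in: an LMS-style adaptive estimator (not the authors' Bayesian filter), where the learning rate mu trades transient error against steady-state error:

    import numpy as np

    rng = np.random.default_rng(3)
    w_true, n = 1.0, 5000
    for mu in (0.005, 0.05, 0.5):          # three learning rates
        w, sq = 0.0, []
        for _ in range(n):
            x = rng.standard_normal()
            y = w_true * x + 0.1 * rng.standard_normal()
            w += mu * (y - w * x) * x       # learning-rate-weighted update
            sq.append((w - w_true) ** 2)
        print(mu, np.mean(sq[:200]), np.mean(sq[-200:]))  # transient vs steady state

Small mu converges slowly but settles to a low steady-state error; large mu converges quickly but leaves a noisy steady state.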
Infrequent identity mismatches are frequently undetected
Goldinger, Stephen D.
2014-01-01
The ability to quickly and accurately match faces to photographs bears critically on many domains, from controlling purchase of age-restricted goods to law enforcement and airport security. Despite its pervasiveness and importance, research has shown that face matching is surprisingly error prone. The majority of face-matching research is conducted under idealized conditions (e.g., using photographs of individuals taken on the same day) and with equal proportions of match and mismatch trials, a rate that is likely not observed in everyday face matching. In four experiments, we presented observers with photographs of faces taken an average of 1.5 years apart and tested whether face-matching performance is affected by the prevalence of identity mismatches, comparing conditions of low (10 %) and high (50 %) mismatch prevalence. Like the low-prevalence effect in visual search, we observed inflated miss rates under low-prevalence conditions. This effect persisted when participants were allowed to correct their initial responses (Experiment 2), when they had to verify every decision with a certainty judgment (Experiment 3) and when they were permitted “second looks” at face pairs (Experiment 4). These results suggest that, under realistic viewing conditions, the low-prevalence effect in face matching is a large, persistent source of errors. PMID:24500751
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
NASA Astrophysics Data System (ADS)
Tamilarasan, Ilavarasan; Saminathan, Brindha; Murugappan, Meenakshi
2016-04-01
The past decade has seen phenomenal use of orthogonal frequency division multiplexing (OFDM) in both wired and wireless communication domains, and it has also been proposed in the literature as a future-proof technique for implementing flexible resource allocation in cognitive optical networks. Fiber impairment assessment and adaptive compensation become critical in such implementations. A comprehensive analytical model for impairments in OFDM-based fiber links is developed. The proposed model includes the combined impact of laser phase fluctuations, fiber dispersion, self-phase modulation, cross-phase modulation, four-wave mixing, the nonlinear phase noise due to the interaction of amplified spontaneous emission with fiber nonlinearities, and the photodetector noises. The bit error rate expression for the proposed model is derived based on error vector magnitude estimation. The performance analysis of the proposed model is presented and compared for dispersion-compensated and uncompensated backbone/backhaul links. The results suggest that OFDM would perform better for uncompensated links than for compensated links due to the negligible FWM effects, and that there is a need for flexible compensation. The proposed model can be employed in cognitive optical networks for accurate assessment of fiber-related impairments.
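The paper derives its bit-error-rate expression from the full impairment model via error vector magnitude (EVM); as a rough stand-in for that step, the widely used Gaussian-noise approximation for square M-QAM subcarriers reads:

    from math import erfc, sqrt, log2

    def ber_from_evm(evm_rms, M=16):
        # Common Gaussian-noise approximation for square M-QAM:
        # BER ~ 2*(1 - 1/sqrt(M))/log2(M) * Q(sqrt(3/(M-1)) / EVM_rms).
        Q = lambda t: 0.5 * erfc(t / sqrt(2.0))
        return 2.0 * (1.0 - 1.0 / sqrt(M)) / log2(M) * Q(sqrt(3.0 / (M - 1)) / evm_rms)

    print(ber_from_evm(0.10, M=16))   # ~1.5e-6 for 10% rms EVM on 16-QAM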
Casing pipe damage detection with optical fiber sensors: a case study in oil well constructions
NASA Astrophysics Data System (ADS)
Zhou, Zhi; He, Jianping; Huang, Minghua; He, Jun; Ou, Jinping; Chen, Genda
2010-04-01
Casing pipes in oil well constructions may suddenly buckle inward as the difference between their inside and outside hydrostatic pressures increases. For the safety of construction workers and the steady development of the oil industry, it is critically important to measure the stress state of a casing pipe. This study develops a rugged, real-time monitoring and warning system that combines distributed Brillouin Optical Time Domain Reflectometry (BOTDR) and discrete fiber Bragg grating (FBG) measurements. The BOTDR optical fiber sensors were embedded with no optical fiber splice joints in a fiber reinforced polymer (FRP) rebar, and the FBG sensors were wrapped in epoxy resins and glass cloth, both installed during the segmental construction of the casing pipes. In-situ tests indicate that the proposed sensing system and installation technique can survive the downhole driving process of casing pipes, withstand a harsh service environment, and remain intact with the casing pipes for compatible strain measurements. The relative error of the measured strains between the distributed and discrete sensors is less than 12%. The FBG sensors successfully measured the maximum horizontal principal stress with a relative error of 6.7% in comparison with a cross multi-pole array acoustic instrument.
Δmix parameter in the overlap on domain-wall mixed action
NASA Astrophysics Data System (ADS)
Lujan, M.; Alexandru, A.; Chen, Y.; Draper, T.; Freeman, W.; Gong, M.; Lee, F. X.; Li, A.; Liu, K. F.; Mathur, N.
2012-07-01
A direct calculation of the mixed action parameter Δmix with valence overlap fermions on a domain-wall fermion sea is presented. The calculation is performed on four ensembles of the 2+1 flavor domain-wall gauge configurations: 24³ × 64 (am_l = 0.005, a = 0.114 fm) and 32³ × 64 (am_l = 0.004, 0.006, 0.008, a = 0.085 fm). For pion masses close to 300 MeV we find Δmix = 0.030(6) GeV⁴ at a = 0.114 fm and Δmix = 0.033(12) GeV⁴ at a = 0.085 fm. The results are quite independent of the lattice spacing and they are significantly smaller than the results for valence domain-wall fermions on an asqtad sea or those of valence overlap fermions on a clover sea. Combining the results extracted from these two ensembles, we get Δmix = 0.030(6)(5) GeV⁴, where the first error is statistical and the second is the systematic error associated with the fitting method.
NASA Astrophysics Data System (ADS)
De Felice, Matteo; Petitta, Marcello; Ruti, Paolo
2014-05-01
Photovoltaic capacity is steadily growing in Europe, passing from almost 14 GWp in 2011 to 21.5 GWp in 2012 [1]. Accurate forecasts are needed for planning and operational purposes, with the possibility of modeling and predicting solar variability at different time scales. This study examines the predictability of daily surface solar radiation by comparing ECMWF operational forecasts with CM-SAF satellite measurements on the Meteosat (MSG) full disk domain. The operational forecasts used are the IFS system up to 10 days and the System4 seasonal forecast up to three months. Forecasts are analysed in terms of the average and variance of errors, with error maps and averages over specific domains as functions of prediction lead time. In all cases, forecasts are compared with predictions obtained using persistence and state-of-the-art time-series models. We observe a wide range of errors, with forecast performance strongly affected by orography and season. The lowest errors occur over southern Italy and Spain, with errors in some areas consistently under 10% up to ten days ahead during summer (JJA). Finally, we conclude the study with some insight on how to "translate" the error on solar radiation into error on solar power production, using available production data from solar power plants. [1] EurObserver, "Baromètre Photovoltaïque," Le journal des énergies renouvelables, April 2012.
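A minimal sketch of the kind of benchmarking described, comparing forecast error against the persistence baseline (all values illustrative):

```python
import numpy as np

def mae(pred, obs):
    return np.mean(np.abs(pred - obs))

# daily surface solar radiation, illustrative values (kWh/m^2)
obs = np.array([5.1, 5.3, 4.8, 6.0, 6.2, 5.9])   # satellite-derived "truth"
fcst = np.array([5.0, 5.4, 5.0, 5.7, 6.1, 6.0])  # model forecast at 1-day lead

persistence = obs[:-1]                # yesterday's value as today's forecast
print("forecast MAE   :", mae(fcst[1:], obs[1:]))
print("persistence MAE:", mae(persistence, obs[1:]))
```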
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven
2015-01-15
A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.
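The adjoint-weighted-residual structure behind such goal-based estimators can be written generically as follows (schematic form only; the paper's estimators are specific to its wavelet discretisation):

```latex
% Generic adjoint-weighted-residual form of a goal-based error estimate.
% For a functional J of the angular flux \psi, with discrete solution
% \psi_h and adjoint (importance) solution \psi^\dagger,
\[
  J(\psi) - J(\psi_h) \;\approx\; \langle \psi^\dagger,\, r(\psi_h) \rangle,
  \qquad
  r(\psi_h) = s - L\,\psi_h,
\]
% where L is the transport operator, s the source, and the inner product
% runs over space, angle and energy: the adjoint weights the local
% residual by its importance to the specified functional J.
```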
Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency
NASA Astrophysics Data System (ADS)
Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.
2013-09-01
A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business. Quality management is now mandatory in the production chain. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established, from data acquisition to processing, analysis and visualization, quality management is not yet a standard part of this workflow. Processing data sets with unclear specifications leads to erroneous results and application defects. We show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.
NASA Astrophysics Data System (ADS)
Paul, Prakash
2009-12-01
The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed, depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM). A simple analytical way to decide whether the MRM or the single-region method will be computationally cheaper is also described. To validate the accuracy and the savings in computation time, differently shaped metallic and dielectric obstacles (spheres, ogives, a cube, a flat plate, a multi-layer slab, etc.) are used for the scattering problems. For the radiation problems, waveguide-excited antennas (a horn antenna, a waveguide with flange, a microstrip patch antenna) are used. Using the AEC, the peak reduction in computation time during the iteration is typically a factor of 2 compared to the IABC using the same element orders throughout; in some cases it can be as high as a factor of 4.
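A toy, self-contained illustration of the AEC balancing logic, with the two error estimates modelled as simple decaying functions (placeholders for the real FEM and IABC estimators in the thesis):

```python
# Toy AEC loop: error estimates are modelled as decaying functions of the
# element order p and the IABC iteration level (placeholders only).
def e_disc(p_order):
    return 1.0 / 4.0**p_order        # discretization error falls with order

def e_bnd(bc_level):
    return 0.5 / 3.0**bc_level       # boundary error falls with IABC level

p_order, bc_level, tol = 1, 0, 1e-4
while max(e_disc(p_order), e_bnd(bc_level)) >= tol:
    if e_disc(p_order) >= e_bnd(bc_level):
        p_order += 1                 # refine where the error dominates...
    else:
        bc_level += 1                # ...or improve the boundary condition
print(f"final element order {p_order}, IABC iterations {bc_level}")
```

The point of the loop is the one stated in the abstract: computational effort on the boundary condition is only spent once the discretization error has fallen far enough to make it worthwhile.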
Influenza A virus hemagglutinin glycosylation compensates for antibody escape fitness costs.
Kosik, Ivan; Ince, William L; Gentles, Lauren E; Oler, Andrew J; Kosikova, Martina; Angel, Matthew; Magadán, Javier G; Xie, Hang; Brooke, Christopher B; Yewdell, Jonathan W
2018-01-01
Rapid antigenic evolution enables the persistence of seasonal influenza A and B viruses in human populations despite widespread herd immunity. Understanding viral mechanisms that enable antigenic evolution is critical for designing durable vaccines and therapeutics. Here, we utilize the primerID method of error-correcting viral population sequencing to reveal an unexpected role for hemagglutinin (HA) glycosylation in compensating for fitness defects resulting from escape from anti-HA neutralizing antibodies. Antibody-free propagation following antigenic escape rapidly selected viruses with mutations that modulated receptor binding avidity through the addition of N-linked glycans to the HA globular domain. These findings expand our understanding of the viral mechanisms that maintain fitness during antigenic evolution to include glycan addition, and highlight the immense power of high-definition virus population sequencing to reveal novel viral adaptive mechanisms.
Comparison of universal approximators incorporating partial monotonicity by structure.
Minin, Alexey; Velikova, Marina; Lang, Bernhard; Daniels, Hennie
2010-05-01
Neural networks applied in control loops and safety-critical domains have to meet more requirements than just the overall best function approximation. On the one hand, a small approximation error is required; on the other hand, the smoothness and the monotonicity of selected input-output relations have to be guaranteed. Otherwise, the stability of most control laws is lost. In this article we compare two neural network-based approaches incorporating partial monotonicity by structure, namely the Monotonic Multi-Layer Perceptron (MONMLP) network and the Monotonic MIN-MAX (MONMM) network. We show the universal approximation capabilities of both types of network for partially monotone functions. On a number of datasets, we investigate the advantages and disadvantages of these approaches with respect to approximation performance, model training and convergence. Copyright © 2009 Elsevier Ltd. All rights reserved.
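A minimal NumPy sketch of the MIN-MAX construction (after Sill's monotonic networks; the group counts and data are illustrative): the output is a minimum over groups of maxima of linear units, and constraining the weights on a chosen input to be non-negative makes the network monotonically increasing in that input by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
G, K, D = 3, 4, 2                        # groups, units per group, inputs
W = rng.normal(size=(G, K, D))
W[..., 0] = np.abs(W[..., 0])            # non-negative weights on input 0
b = rng.normal(size=(G, K))

def monmm(x):
    z = W @ x + b                        # (G, K) linear unit activations
    return np.min(np.max(z, axis=1))     # max within groups, min across groups

x = np.array([0.2, -1.0])
print(monmm(x), monmm(x + np.array([0.5, 0.0])))  # second value >= first
```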
Convergence issues in domain decomposition parallel computation of hovering rotor
NASA Astrophysics Data System (ADS)
Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong
2018-05-01
The implicit LU-SGS time integration algorithm has been widely used in parallel computation despite its lack of information from adjacent domains. When applied to the parallel computation of hovering rotor flows in a rotating frame, it gives rise to convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, and it shows that the LU-SGS algorithm introduces errors at boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate as the rotation proceeds, and eventually lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which makes them desirable in domain decomposition parallel computations.
Godfrey, Erin B; Grayman, Justina Kamiel
2014-11-01
Building on previous research on critical consciousness and civic development among youth, the current study examined the extent to which an open climate for discussion-one in which controversial issues are openly discussed with respect for all opinions-relates to youth's critical consciousness and whether this association differs for youth from racial/ethnic majority versus minority backgrounds. Critical consciousness consisted of three components: the ability to critically read social conditions (critical reflection), feelings of efficacy to effect change (sociopolitical efficacy) and actual participation in these efforts (critical action), in both the educational and political/community domains. Open classroom climate was operationalized at the classroom rather than individual student level to more accurately draw links to educational policy and practice. Multilevel analyses of the 1999 IEA Civic Education Study, a nationally-representative sample of 2,774 US ninth-graders (50 % female; 58 % white), revealed that an open classroom climate predicted some, but not all, components of critical consciousness. Specifically, open classroom climate was positively related to sociopolitical efficacy in both the educational and political domains and to critical action in the community domain, but was not related to critical reflection. Few differences in these associations were found for youth from racial/ethnic majority versus minority backgrounds. The exception was sociopolitical efficacy in the educational domain: open classroom climate was particularly predictive of sociopolitical efficacy for minority youth. The findings are discussed in regard to previous research on open classroom climate and youth critical consciousness; and implications for future research and educational practice are drawn.
NASA Astrophysics Data System (ADS)
Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo
2015-12-01
A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is derived theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions are considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, is completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order terms and cannot be neglected. In this context, the error propagation is simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention is devoted to the recently proposed approach based on at least one significant digit in the measurement; the resulting LOQ values are very large, preventing quantitative analysis. It is found that the Currie schemas in the signal and concentration domains give similar LOQ values, but the former formulation is to be preferred as it is more easily computable.
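A hedged sketch of the two-component-variance idea in the concentration domain: precision combines an additive term s0 (instrumental plus non-instrumental) with a proportional term k·c, and the LOQ is the concentration at which the relative standard deviation falls to a target (10% here). This is the generic schema only, not the paper's full third-order Taylor treatment; all values are illustrative.

```python
import numpy as np

s0, k = 0.02, 0.05          # additive sd (conc. units) and proportional sd
target_rsd = 0.10           # quantification requires rsd(c) <= 10%

def rsd(c):
    return np.sqrt(s0**2 + (k * c) ** 2) / c

loq = s0 / np.sqrt(target_rsd**2 - k**2)  # solves rsd(c) = target_rsd
print(loq, rsd(loq))                      # rsd(loq) comes out at 0.10
```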
Critical older driver errors in a national sample of serious U.S. crashes.
Cicchino, Jessica B; McCartt, Anne T
2015-07-01
Older drivers are at increased risk of crash involvement per mile traveled. The purpose of this study was to examine older driver errors in serious crashes to determine which errors are most prevalent. The National Highway Traffic Safety Administration's National Motor Vehicle Crash Causation Survey collected in-depth, on-scene data for a nationally representative sample of 5470 U.S. police-reported passenger vehicle crashes during 2005-2007 for which emergency medical services were dispatched. There were 620 crashes involving 647 drivers aged 70 and older, representing 250,504 crash-involved older drivers. The proportions of various critical errors made by drivers aged 70 and older were compared with those made by drivers aged 35-54. Driver error was the critical reason for 97% of crashes involving older drivers. Among older drivers who made critical errors, the most common were inadequate surveillance (33%) and misjudgment of the length of a gap between vehicles or of another vehicle's speed, illegal maneuvers, medical events, and daydreaming (6% each). Inadequate surveillance (33% vs. 22%) and gap or speed misjudgment errors (6% vs. 3%) were more prevalent among older drivers than middle-aged drivers. Seventy-one percent of older drivers' inadequate surveillance errors were due to looking and not seeing another vehicle or failing to see a traffic control, rather than failing to look, compared with 40% of inadequate surveillance errors among middle-aged drivers. About two-thirds (66%) of older drivers' inadequate surveillance errors and 77% of their gap or speed misjudgment errors were made when turning left at intersections. When older drivers traveled off the edge of the road or over the lane line, this was most commonly due to non-performance errors such as medical events (51% and 44%, respectively), whereas middle-aged drivers were involved in these crash types for other reasons. Gap or speed misjudgment errors and inadequate surveillance errors were significantly more prevalent among female older drivers than among female middle-aged drivers, but the prevalence of these errors did not differ significantly between older and middle-aged male drivers. These errors comprised 51% of errors among older female drivers but only 31% among older male drivers. Efforts to reduce older driver crash involvements should focus on diminishing the likelihood of the most common driver errors. Countermeasures that simplify or remove the need to make left turns across traffic, such as roundabouts, protected left turn signals, and diverging diamond intersection designs, could decrease the frequency of inadequate surveillance and gap or speed misjudgment errors. In the future, vehicle-to-vehicle and vehicle-to-infrastructure communications may also help protect older drivers from these errors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jones, Tamara Bertrand; Guthrie, Kathy L; Osteen, Laura
2016-12-01
This chapter introduces the critical domains of culturally relevant leadership learning. The model explores how capacity, identity, and efficacy of student leaders interact with dimensions of campus climate. © 2016 Wiley Periodicals, Inc., A Wiley Company.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound on the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second, lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
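For reference, the bound discussed has the standard Gallager form below (a well-known result, reproduced here for clarity):

```latex
% Standard form of Gallager's random coding bound: for block length N and
% rate R, the ensemble-average error probability satisfies
\[
  \overline{P}_e \;\le\; \exp\!\bigl[-N\,E_r(R)\bigr],
  \qquad
  E_r(R) \;=\; \max_{0 \le \rho \le 1}\,\bigl[E_0(\rho) - \rho R\bigr],
\]
% where E_0(\rho) is the Gallager function of the channel. The abstract's
% point: below the second critical rate the average code actually attains
% this exponent, so the bound's weakness there stems from the best codes
% being much better than the average, not from loose bounding.
```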
Situating Student Errors: Linguistic-to-Algebra Translation Errors
ERIC Educational Resources Information Center
Adu-Gyamfi, Kwaku; Bossé, Michael J.; Chandler, Kayla
2015-01-01
While it is well recognized that students are prone to difficulties when performing linguistic-to-algebra translations, the nature of students' difficulties remains an issue of contention. Moreover, the literature indicates that these difficulties are not easily remediated by domain-specific instruction. Some have opined that this is the case…
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets (POCS). Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering introduced undesired side effects that offset its advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
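A minimal sketch of the POCS-style loop described, using PyWavelets; the damage model and the fixed 3×3 filter are simplifications of the paper's approach, which selects the mask size adaptively from an edge map:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

img = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))      # toy image
arr, slices = pywt.coeffs_to_array(pywt.wavedec2(img, "haar", level=3))

known = np.ones_like(arr, dtype=bool)
known[-16:, -16:] = False                    # simulate a damaged detail block
damaged = np.where(known, arr, 0.0)

estimate = pywt.waverec2(
    pywt.array_to_coeffs(damaged, slices, output_format="wavedec2"), "haar")
for _ in range(10):
    estimate = uniform_filter(estimate, size=3)           # set 1: smoothness
    a, _ = pywt.coeffs_to_array(pywt.wavedec2(estimate, "haar", level=3))
    a[known] = arr[known]                                 # set 2: known coeffs
    estimate = pywt.waverec2(
        pywt.array_to_coeffs(a, slices, output_format="wavedec2"), "haar")
```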
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, Louise Marie
2007-09-01
A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. The effectiveness of pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and the potential occurrence of learning effects due to a requirement for frequent testing. This report reviews the different types of reliability and validity methods and testing and statistical analysis procedures to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.
Ammari, Maha Al; Sultana, Khizra; Yunus, Faisal; Ghobain, Mohammed Al; Halwan, Shatha M. Al
2016-01-01
Objectives: To assess the proportion of critical errors committed while demonstrating the inhaler technique in hospitalized patients diagnosed with asthma and chronic obstructive pulmonary disease (COPD). Methods: This cross-sectional observational study was conducted in 47 asthmatic and COPD patients using inhaler devices. The study took place at King Abdulaziz Medical City, Riyadh, Saudi Arabia between September and December 2013. Two pharmacists independently assessed inhaler technique with a validated checklist. Results: Seventy percent of patients made at least one critical error while demonstrating their inhaler technique, and the mean number of critical errors per patient was 1.6. Most patients used metered dose inhaler (MDI), and 73% of MDI users and 92% of dry powder inhaler users committed at least one critical error. Conclusion: Inhaler technique in hospitalized Saudi patients was inadequate. Health care professionals should understand the importance of reassessing and educating patients on a regular basis for inhaler technique, recommend the use of a spacer when needed, and regularly assess and update their own inhaler technique skills. PMID:27146622
Samsuri, Srima Elina; Pei Lin, Lua; Fahrni, Mathumalar Loganathan
2015-01-01
Objective: To assess the safety attitudes of pharmacists, provide a profile of their domains of safety attitude and correlate their attitudes with self-reported rates of medication errors. Design: A cross-sectional study utilising the Safety Attitudes Questionnaire (SAQ). Setting: 3 public hospitals and 27 health clinics. Participants: 117 pharmacists. Main outcome measures: Safety culture mean scores, variation in scores across working units and between hospitals versus health clinics, predictors of safety culture, and medication errors and their correlation. Results: The response rate was 83.6% (117 valid questionnaires returned). Stress recognition (73.0±20.4) and working condition (54.8±17.4) received the highest and lowest mean scores, respectively. Pharmacists exhibited positive attitudes towards: stress recognition (58.1%), job satisfaction (46.2%), teamwork climate (38.5%), safety climate (33.3%), perception of management (29.9%) and working condition (15.4%). With the exception of stress recognition, those who worked in health clinics scored higher than those in hospitals (p<0.05), and higher scores (the overall score as well as the score for each domain except stress recognition) correlated negatively with the reported number of medication errors. Conversely, those working in a hospital (versus a health clinic) were 8.9 times more likely (p<0.01) to report a medication error (OR 8.9, CI 3.08 to 25.7). As stress recognition increased, the number of medication errors reported increased (p=0.023). Years of work experience (p=0.017) influenced the number of medication errors reported. For every additional year of work experience, pharmacists were 0.87 times less likely to report a medication error (OR 0.87, CI 0.78 to 0.98). Conclusions: A minority (20.5%) of the pharmacists working in hospitals and health clinics was in agreement with the overall SAQ questions and scales. Pharmacists in outpatient and ambulatory units and those in health clinics had better perceptions of safety culture. As perceptions improved, the number of medication errors reported decreased. Group-specific interventions that target specific domains are necessary to improve the safety culture. PMID:26610761
Usability of a CKD educational website targeted to patients and their family members.
Diamantidis, Clarissa J; Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C
2012-10-01
Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10-44 minutes). In total, 10 participants had greater than or equal to one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website. Web-based educational materials for patients with CKD should target a wide range of computer literacy levels and anticipate variability in competency in use of the computer and internet.
Ahcyl2 upregulates NBCe1-B via multiple serine residues of the PEST domain-mediated association.
Park, Pil Whan; Ahn, Jeong Yeal; Yang, Dongki
2016-07-01
Inositol 1,4,5-trisphosphate (IP3) receptor-binding protein released with IP3 (IRBIT) was previously reported as an activator of NBCe1-B. Recent studies have characterized the IRBIT homologue S-adenosylhomocysteine hydrolase-like 2 (AHCYL2). AHCYL2 is highly homologous to IRBIT (88%) and heteromerizes with IRBIT. The two important domains in the N-terminus of AHCYL2 are a PEST domain and a coiled-coil domain, which are highly comparable to those in IRBIT. Therefore, in this study, we investigated the role of these domains in mouse AHCYL2 (Ahcyl2) and identified the PEST domain of Ahcyl2 as a regulatory region for NBCe1-B activity. Site-directed mutagenesis and coimmunoprecipitation assays showed that NBCe1-B binds to the N-terminal Ahcyl2 PEST domain, and that this binding is determined by the phosphorylation of four critical serine residues (Ser151, Ser154, Ser157, and Ser160) in the Ahcyl2 PEST domain. Using intracellular pH measurements, we also showed that these four critical serine residues are indispensable for the activation of NBCe1-B. Thus, these results suggest that NBCe1-B interacts with four critical serine residues in the Ahcyl2 PEST domain, which play an important role in intracellular pH regulation through NBCe1-B.
Ke, Ying; Hunter, Mark J.; Ng, Chai Ann; Perry, Matthew D.; Vandenberg, Jamie I.
2014-01-01
The N-terminal cytoplasmic region of the Kv11.1a potassium channel contains a Per-Arnt-Sim (PAS) domain that is essential for the unique slow deactivation gating kinetics of the channel. The PAS domain has also been implicated in the assembly and stabilization of the assembled tetrameric channel, with many clinical mutants in the PAS domain resulting in reduced stability of the domain and reduced trafficking. Here, we use quantitative Western blotting to show that the PAS domain is required neither for normal channel trafficking nor for subunit-subunit interactions, and that it is not necessary for stabilizing assembled channels. However, when the PAS domain is present, the N-Cap amphipathic helix must also be present for channels to traffic to the cell membrane. Serine-scan mutagenesis of the N-Cap amphipathic helix identified Leu-15, Ile-18, and Ile-19 as residues critical for the stabilization of full-length proteins when the PAS domain is present. Furthermore, mutant cycle analysis experiments support recent crystallography studies, indicating that the hydrophobic face of the N-Cap amphipathic helix interacts with a surface-exposed hydrophobic patch on the core of the PAS domain to stabilize the structure of this critical gating domain. Our data demonstrate that the N-Cap amphipathic helix is critical for channel stability and trafficking. PMID:24695734
Prevalence of teen driver errors leading to serious motor vehicle crashes.
Curry, Allison E; Hafetz, Jessica; Kallan, Michael J; Winston, Flaura K; Durbin, Dennis R
2011-07-01
Motor vehicle crashes are the leading cause of adolescent deaths. Programs and policies should target the most common and modifiable reasons for crashes. We estimated the frequency of critical reasons for crashes involving teen drivers, and examined in more depth specific teen driver errors. The National Highway Traffic Safety Administration's (NHTSA) National Motor Vehicle Crash Causation Survey collected data at the scene of a nationally representative sample of 5470 serious crashes between 7/05 and 12/07. NHTSA researchers assigned a single driver, vehicle, or environmental factor as the critical reason for the event immediately leading to each crash. We analyzed crashes involving 15-18 year old drivers. 822 teen drivers were involved in 795 serious crashes, representing 335,667 teens in 325,291 crashes. Driver error was by far the most common reason for crashes (95.6%), as opposed to vehicle or environmental factors. Among crashes with a driver error, a teen made the error 79.3% of the time (75.8% of all teen-involved crashes). Recognition errors (e.g., inadequate surveillance, distraction) accounted for 46.3% of all teen errors, followed by decision errors (e.g., following too closely, too fast for conditions) (40.1%) and performance errors (e.g., loss of control) (8.0%). Inadequate surveillance, driving too fast for conditions, and distracted driving together accounted for almost half of all crashes. Aggressive driving behavior, drowsy driving, and physical impairments were less commonly cited as critical reasons. Males and females had similar proportions of broadly classified errors, although females were specifically more likely to make inadequate surveillance errors. Our findings support prioritization of interventions targeting driver distraction and surveillance and hazard awareness training. Copyright © 2010 Elsevier Ltd. All rights reserved.
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
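The pre-distortion idea reduces to multiplying the synthesized waveform by the conjugate of the quantified phase error so that the downstream error cancels; a sketch with illustrative values (not the patented implementation):

```python
import numpy as np

fs, T = 1e6, 1e-3
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * 1e9 * t**2)           # ideal LFM waveform

phi_err = 0.3 * (1 - np.exp(-t / 2e-4))           # quantified droop-induced phase
predistorted = chirp * np.exp(-1j * phi_err)      # complementary distortion

received = predistorted * np.exp(1j * phi_err)    # downstream error re-applied
print(np.max(np.abs(np.angle(received / chirp))))  # ~0: the error is negated
```

In practice the correction would be read from the look-up table mentioned above rather than computed analytically.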
USDA-ARS?s Scientific Manuscript database
Spatial frequency domain imaging technique has recently been developed for determination of the optical properties of food and biological materials. However, accurate estimation of the optical property parameters by the technique is challenging due to measurement errors associated with signal acquisition.
From Here to There: Lessons from an Integrative Patient Safety Project in Rural Health Care Settings
2005-05-01
errors and patient falls. The medication errors generally involved one of three issues: incorrect dose, time, or port. Although most of the health… statistics about trends; and the summary of events related to patient safety and medical errors. The interplay among factors: these three domains… the medical staff. We explored these issues further when administering a staff-wide Patient Safety Survey. Responses mirrored the findings that…
Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang
2018-03-01
We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on the I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. The newly developed sequence-based contact prediction by NeBcon plays a critical role in enhancing the quality of models produced by the new pipelines, particularly for FM targets. The inclusion of NeBcon-predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that of the QUARK simulations without contacts. In particular, there are seven targets that are converted from non-foldable to foldable (TM-score >0.5) due to the use of contact restraints in the simulations. Another additional feature in the current pipelines is the local structure quality prediction by ResQ, which provides a robust residue-level modeling error estimation. Despite these successes, significant challenges remain in ab initio modeling of multi-domain proteins and in folding of β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements in domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Deng, Hongping; Mayer, Lucio; Meru, Farzana
2017-09-01
We carry out simulations of gravitationally unstable disks using smoothed particle hydrodynamics (SPH) and the novel Lagrangian meshless finite mass (MFM) scheme in the GIZMO code. Our aim is to understand the cause of the nonconvergence of the cooling boundary for fragmentation reported in the literature. We run SPH simulations with two different artificial viscosity implementations and compare them with MFM, which does not employ any artificial viscosity. With MFM we demonstrate convergence of the critical cooling timescale for fragmentation at β_crit ≈ 3. Nonconvergence persists in SPH codes. We show how the nonconvergence problem is caused by artificial fragmentation triggered by excessive dissipation of angular momentum in domains with large velocity derivatives. With increased resolution, such domains become more prominent. Vorticity lags behind density, due to numerical viscous dissipation in these regions, promoting collapse with longer cooling times. Such an effect is shown to be dominant over the competing tendency of artificial viscosity to diminish with increasing resolution. When the initial conditions are first relaxed for several orbits, the flow is more regular, with lower shear and vorticity in nonaxisymmetric regions, aiding convergence. Yet MFM is the only method that converges exactly. Our findings are of general interest, as numerical dissipation via artificial viscosity or advection errors can also occur in grid-based codes. Indeed, for the FARGO code, values of β_crit significantly higher than our converged estimate have been reported in the literature. Finally, we discuss implications for giant planet formation via disk instability.
Critical exponents of domain walls in the two-dimensional Potts model
NASA Astrophysics Data System (ADS)
Dubail, Jérôme; Lykke Jacobsen, Jesper; Saleur, Hubert
2010-12-01
We address the geometrical critical behavior of the two-dimensional Q-state Potts model in terms of the spin clusters (i.e. connected domains where the spin takes a constant value). These clusters are different from the usual Fortuin-Kasteleyn clusters, and are separated by domain walls that can cross and branch. We develop a transfer matrix technique enabling the formulation and numerical study of spin clusters even when Q is not an integer. We further identify geometrically the crossing events which give rise to conformal correlation functions. This leads to an infinite series of fundamental critical exponents h_{ℓ₁−ℓ₂, 2ℓ₁}, valid for 0 ≤ Q ≤ 4, that describe the insertion of ℓ₁ thin and ℓ₂ thick domain walls.
NASA Technical Reports Server (NTRS)
Kumar, Anil; Done, James; Dudhia, Jimy; Niyogi, Dev
2011-01-01
The predictability of Cyclone Sidr in the Bay of Bengal was explored in terms of track and intensity using the Advanced Research Hurricane Weather Research Forecast (AHW) model. This constitutes the first application of the AHW over an area that lies outside the region of the North Atlantic for which this model was developed and tested. Several experiments were conducted to understand the possible contributing factors that affected Sidr's intensity and track simulation by varying the initial start time and domain size. Results show that Sidr's track was strongly controlled by the synoptic flow at the 500-hPa level, especially the strong mid-latitude westerly over north-central India. A 96-h forecast produced westerly winds over north-central India at the 500-hPa level that were notably weaker; this likely caused the modeled cyclone track to drift from the observed track. Reducing the model domain size reduced the model error in the synoptic-scale winds at 500 hPa and produced an improved cyclone track. Specifically, the cyclone track appeared to be sensitive to the upstream synoptic flow, and was therefore sensitive to the location of the western boundary of the domain. However, cyclone intensity remained largely unaffected by this synoptic wind error at the 500-hPa level. Comparison of the high-resolution, moving nested domain with a single coarser-resolution domain showed little difference in tracks, but resulted in significantly different intensities. Experiments on the domain size with regard to the total precipitation simulated by the model showed that precipitation patterns and 10-m surface winds were also different. This was mainly due to the mid-latitude westerly flow across the west side of the model domain. The analysis also suggested that the total precipitation pattern and track were unchanged when the domain was extended toward the east, north, and south; this reinforces the conclusion that Sidr was influenced from the west side of the domain. The displacement error was significantly reduced after the domain size was decreased from the western model boundary. Study results demonstrate the capability of, and need for, a high-resolution mesoscale modeling framework for simulating the complex interactions that contribute to the formation of tropical cyclones over the Bay of Bengal region.
Bian, Xu; Zhang, Yu; Li, Yibo; Gong, Xiaoyue; Jin, Shijiu
2015-01-01
This paper proposes a time-space domain correlation-based method for gas leakage detection and location. It acquires the propagated signal on the skin of the plate by using a piezoelectric acoustic emission (AE) sensor array. The signal generated from the gas leakage hole (which diameter is less than 2 mm) is time continuous. By collecting and analyzing signals from different sensors’ positions in the array, the correlation among those signals in the time-space domain can be achieved. Then, the directional relationship between the sensor array and the leakage source can be calculated. The method successfully solves the real-time orientation problem of continuous ultrasonic signals generated from leakage sources (the orientation time is about 15 s once), and acquires high accuracy location information of leakage sources by the combination of multiple sets of orientation results. According to the experimental results, the mean value of the location absolute error is 5.83 mm on a one square meter plate, and the maximum location error is generally within a ±10 mm interval. Meanwhile, the error variance is less than 20.17. PMID:25860070
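A simplified sketch of the correlation step: the inter-sensor delay of a continuous leak signal is estimated by cross-correlation and converted to a bearing under a far-field assumption (geometry, wave speed, and the synthetic signal are illustrative, not the paper's calibrated values):

```python
import numpy as np

fs, c, spacing = 1e6, 3000.0, 0.1   # sample rate, wave speed (m/s), sensor gap (m)
rng = np.random.default_rng(2)
leak = rng.normal(size=20000)       # continuous broadband leak signal

true_delay = 12                     # samples by which sensor 2 lags sensor 1
s1 = leak[true_delay:]
s2 = leak[:-true_delay]             # same signal, delayed

xcorr = np.correlate(s2, s1, mode="full")
lag = np.argmax(xcorr) - (len(s1) - 1)       # recovers the 12-sample delay
theta = np.degrees(np.arccos(np.clip(c * lag / fs / spacing, -1.0, 1.0)))
print(lag, round(theta, 1))                  # bearing relative to the array axis
```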
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.
Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C
2014-06-01
Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, paper-based data capture software that uses recognition technology to create case report forms (CRFs) with functionality similar to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables, values that fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates were found to vary widely by type of data field, with the highest rate observed for open text fields. Copyright © 2014 Elsevier Ltd. All rights reserved.
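The audit arithmetic is simply errors per 10,000 fields checked against the SCDM threshold; the counts below are hypothetical, chosen only to reproduce the quoted 6.7/10,000 rate (the raw denominators are not given in the abstract):

```python
errors, fields = 17, 25373   # hypothetical counts yielding the quoted rate
rate = 10000 * errors / fields
print(f"{rate:.1f} errors per 10,000 fields; within SCDM limit: {rate < 50}")
```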
Shame, guilt, and the medical learner: ignored connections and why we should care.
Bynum, William E; Goodie, Jeffrey L
2014-11-01
Shame and guilt are subjective emotional responses that occur in response to negative events such as the making of mistakes or an experience of mistreatment, and have been studied extensively in the field of psychology. Despite their potentially damaging effects and ubiquitous presence in everyday life, very little has been written about the impact of shame and guilt in medical education. The authors reference the psychology literature to define shame and guilt and then focus on one area in medical education in which they manifest: the response of the learner and teacher to medical errors. Evidence is provided from the psychology literature to show associations between shame and negative coping mechanisms, decreased empathy and impaired self-forgiveness following a transgression. The authors link this evidence to existing findings in the medical literature that may be related to unrecognised shame and guilt, and propose novel ways of thinking about a learner's ability to cope, remain empathetic and forgive him or herself following an error. The authors combine the discussion of shame, guilt and learner error with findings from the medical education literature and outline three specific ways in which teachers might lead learners to a shame-free response to errors: by acknowledging the presence of shame and guilt in the learner; by avoiding humiliation, and by leveraging effective feedback. The authors conclude with recommendations for research on shame and guilt and their influence on the experience of the medical learner. This critical research plus enhanced recognition of shame and guilt will allow teachers and institutions to further cultivate the engaged, empathetic and shame-resilient learners they strive to create. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Jiang, Zhigang; Chang, Jitao; Wang, Fang; Yu, Li
2015-02-01
Clostridium perfringens epsilon toxin (Etx) is an extremely potent toxin, causing fatal enterotoxaemia in many animals. Several amino acids in domains I and II have been proposed to be critical for Etx to interact with MDCK cells. However, the critical amino acids in domain III remain undefined. Therefore, we assessed the effects of aromatic amino acids in domain III on Etx activity in this study. All of the results indicated that Y71 was critical for the cytotoxic activity of Etx towards MDCK cells, and this activity was dependent on the existence of an aromatic ring residue in position 71. Additionally, mutations in Y71 did not affect the binding of Etx to MDCK cells, indicating that Y71 is not a receptor binding site for Etx. In summary, we identified an amino acid in domain III that is important for the cytotoxic activity of Etx, thereby providing information on the structure-function relationship of Etx.
A Review of Depth and Normal Fusion Algorithms
Štolc, Svorad; Pock, Thomas
2018-01-01
Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
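A 1D toy of the least-squares fusion formulation the review builds on: find the depth z minimizing ||z − d||² + λ||Dz − g||², where d is the noisy depth, g the slope implied by the measured normals, and D a finite-difference operator (the reviewed methods are 2D and add normal-dependent weighting, which this sketch omits):

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
z_true = np.sin(2 * np.pi * x)
rng = np.random.default_rng(3)
d = z_true + 0.1 * rng.normal(size=n)            # noisy depth measurements
dx = x[1] - x[0]
g = 2 * np.pi * np.cos(2 * np.pi * x[:-1]) * dx  # slopes implied by normals

D = (np.eye(n, k=1) - np.eye(n))[:-1]            # forward-difference operator
lam = 10.0
A = np.eye(n) + lam * D.T @ D                    # normal equations of the LSQ
z = np.linalg.solve(A, d + lam * D.T @ g)
print(np.abs(z - z_true).mean(), np.abs(d - z_true).mean())  # fused < raw
```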
An Improved Neutron Transport Algorithm for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.
2010-01-01
Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
Latent error detection: A golden two hours for detection.
Saward, Justin R E; Stanton, Neville A
2017-03-01
Undetected error in safety-critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion was observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring, whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred, appears most effective. The identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought that safety-critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
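A generic output-error fit, not the X-56A model: the parameters of a simple first-order system are estimated by minimizing the mismatch between measured and simulated responses to a multisine input, the same shape of problem the flight-test analysis solves in both domains:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import TransferFunction, lsim

t = np.linspace(0.0, 10.0, 1000)
u = sum(np.sin(2 * np.pi * f * t + p)
        for f, p in [(0.2, 0.0), (0.5, 1.3), (1.1, 2.1)])   # multisine input

def simulate(theta):
    k, tau = theta
    _, y, _ = lsim(TransferFunction([k], [tau, 1.0]), U=u, T=t)
    return y

rng = np.random.default_rng(4)
y_meas = simulate([2.0, 0.7]) + 0.05 * rng.normal(size=t.size)

fit = least_squares(lambda th: simulate(th) - y_meas, x0=[1.0, 1.0])
print(fit.x)   # recovers roughly the true gain 2.0 and time constant 0.7
```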
NASA Astrophysics Data System (ADS)
Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.
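A sketch of conventional one-tap MMSE-FDE followed by despreading, with weights W_k = H_k* / (|H_k|² + 1/SNR); the channel, spreading code, and noise scaling are illustrative, and note that the paper's proposed joint scheme differs by performing the despreading within the frequency domain to suppress residual ICI:

```python
import numpy as np

rng = np.random.default_rng(5)
N, sf = 64, 8                                   # block size, spreading factor
code = rng.choice([-1.0, 1.0], size=sf)
data = rng.choice([-1.0, 1.0], size=N // sf)
chips = (data[:, None] * code).ravel()          # DS-CDMA spreading

h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H = np.fft.fft(h, N)                            # channel frequency response
snr = 100.0
S = np.fft.fft(chips)
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(N / (2 * snr))
R = H * S + noise                               # received block (CP assumed)

W = H.conj() / (np.abs(H) ** 2 + 1.0 / snr)     # one-tap MMSE weights
z = np.fft.ifft(W * R).real.reshape(-1, sf)
decisions = np.sign(z @ code)                   # despread and detect
print(np.mean(decisions == data))               # fraction of correct symbols
```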
Zaghloul, Mohamed A. S.; Wang, Mohan; Milione, Giovanni; Li, Ming-Jun; Li, Shenping; Huang, Yue-Kai; Wang, Ting; Chen, Kevin P.
2018-01-01
Brillouin optical time domain analysis is the sensing of temperature and strain changes along an optical fiber by measuring the frequency shift changes of Brillouin backscattering. Because frequency shift changes are a linear combination of temperature and strain changes, their discrimination is a challenge. Here, a multicore optical fiber that has two cores is fabricated. The differences between the cores’ temperature and strain coefficients are such that temperature (strain) changes can be discriminated with error amplification factors of 4.57 °C/MHz (69.11 μϵ/MHz), which is 2.63 (3.67) times lower than previously demonstrated. As proof of principle, using the multicore optical fiber and a commercial Brillouin optical time domain analyzer, the temperature (strain) changes of a thermally expanding metal cylinder are discriminated with an error of 0.24% (3.7%). PMID:29649148
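The discrimination step amounts to inverting a 2×2 linear system, since each core's Brillouin frequency shift change is a linear combination of the temperature and strain changes; the coefficients below are illustrative, not the fiber's calibrated values, and the amplification metric is one simple way to express how measurement error in MHz grows in the recovered quantities:

```python
import numpy as np

C = np.array([[1.00, 0.048],     # core 1: MHz/°C, MHz/microstrain
              [1.20, 0.052]])    # core 2: deliberately distinct coefficients
dnu = np.array([12.4, 14.1])     # measured frequency shift changes (MHz)

dT, deps = np.linalg.solve(C, dnu)                    # temperature, strain
amplification = np.abs(np.linalg.inv(C)).sum(axis=1)  # error growth per MHz
print(dT, deps, amplification)
```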
Zaghloul, Mohamed A S; Wang, Mohan; Milione, Giovanni; Li, Ming-Jun; Li, Shenping; Huang, Yue-Kai; Wang, Ting; Chen, Kevin P
2018-04-12
Brillouin optical time domain analysis is the sensing of temperature and strain changes along an optical fiber by measuring the frequency shift changes of Brillouin backscattering. Because frequency shift changes are a linear combination of temperature and strain changes, their discrimination is a challenge. Here, a multicore optical fiber that has two cores is fabricated. The differences between the cores' temperature and strain coefficients are such that temperature (strain) changes can be discriminated with error amplification factors of 4.57 °C/MHz (69.11 μϵ/MHz), which is 2.63 (3.67) times lower than previously demonstrated. As proof of principle, using the multicore optical fiber and a commercial Brillouin optical time domain analyzer, the temperature (strain) changes of a thermally expanding metal cylinder are discriminated with an error of 0.24% (3.7%).
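Because the two cores' frequency-shift changes are linear in the temperature and strain changes, the discrimination step amounts to inverting a 2x2 calibration matrix, and the error amplification factors fall out of that inverse. A minimal sketch, with placeholder coefficients rather than the calibration reported above:

```python
# Two-core discrimination as a 2x2 inversion; coefficients are placeholders.
import numpy as np

# dnu_i = C_Ti * dT + C_ei * de  (MHz per degC, MHz per microstrain)
C = np.array([[1.07, 0.049],     # core 1: temperature, strain coefficients
              [0.95, 0.046]])    # core 2
dnu = np.array([12.3, 10.9])     # measured shifts in the two cores (MHz)

dT, de = np.linalg.solve(C, dnu)

# Error amplification: a 1-MHz measurement error maps to parameter error
# through the rows of inv(C); a nearly singular C amplifies it strongly.
Cinv = np.linalg.inv(C)
amp = np.sqrt((Cinv ** 2).sum(axis=1))   # per-MHz amplification for (dT, de)
print(dT, de, amp)
```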
NASA Astrophysics Data System (ADS)
Güttler, I.
2012-04-01
Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave (SNS) energy fluxes are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and the other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of errors in albedo is primarily confined to north Africa, where e.g. underestimation of albedo in JJA is consistent with associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and to various parameters within PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs over Europe with mixed success when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over the whole domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work in customizing the PBL scheme are suggested.
Advanced Microwave Radiometer (AMR) for SWOT mission
NASA Astrophysics Data System (ADS)
Chae, C. S.
2015-12-01
The objective of the SWOT (Surface Water & Ocean Topography) satellite mission is to measure wide-swath, high-resolution ocean topography and terrestrial surface waters. Since the main payload radar will use interferometric SAR technology, a conventional microwave radiometer system with a single nadir-looking antenna beam (i.e., OSTM/Jason-2 AMR) is not ideally suited for the mission's wet tropospheric delay correction. Therefore, the SWOT AMR incorporates two antenna beams along the cross-track direction. In addition to the cross-track design of the AMR radiometer, the wet tropospheric error requirement is expressed in the spatial frequency domain (in cy/km), in other words, as a power spectral density (PSD). Thus, instrument error allocation and design are carried out in terms of PSD, which is not the conventional approach for microwave radiometer requirement allocation and design. Novel analyses include: 1. the effects of antenna beam size on PSD error and land/ocean contamination; 2. receiver error allocation and the contributions of radiometric count averaging, NEDT, gain variation, etc.; 3. the effect of the thermal design in the frequency domain. The presentation will discuss the detailed AMR design and analysis results.
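Expressing an error requirement as a PSD over spatial frequency is straightforward to illustrate. The sketch below computes the PSD of a synthetic along-track path-delay error series in cy/km; the sample spacing and error statistics are placeholders, not SWOT values.

```python
# Minimal sketch of casting an along-track error series as a PSD in cy/km.
import numpy as np
from scipy.signal import welch

dx_km = 5.0                                               # sample spacing (km)
err_mm = np.random.default_rng(1).normal(0.0, 8.0, 4096)  # path-delay error (mm)

# fs is in samples per km, so f comes out in cycles per km (cy/km)
f, psd = welch(err_mm, fs=1.0 / dx_km, nperseg=512)
# psd has units mm^2 / (cy/km); a requirement expressed as a curve over the
# same spatial frequencies can be checked bin by bin against this estimate
print(f[:5], psd[:5])
```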
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/µm, giving a resolution in the time domain of better than 0.1 µm, and discrimination in the frequency domain of better than 0.01 µm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
Uhart, Marina; Flores, Gabriel; Bustos, Diego M.
2016-01-01
Posttranslational regulation of protein function is a ubiquitous mechanism in eukaryotic cells. Here, we analyzed biological properties of nodes and edges of a human protein-protein interaction phosphorylation-based network, especially of those nodes critical for the network controllability. We found that the minimal number of critical nodes needed to control the whole network is 29%, which is considerably lower compared to other real networks. These critical nodes are more regulated by posttranslational modifications and contain more binding domains for these modifications than other kinds of nodes in the network, suggesting fast intra-group regulation. Also, when we analyzed the characteristics of the edges that connect critical and non-critical nodes, we found that the former are enriched in domain-to-eukaryotic linear motif interactions, whereas the latter are enriched in domain-domain interactions. Our findings suggest a possible structure for protein-protein interaction networks with a densely interconnected and self-regulated central core, composed of critical nodes with a high participation in the controllability of the full network, and less regulated peripheral nodes. Our study offers a deeper understanding of complex network control and bridges the controllability theorems for complex networks and biological protein-protein interaction phosphorylation-based networked systems. PMID:27195976
Four Critical Domains of Accountability for School Counselors
ERIC Educational Resources Information Center
Bemak, Fred; Willians, Joseph M.; Chung, Rita Chi-Ying
2015-01-01
Despite recognition of accountability for school counselors, no clear set of interrelated performance measures exists to guide school counselors in collecting and evaluating data that relates to student academic success. This article outlines four critical domains of accountability for school counselors (i.e., grades, attendance, disciplinary…
Deterministic Modeling of the High Temperature Test Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.
2013-09-01
[Figure captions recovered from extraction residue: Figure M.4.1, two-dimensional domains cropped out of three-dimensional numerically generated realizations, (a) 3D PCE-NAPL realizations generated by UTCHEM; Figure R.3.2, absolute error vs relative error scatter plots of pM and gM from SGS data set-4 using multi-task manifold regression; a corresponding plot of pM and gM from the TP/MC data set using multi-task manifold regression.]
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of sea weather on regional scales and for projections of future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating times and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains
NASA Technical Reports Server (NTRS)
Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang
2013-01-01
Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
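The triple-colocation estimate itself is compact: with three colocated products whose errors are mutually independent, the error variance of one product is the expected product of its differences with the other two. A minimal sketch on synthetic anomaly series follows; the rescaling step needed when products have different sensitivities to the true signal is omitted.

```python
# Minimal sketch of triple colocation and the fractional RMSE (fRMSE).
import numpy as np

rng = np.random.default_rng(2)
truth = rng.normal(0.0, 1.0, 5000)          # soil moisture anomaly "truth"
x = truth + rng.normal(0.0, 0.4, 5000)      # e.g. scatterometer product
y = truth + rng.normal(0.0, 0.5, 5000)      # e.g. radiometer product
z = truth + rng.normal(0.0, 0.3, 5000)      # e.g. model product

def tc_rmse(a, b, c):
    """Error standard deviation of product a, assuming independent errors."""
    a, b, c = (v - v.mean() for v in (a, b, c))
    return np.sqrt(np.mean((a - b) * (a - c)))

frmse_x = tc_rmse(x, y, z) / x.std()        # RMSE as fraction of series std
print(frmse_x)                              # ~0.4 / sqrt(1 + 0.4**2) here
```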
Turbulence excited frequency domain damping measurement and truncation effects
NASA Technical Reports Server (NTRS)
Soovere, J.
1976-01-01
Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.
Impaired cognitive plasticity and goal-directed control in adolescent obsessive-compulsive disorder.
Gottwald, Julia; de Wit, Sanne; Apergis-Schoute, Annemieke M; Morein-Zamir, Sharon; Kaser, Muzaffer; Cormack, Francesca; Sule, Akeem; Limmer, Winifred; Morris, Anna Conway; Robbins, Trevor W; Sahakian, Barbara J
2018-01-22
Youths with obsessive-compulsive disorder (OCD) experience severe distress and impaired functioning at school and at home. Critical cognitive domains for daily functioning and academic success are learning, memory, cognitive flexibility and goal-directed behavioural control. Performance in these important domains among teenagers with OCD was therefore investigated in this study. A total of 36 youths with OCD and 36 healthy comparison subjects completed two memory tasks: Pattern Recognition Memory (PRM) and Paired Associates Learning (PAL); as well as the Intra-Extra Dimensional Set Shift (IED) task to quantitatively gauge learning as well as cognitive flexibility. A subset of 30 participants of each group also completed a Differential-Outcome Effect (DOE) task followed by a Slips-of-Action Task, designed to assess the balance of goal-directed and habitual behavioural control. Adolescent OCD patients showed a significant learning and memory impairment. Compared with healthy comparison subjects, they made more errors on PRM and PAL and in the first stages of IED involving discrimination and reversal learning. Patients were also slower to learn about contingencies in the DOE task and were less sensitive to outcome devaluation, suggesting an impairment in goal-directed control. This study advances the characterization of juvenile OCD. Patients demonstrated impairments in all learning and memory tasks. We also provide the first experimental evidence of impaired goal-directed control and lack of cognitive plasticity early in the development of OCD. The extent to which the impairments in these cognitive domains impact academic performance and symptom development warrants further investigation.
Moeller, Andrew; Webber, Jordan; Epstein, Ian
2016-07-13
Resident duty hours have recently been under criticism, with concerns for resident and patient well-being. Historically, call shifts have been long, and some residency training programs have now restricted shift lengths. Data and opinions about the effects of such restrictions are conflicting. The Internal Medicine Residency Program at Dalhousie University recently moved from a traditional call structure to a day float/night float system. This study evaluated how this change in duty hours affected resident perceptions in several key domains. Senior residents from an internal medicine training program in Canada responded to an anonymous online survey immediately before and 6 months after the implementation of duty hour reform. The survey contained questions relating to three major domains: resident wellness, ability to deliver quality health care, and medical education experience. Mean pre- and post-intervention scores were compared using the t-test for paired samples. Twenty-three of 27 (85 %) senior residents completed both pre- and post-reform surveys. Residents perceived significant changes in many domains with duty hour reform. These included improved general wellness, less exposure to personal harm, fewer feelings of isolation, less potential for error, improvement in clinical skills expertise, increased work efficiency, more successful teaching, increased proficiency in medical skills, more successful learning, and fewer rotation disruptions. Senior residents in a Canadian internal medicine training program perceived significant benefits in medical education experience, ability to deliver healthcare, and resident wellness after implementation of duty hour reform.
He, Chengbing; Xi, Rui; Wang, Han; Jing, Lianyou; Shi, Wentao; Zhang, Qunfei
2017-01-01
Phase-coherent underwater acoustic (UWA) communication systems typically employ multiple hydrophones in the receiver to achieve spatial diversity gain. However, small underwater platforms can only carry a single transducer which can not provide spatial diversity gain. In this paper, we propose single-carrier with frequency domain equalization (SC-FDE) for phase-coherent synthetic aperture acoustic communications in which a virtual array is generated by the relative motion between the transmitter and the receiver. This paper presents synthetic aperture acoustic communication results using SC-FDE through data collected during a lake experiment in January 2016. The performance of two receiver algorithms is analyzed and compared, including the frequency domain equalizer (FDE) and the hybrid time frequency domain equalizer (HTFDE). The distances between the transmitter and the receiver in the experiment were about 5 km. The bit error rate (BER) and output signal-to-noise ratio (SNR) performances with different receiver elements and transmission numbers were presented. After combining multiple transmissions, error-free reception using a convolution code with a data rate of 8 kbps was demonstrated. PMID:28684683
A time-space domain stereo finite difference method for 3D scalar wave propagation
NASA Astrophysics Data System (ADS)
Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie
2016-11-01
The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related to the Courant numbers, leading to significant extra time costs for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information of the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) methods, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. Its efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).
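To make the minimized quantity concrete, the sketch below evaluates the numerical phase velocity error of the conventional 1-D second-order scheme as a function of points per wavelength at a fixed Courant number; it illustrates the joint time-space dispersion relation being optimized, not the TSSFD coefficients themselves.

```python
# Minimal sketch of numerical phase velocity error for the (2,2) scheme.
import numpy as np

def phase_velocity_error(ppw, courant):
    """ppw: grid points per wavelength; courant: c*dt/dx."""
    kdx = 2.0 * np.pi / ppw                  # k * dx
    # dispersion relation of the 1-D second-order scheme:
    #   sin(w*dt/2) = r * sin(k*dx/2),  with r = c*dt/dx
    wdt = 2.0 * np.arcsin(courant * np.sin(kdx / 2.0))
    return wdt / (courant * kdx) - 1.0       # (c_numerical / c) - 1

for ppw in (2, 4, 8, 16):
    print(ppw, phase_velocity_error(ppw, 0.4))   # error shrinks as ppw grows
```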
Metrology for terahertz time-domain spectrometers
NASA Astrophysics Data System (ADS)
Molloy, John F.; Naftaly, Mira
2015-12-01
In recent years the terahertz time-domain spectrometer (THz TDS) [1] has emerged as a key measurement device for spectroscopic investigations in the frequency range of 0.1-5 THz. To date, almost every type of material has been studied using THz TDS, including semiconductors, ceramics, polymers, metal films, liquid crystals, glasses, pharmaceuticals, DNA molecules, proteins, gases, composites, foams, oils, and many others. Measurements with a TDS are made in the time domain; conversion from the time domain data to a frequency spectrum is achieved by applying the Fourier Transform, calculated numerically using the Fast Fourier Transform (FFT) algorithm. As in many other types of spectrometer, THz TDS requires that the sample data be referenced to similarly acquired data with no sample present. Unlike frequency-domain spectrometers, which detect light intensity and measure absorption spectra, a TDS records both amplitude and phase information, and therefore yields both the absorption coefficient and the refractive index of the sample material. The analysis of data from THz TDS relies on the assumptions that: a) the frequency scale is accurate; b) the measurement of THz field amplitude is linear; and c) the presence of the sample does not affect the performance characteristics of the instrument. The frequency scale of a THz TDS is derived from the displacement of the delay line; via the FFT, positioning errors may give rise to frequency errors that are difficult to quantify. The measurement of the field amplitude in a THz TDS is required to be linear with a dynamic range of the order of 10 000. Attention must also be given to sample positioning and handling in order to avoid sample-related errors.
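The referencing step described above has a standard computational form. The following sketch extracts the refractive index and absorption coefficient from sample and reference traces under the usual thick-sample, normal-incidence approximation (Fabry-Perot echoes ignored); the traces, thickness, and sign conventions are illustrative placeholders.

```python
# Minimal sketch of THz-TDS material parameter extraction via the FFT.
import numpy as np

c = 2.99792458e8                 # speed of light (m/s)
d = 0.5e-3                       # sample thickness (m), placeholder
dt = 0.05e-12                    # delay-line time step (s), placeholder
t = np.arange(4096) * dt
# Placeholder Gaussian pulses standing in for measured field traces
e_ref = np.exp(-((t - 5e-12) / 0.30e-12) ** 2)
e_sam = 0.6 * np.exp(-((t - 8e-12) / 0.35e-12) ** 2)

f = np.fft.rfftfreq(t.size, dt)
T = np.fft.rfft(e_sam) / np.fft.rfft(e_ref)   # complex transmission
phi = -np.unwrap(np.angle(T))                 # accumulated phase delay

with np.errstate(divide="ignore", invalid="ignore"):
    n = 1.0 + c * phi / (2.0 * np.pi * f * d)                  # refractive index
    # amplitude corrected for Fresnel losses at the two interfaces
    alpha = -(2.0 / d) * np.log(np.abs(T) * (n + 1.0) ** 2 / (4.0 * n))
```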
Usability of a CKD Educational Website Targeted to Patients and Their Family Members
Zuckerman, Marni; Fink, Wanda; Hu, Peter; Yang, Shiming; Fink, Jeffrey C.
2012-01-01
Summary Background and objectives Web-based technology is critical to the future of healthcare. As part of the Safe Kidney Care cohort study evaluating patient safety in CKD, this study determined how effectively a representative sample of patients with CKD or family members could interpret and use the Safe Kidney Care website (www.safekidneycare.org), an informational website on safety in CKD. Design, setting, participants, & measurements Between November of 2011 and January of 2012, persons with CKD or their family members underwent formal usability testing administered by a single interviewer with a second recording observer. Each participant was independently provided a list of 21 tasks to complete, with each task rated as either easily completed/noncritical error or critical error (user cannot complete the task without significant interviewer intervention). Results Twelve participants completed formal usability testing. Median completion time for all tasks was 17.5 minutes (range=10–44 minutes). In total, 10 participants had greater than or equal to one critical error. There were 55 critical errors in 252 tasks (22%), with the highest proportion of critical errors occurring when participants were asked to find information on treatments that may damage kidneys, find the website on the internet, increase font size, and scroll to the bottom of the webpage. Participants were generally satisfied with the content and usability of the website. Conclusions Web-based educational materials for patients with CKD should target a wide range of computer literacy levels and anticipate variability in competency in use of the computer and internet. PMID:22798537
O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B
2018-01-01
Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.
NASA Astrophysics Data System (ADS)
Wang, Hua; Tao, Guo; Shang, Xue-Feng; Fang, Xin-Ding; Burns, Daniel R.
2013-12-01
In acoustic logging-while-drilling (ALWD) finite difference in time domain (FDTD) simulations, the large drill collar occupies most of the fluid-filled borehole and divides the borehole fluid into two thin fluid columns (radius ~27 mm). Fine grids and large computational models are required to model the thin fluid region between the tool and the formation. As a result, a small time step and more iterations are needed, which increases the cumulative numerical error. Furthermore, due to the high impedance contrast between the drill collar and the fluid in the borehole (a difference of >30 times), the stability and efficiency of the perfectly matched layer (PML) scheme are critical for simulating complicated wave modes accurately. In this paper, we compared four different PML implementations in a staggered-grid FDTD ALWD simulation: field-splitting PML (SPML), multiaxial PML (M-PML), non-splitting PML (NPML), and complex frequency-shifted PML (CFS-PML). The comparison indicated that NPML and CFS-PML can absorb the guided wave reflection from the computational boundaries more efficiently than SPML and M-PML. For large simulation times, SPML, M-PML, and NPML are numerically unstable, although the stability of M-PML can be improved further to some extent. Based on this analysis, we propose that the CFS-PML method be used in FDTD to eliminate the numerical instability and to improve the efficiency of absorption in the PML layers for LWD modeling. The optimal values of the CFS-PML parameters in the LWD simulation were investigated based on thousands of 3D simulations. For typical LWD cases, the best maximum value of the quadratic damping profile was obtained using one d0. The optimal parameter space for the maximum value of the linear frequency-shifted factor (α0) and the scaling factor (β0) depended on the thickness of the PML layer. For typical formations, if the PML thickness is 10 grid points, the global error can be reduced to <1% using the optimal PML parameters, and the error decreases further as the PML thickness increases.
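The CFS-PML parameters named above enter through the complex coordinate-stretching function. The sketch below builds generic profiles using the abstract's parameter names (d0, α0, β0); the exact profile shapes and tuned values in the paper may differ.

```python
# Minimal sketch of the complex frequency-shifted (CFS) PML stretching
# function s(x, omega) = beta + d / (alpha + i*omega); profiles are
# generic textbook choices, not the paper's tuned ones.
import numpy as np

npml, d0, alpha0, beta0 = 10, 1500.0, 60.0, 2.0
x = np.linspace(0.0, 1.0, npml)          # normalized depth into the PML

d = d0 * x ** 2                          # quadratic damping profile, max d0
alpha = alpha0 * (1.0 - x)               # frequency shift, largest at interface
beta = 1.0 + (beta0 - 1.0) * x ** 2      # grid-stretching (scaling) factor

omega = 2.0 * np.pi * 2000.0             # angular frequency (rad/s)
s = beta + d / (alpha + 1j * omega)      # CFS stretching along the layer
# Setting alpha = 0 and beta = 1 recovers the standard PML stretching.
print(s)
```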
van der Palen, Job; Thomas, Mike; Chrystyn, Henry; Sharma, Raj K; van der Valk, Paul Dlpm; Goosens, Martijn; Wilkinson, Tom; Stonham, Carol; Chauhan, Anoop J; Imber, Varsha; Zhu, Chang-Qing; Svedsater, Henrik; Barnes, Neil C
2016-11-24
Errors in the use of different inhalers were investigated in patients naive to the devices under investigation in a multicentre, single-visit, randomised, open-label, cross-over study. Patients with chronic obstructive pulmonary disease (COPD) or asthma were assigned to ELLIPTA vs DISKUS (Accuhaler), metered-dose inhaler (MDI) or Turbuhaler. Patients with COPD were also assigned to ELLIPTA vs Handihaler or Breezhaler. Patients demonstrated inhaler use after reading the patient information leaflet (PIL). A trained investigator assessed critical errors (i.e., those likely to result in the inhalation of significantly reduced, minimal or no medication). If the patient made errors, the investigator demonstrated the correct use of the inhaler, and the patient demonstrated inhaler use again. Fewer COPD patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS, 9/171 (5%) vs 75/171 (44%); MDI, 10/80 (13%) vs 48/80 (60%); Turbuhaler, 8/100 (8%) vs 44/100 (44%); Handihaler, 17/118 (14%) vs 57/118 (48%); Breezhaler, 13/98 (13%) vs 45/98 (46%; all P<0.001). Most patients (57-70%) made no errors using ELLIPTA and did not require investigator instruction. Instruction was required for DISKUS (65%), MDI (85%), Turbuhaler (71%), Handihaler (62%) and Breezhaler (56%). Fewer asthma patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS (3/70 (4%) vs 9/70 (13%), P=0.221); MDI (2/32 (6%) vs 8/32 (25%), P=0.074) and significantly fewer vs Turbuhaler (3/60 (5%) vs 20/60 (33%), P<0.001). More asthma and COPD patients preferred ELLIPTA over the other devices (all P⩽0.002). Significantly, fewer COPD patients using ELLIPTA made critical errors after reading the PIL vs other inhalers. More asthma and COPD patients preferred ELLIPTA over comparator inhalers.
van der Palen, Job; Thomas, Mike; Chrystyn, Henry; Sharma, Raj K; van der Valk, Paul DLPM; Goosens, Martijn; Wilkinson, Tom; Stonham, Carol; Chauhan, Anoop J; Imber, Varsha; Zhu, Chang-Qing; Svedsater, Henrik; Barnes, Neil C
2016-01-01
Errors in the use of different inhalers were investigated in patients naive to the devices under investigation in a multicentre, single-visit, randomised, open-label, cross-over study. Patients with chronic obstructive pulmonary disease (COPD) or asthma were assigned to ELLIPTA vs DISKUS (Accuhaler), metered-dose inhaler (MDI) or Turbuhaler. Patients with COPD were also assigned to ELLIPTA vs Handihaler or Breezhaler. Patients demonstrated inhaler use after reading the patient information leaflet (PIL). A trained investigator assessed critical errors (i.e., those likely to result in the inhalation of significantly reduced, minimal or no medication). If the patient made errors, the investigator demonstrated the correct use of the inhaler, and the patient demonstrated inhaler use again. Fewer COPD patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS, 9/171 (5%) vs 75/171 (44%); MDI, 10/80 (13%) vs 48/80 (60%); Turbuhaler, 8/100 (8%) vs 44/100 (44%); Handihaler, 17/118 (14%) vs 57/118 (48%); Breezhaler, 13/98 (13%) vs 45/98 (46%; all P<0.001). Most patients (57–70%) made no errors using ELLIPTA and did not require investigator instruction. Instruction was required for DISKUS (65%), MDI (85%), Turbuhaler (71%), Handihaler (62%) and Breezhaler (56%). Fewer asthma patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS (3/70 (4%) vs 9/70 (13%), P=0.221); MDI (2/32 (6%) vs 8/32 (25%), P=0.074) and significantly fewer vs Turbuhaler (3/60 (5%) vs 20/60 (33%), P<0.001). More asthma and COPD patients preferred ELLIPTA over the other devices (all P⩽0.002). Significantly, fewer COPD patients using ELLIPTA made critical errors after reading the PIL vs other inhalers. More asthma and COPD patients preferred ELLIPTA over comparator inhalers. PMID:27883002
Strategy for long-term 3D cloud-resolving simulations over the ARM SGP site and preliminary results
NASA Astrophysics Data System (ADS)
Lin, W.; Liu, Y.; Song, H.; Endo, S.
2011-12-01
Parametric representations of cloud/precipitation processes continue to be necessary in climate simulations, even at increasingly high spatial resolution or within emerging adaptive mesh frameworks, and it is becoming ever more critical that such parameterizations be scale-aware. Continuous cloud measurements at DOE's ARM sites have provided a strong observational basis for novel cloud parameterization research at various scales. Despite significant progress in our observational ability, there are important cloud-scale physical and dynamical quantities that are either not currently observable or insufficiently sampled. To complement the long-term ARM measurements, we have explored an optimal strategy for carrying out long-term 3-D cloud-resolving simulations over the ARM SGP site using the Weather Research and Forecasting (WRF) model with multi-domain nesting. Factors considered to have important influences on the simulated cloud fields include domain size, spatial resolution, model top, forcing data set, model physics and the growth of model errors. Hydrometeor advection, which may play a significant role in the hydrological process within the observational domain but is often lacking, and the limitations due to the constraint of domain-wide uniform forcing in conventional cloud-system-resolving model simulations, are at least partly accounted for in our approach. Conventional and probabilistic verification approaches are employed first for selected cases to optimize the model's capability of faithfully reproducing the observed mean and statistical distributions of cloud-scale quantities. This then forms the basis of our setup for long-term cloud-resolving simulations over the ARM SGP site. The model results will facilitate parameterization research, as well as understanding and dissecting parameterization deficiencies in climate models.
Syed, Salahuddin; Desler, Claus; Rasmussen, Lene J; Schmidt, Kristina H
2016-12-01
In response to replication stress cells activate the intra-S checkpoint, induce DNA repair pathways, increase nucleotide levels, and inhibit origin firing. Here, we report that Rrm3 associates with a subset of replication origins and controls DNA synthesis during replication stress. The N-terminal domain required for control of DNA synthesis maps to residues 186-212 that are also critical for binding Orc5 of the origin recognition complex. Deletion of this domain is lethal to cells lacking the replication checkpoint mediator Mrc1 and leads to mutations upon exposure to the replication stressor hydroxyurea. This novel Rrm3 function is independent of its established role as an ATPase/helicase in facilitating replication fork progression through polymerase blocking obstacles. Using quantitative mass spectrometry and genetic analyses, we find that the homologous recombination factor Rdh54 and Rad5-dependent error-free DNA damage bypass act as independent mechanisms on DNA lesions that arise when Rrm3 catalytic activity is disrupted whereas these mechanisms are dispensable for DNA damage tolerance when the replication function is disrupted, indicating that the DNA lesions generated by the loss of each Rrm3 function are distinct. Although both lesion types activate the DNA-damage checkpoint, we find that the resultant increase in nucleotide levels is not sufficient for continued DNA synthesis under replication stress. Together, our findings suggest a role of Rrm3, via its Orc5-binding domain, in restricting DNA synthesis that is genetically and physically separable from its established catalytic role in facilitating fork progression through replication blocks.
Syed, Salahuddin; Desler, Claus; Rasmussen, Lene J.; Schmidt, Kristina H.
2016-01-01
In response to replication stress cells activate the intra-S checkpoint, induce DNA repair pathways, increase nucleotide levels, and inhibit origin firing. Here, we report that Rrm3 associates with a subset of replication origins and controls DNA synthesis during replication stress. The N-terminal domain required for control of DNA synthesis maps to residues 186–212 that are also critical for binding Orc5 of the origin recognition complex. Deletion of this domain is lethal to cells lacking the replication checkpoint mediator Mrc1 and leads to mutations upon exposure to the replication stressor hydroxyurea. This novel Rrm3 function is independent of its established role as an ATPase/helicase in facilitating replication fork progression through polymerase blocking obstacles. Using quantitative mass spectrometry and genetic analyses, we find that the homologous recombination factor Rdh54 and Rad5-dependent error-free DNA damage bypass act as independent mechanisms on DNA lesions that arise when Rrm3 catalytic activity is disrupted whereas these mechanisms are dispensable for DNA damage tolerance when the replication function is disrupted, indicating that the DNA lesions generated by the loss of each Rrm3 function are distinct. Although both lesion types activate the DNA-damage checkpoint, we find that the resultant increase in nucleotide levels is not sufficient for continued DNA synthesis under replication stress. Together, our findings suggest a role of Rrm3, via its Orc5-binding domain, in restricting DNA synthesis that is genetically and physically separable from its established catalytic role in facilitating fork progression through replication blocks. PMID:27923055
Ferenc, Jaroslav; Červenák, Filip; Birčák, Erik; Juríková, Katarína; Goffová, Ivana; Gorilák, Peter; Huraiová, Barbora; Plavá, Jana; Demecsová, Loriana; Ďuríková, Nikola; Galisová, Veronika; Gazdarica, Matej; Puškár, Marek; Nagy, Tibor; Nagyová, Soňa; Mentelová, Lucia; Slaninová, Miroslava; Ševčovicová, Andrea; Tomáška, Ľubomír
2018-01-01
As future scientists, university students need to learn how to avoid making errors in their own manuscripts, as well as how to identify flaws in papers published by their peers. Here we describe a novel approach on how to promote students' ability to critically evaluate scientific articles. The exercise is based on instructing teams of students to write intentionally flawed manuscripts describing the results of simple experiments. The teams are supervised by instructors advising the students during manuscript writing, choosing the 'appropriate' errors, monitoring the identification of errors made by the other team and evaluating the strength of their arguments in support of the identified errors. We have compared the effectiveness of the method with a journal club-type seminar. Based on the results of our assessment we propose that the described seminar may effectively complement the existing approaches to teach critical scientific thinking.
Deetz, Carl O; Nolan, Debra K; Scott, Mitchell G
2012-01-01
A long-standing practice in clinical laboratories has been to automatically repeat laboratory tests when values trigger automated "repeat rules" in the laboratory information system such as a critical test result. We examined 25,553 repeated laboratory values for 30 common chemistry tests from December 1, 2010, to February 28, 2011, to determine whether this practice is necessary and whether it may be possible to reduce repeat testing to improve efficiency and turnaround time for reporting critical values. An "error" was defined to occur when the difference between the initial and verified values exceeded the College of American Pathologists/Clinical Laboratory Improvement Amendments allowable error limit. The initial values from 2.6% of all repeated tests (668) were errors. Of these 668 errors, only 102 occurred for values within the analytic measurement range. Median delays in reporting critical values owing to repeated testing ranged from 5 (blood gases) to 17 (glucose) minutes.
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.
2003-01-01
Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.
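The frequency-domain equation-error idea reduces to a linear regression at a set of analysis frequencies. Below is a minimal synthetic illustration of that regression, not the flight implementation: the state-space matrices, multisine frequencies, and noise level are invented, and finite-record endpoint terms of the Fourier transform are ignored.

```python
# Minimal sketch of frequency-domain equation-error estimation: at each
# analysis frequency, jw*X(w) ~= A*X(w) + B*U(w), linear in A and B.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.02, 2000
t = np.arange(n) * dt
A_true = np.array([[-1.2, 0.9], [-4.0, -1.5]])   # placeholder state matrix
B_true = np.array([[0.1], [-6.0]])               # placeholder control matrix

# Simulate x' = A x + B u with a multisine-like input (forward Euler)
u = sum(np.sin(2 * np.pi * fk * t + fk) for fk in (0.3, 0.7, 1.3, 2.1))
x = np.zeros((n, 2))
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (A_true @ x[k] + B_true[:, 0] * u[k])
x += 0.01 * rng.normal(size=x.shape)             # measurement noise

# Finite Fourier transforms evaluated only at the excited frequencies
freqs = np.array([0.3, 0.7, 1.3, 2.1])           # Hz
E = np.exp(-2j * np.pi * np.outer(freqs, t)) * dt
X, U = E @ x, E @ u[:, None]

# Least squares: jw*X = [X U] @ theta with theta = [A^T; B^T];
# stacking real and imaginary parts keeps the estimates real-valued
jwX = (2j * np.pi * freqs)[:, None] * X
Z = np.hstack([X, U])
theta, *_ = np.linalg.lstsq(np.vstack([Z.real, Z.imag]),
                            np.vstack([jwX.real, jwX.imag]), rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T
print(np.round(A_hat, 2))                        # approximately recovers A_true
```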
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somayaji, Anil B.; Amai, Wendy A.; Walther, Eleanor A.
This report describes the successful extension of artificial immune systems from the domain of computer security to the domain of real-time control systems for robotic vehicles. A biologically-inspired computer immune system was added to the control system of two different mobile robots. As an additional layer in a multi-layered approach, the immune system is complementary to traditional error detection and error handling techniques. This can be thought of as biologically-inspired defense in depth. We demonstrated that an immune system can be added with very little application developer effort, resulting in little to no performance impact. The methods described here are extensible to any system that processes a sequence of data through a software interface.
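The report itself includes no code, but the classic sequence-based "self/nonself" idea that such immune systems borrow from computer security can be sketched in a few lines; the event names and window length here are invented for illustration.

```python
# Minimal sketch of sequence-based anomaly detection: build a "self"
# database of short event n-grams during normal operation, then flag
# unseen n-grams at run time.
def build_self(trace, n=3):
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def anomaly_score(trace, self_db, n=3):
    windows = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
    foreign = sum(w not in self_db for w in windows)
    return foreign / max(len(windows), 1)

normal = ["read_imu", "plan", "set_motor", "read_imu", "plan", "set_motor"]
db = build_self(normal)
faulty = ["read_imu", "plan", "plan", "set_motor", "read_imu", "set_motor"]
print(anomaly_score(faulty, db))   # > 0 signals deviation from "self"
```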
Monitoring in Language Perception: Mild and Strong Conflicts Elicit Different ERP Patterns
ERIC Educational Resources Information Center
van de Meerendonk, Nan; Kolk, Herman H. J.; Vissers, Constance Th. W. M.; Chwilla, Dorothee J.
2010-01-01
In the language domain, most studies of error monitoring have been devoted to language production. However, in language perception, errors are made as well and we are able to detect them. According to the monitoring theory of language perception, a strong conflict between what is expected and what is observed triggers reanalysis to check for…
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
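The underlying principle can be sketched with a two-grid, Richardson-style comparison; the article's algorithm uses three rectilinear grids and differs in detail, so the following is only an illustration of grid-based absolute and relative error control for the five-point scheme.

```python
# Minimal sketch: estimate five-point-scheme discretization error by
# solving on two resolutions and comparing at shared grid points.
import numpy as np

def solve_poisson(n, f, iters=8000):
    """Jacobi iteration for -lap(u) = f on the unit square, u = 0 on boundary."""
    h = 1.0 / n
    u = np.zeros((n + 1, n + 1))
    xs = np.linspace(0.0, 1.0, n + 1)
    F = f(*np.meshgrid(xs, xs, indexing="ij"))
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                + h * h * F[1:-1, 1:-1])
    return u

f = lambda x, y: 2 * np.pi ** 2 * np.sin(np.pi * x) * np.sin(np.pi * y)
u_h = solve_poisson(16, f)       # coarse grid
u_h2 = solve_poisson(32, f)      # grid of twice the resolution

# Second-order scheme: coarse-grid error ~ (u_h - u_h2) * 4/3 (Richardson)
diff = np.abs(u_h - u_h2[::2, ::2])
abs_err = diff.max() * 4.0 / 3.0
rel_err = abs_err / np.abs(u_h2).max()
print(abs_err, rel_err)
```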
Critical Thinking vs. Critical Consciousness
ERIC Educational Resources Information Center
Doughty, Howard A.
2006-01-01
This article explores four kinds of critical thinking. The first is found in Socratic dialogues, which employ critical thinking mainly to reveal logical fallacies in common opinions, thus cleansing superior minds of error and leaving philosophers free to contemplate universal verities. The second is critical interpretation (hermeneutics) which…
Frequency of pediatric medication administration errors and contributing factors.
Ozkan, Suzan; Kocaman, Gulseren; Ozturk, Candan; Seren, Seyda
2011-01-01
This study examined the frequency of pediatric medication administration errors and contributing factors. This research used the undisguised observation method and Critical Incident Technique. Errors and contributing factors were classified through the Organizational Accident Model. Errors were made in 36.5% of the 2344 doses that were observed. The most frequent errors were those associated with administration at the wrong time. According to the results of this study, errors arise from problems within the system.
Xu, Z N; Wang, S Y
2015-02-01
To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a large number of numerical drop profiles on inclined surfaces with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a substantial amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error in the inclined plane method to less than a certain value even for different types of liquids.
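A minimal version of the fitting step reads as follows; the profile points are synthetic, the a+c=1 normalization is one of several common least-squares conic-fit variants (the paper's ellipse-fitting algorithm may differ), and only the acute-angle branch is handled.

```python
# Minimal sketch: least-squares conic (ellipse) fit to drop-profile points
# and the tangent-based contact angle it yields.
import numpy as np

# Synthetic drop-profile points lying on an ellipse
theta = np.linspace(0.2, np.pi - 0.2, 80)
xp, yp = 2.0 * np.cos(theta), 1.2 * np.sin(theta)

# Conic F(x,y) = a x^2 + b xy + c y^2 + d x + e y + f = 0 with a + c = 1;
# substituting c = 1 - a gives a linear least-squares problem
M = np.column_stack([xp**2 - yp**2, xp * yp, xp, yp, np.ones_like(xp)])
(a, b, d, e, f0), *_ = np.linalg.lstsq(M, -yp**2, rcond=None)
c = 1.0 - a

def contact_angle_deg(x, y):
    """Tangent slope at (x, y) via implicit differentiation of F."""
    dFdx = 2 * a * x + b * y + d
    dFdy = b * x + 2 * c * y + e
    slope = -dFdx / dFdy
    return np.degrees(np.arctan(np.abs(slope)))   # acute-angle case only

print(contact_angle_deg(xp[0], yp[0]))   # right-hand contact angle, ~71 deg
```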
End-of-life care practices of critical care nurses: A national cross-sectional survey.
Ranse, Kristen; Yates, Patsy; Coyer, Fiona
2016-05-01
The critical care context presents important opportunities for nurses to deliver skilled, comprehensive care to patients at the end of life and their families. Limited research has identified the actual end-of-life care practices of critical care nurses. To identify the end-of-life care practices of critical care nurses. A national cross-sectional online survey. The survey was distributed to members of an Australian critical care nursing association and 392 critical care nurses (response rate 25%) completed the survey. Exploratory factor analysis using principal axis factoring with oblique rotation was undertaken on survey responses to identify the domains of end-of-life care practice. Descriptive statistics were calculated for individual survey items. Exploratory factor analysis identified six domains of end-of-life care practice: information sharing, environmental modification, emotional support, patient and family centred decision-making, symptom management and spiritual support. Descriptive statistics identified a high level of engagement in information sharing and environmental modification practices and less frequent engagement in items from the emotional support and symptom management practice areas. The findings of this study identified domains of end-of-life care practice, and critical care nurse engagement in these practices. The findings highlight future training and practice development opportunities, including the need for experiential learning targeting the emotional support practice domain. Further research is needed to enhance knowledge of symptom management practices during the provision of end-of-life care to inform and improve practice in this area.
Torrents, Genís; Illa, Xavier; Vives, Eduard; Planes, Antoni
2017-01-01
A simple model for the growth of elongated domains (needle-like) during a martensitic phase transition is presented. The model is purely geometric and the only interactions are due to the sequentiality of the kinetic problem and to the excluded volume, since domains cannot retransform back to the original phase. Despite this very simple interaction, numerical simulations show that the final observed microstructure can be described as being a consequence of dipolar-like interactions. The model is analytically solved in 2D for the case in which two symmetry related domains can grow in the horizontal and vertical directions. It is remarkable that the solution is analytic both for a finite system of size L×L and in the thermodynamic limit L→∞, where the elongated domains become lines. Results prove the existence of criticality, i.e., that the domain sizes observed in the final microstructure show a power-law distribution characterized by a critical exponent. The exponent, nevertheless, depends on the relative probabilities of the different equivalent variants. The results provide a plausible explanation of the weak universality of the critical exponents measured during martensitic transformations in metallic alloys. Experimental exponents show a monotonous dependence with the number of equivalent variants that grow during the transition.
Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram
2009-03-01
Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands audiovisual processing both in speech and language treatment and in the diagnosis of oral-facial apraxia. The purpose of this study was to investigate differences in audiovisual perception of speech as compared to non-speech oral gestures. Bimodal and unimodal speech and non-speech items were used and additionally discordant stimuli constructed, which were presented for imitation. This study examined a group of healthy volunteers and a group of patients with lesions of the left hemisphere. Patients made substantially more errors than controls, but the factors influencing imitation accuracy were more or less the same in both groups. Error analyses in both groups suggested different types of representations for speech as compared to the non-speech domain, with speech having a stronger weight on the auditory modality and non-speech processing on the visual modality. Additionally, this study was able to show that the McGurk effect is not limited to speech.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
NASA Astrophysics Data System (ADS)
Sun, H. Y.; Hu, H. N.; Sun, Y. P.; Nie, X. F.
2004-08-01
The influence of a rotating in-plane field on vertical Bloch lines (VBLs) in the walls of the second kind of dumbbell domains (IIDs) was investigated, and a critical in-plane field range [H_ip1, H_ip2] within which the VBLs in IIDs annihilate under a rotating in-plane field was found (H_ip1 is the maximal critical in-plane field at which hard domains remain stable; H_ip2 is the minimal critical in-plane field at which all of the hard domains convert to soft bubbles (SBs, without VBLs)). The range [H_ip1, H_ip2] changes with the rotating angle Δϕ: H_ip1 remains stable, while H_ip2 decreases with decreasing Δϕ. Comparing this with the spontaneous shrinking experiment on IIDs under both bias field and in-plane field, we presume that under an applied in-plane field there exists a direction along which the VBLs in the domain walls annihilate most easily, and that it is the direction in which the domain walls are perpendicular to the in-plane field.
Isospin Breaking Corrections to the HVP with Domain Wall Fermions
NASA Astrophysics Data System (ADS)
Boyle, Peter; Guelpers, Vera; Harrison, James; Juettner, Andreas; Lehner, Christoph; Portelli, Antonin; Sachrajda, Christopher
2018-03-01
We present results for the QED and strong isospin breaking corrections to the hadronic vacuum polarization using Nf = 2 + 1 Domain Wall fermions. QED is included in an electro-quenched setup using two different methods, a stochastic and a perturbative approach. Results and statistical errors from both methods are directly compared with each other.
ERIC Educational Resources Information Center
Boons, Tinne; De Raeve, Leo; Langereis, Margreet; Peeraer, Louis; Wouters, Jan; van Wieringen, Astrid
2013-01-01
Practical experience and research reveal generic spoken language benefits after cochlear implantation. However, systematic research on specific language domains and error analyses are required to probe sub-skills. Moreover, the effect of predictive factors on distinct language domains is unknown. In this study, outcomes of 70 school-aged children…
Coupling finite element and spectral methods: First results
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Debit, Naima; Maday, Yvon
1987-01-01
A Poisson equation on a rectangular domain is solved by coupling two methods: the domain is divided into two squares, a finite element approximation is used on the first square, and a spectral discretization is used on the second one. Two kinds of matching conditions on the interface are presented and compared. In both cases, error estimates are proved.
Samsuri, Srima Elina; Pei Lin, Lua; Fahrni, Mathumalar Loganathan
2015-11-26
To assess the safety attitudes of pharmacists, provide a profile of their domains of safety attitude and correlate their attitudes with self-reported rates of medication errors. A cross-sectional study utilising the Safety Attitudes Questionnaire (SAQ). 3 public hospitals and 27 health clinics. 117 pharmacists. Safety culture mean scores, variation in scores across working units and between hospitals versus health clinics, predictors of safety culture, and medication errors and their correlation. Response rate was 83.6% (117 valid questionnaires returned). Stress recognition (73.0±20.4) and working condition (54.8±17.4) received the highest and lowest mean scores, respectively. Pharmacists exhibited positive attitudes towards: stress recognition (58.1%), job satisfaction (46.2%), teamwork climate (38.5%), safety climate (33.3%), perception of management (29.9%) and working condition (15.4%). With the exception of stress recognition, those who worked in health clinics scored higher than those in hospitals (p<0.05) and higher scores (overall score as well as score for each domain except for stress recognition) correlated negatively with reported number of medication errors. Conversely, those working in hospital (versus health clinic) were 8.9 times more likely (p<0.01) to report a medication error (OR 8.9, CI 3.08 to 25.7). As stress recognition increased, the number of medication errors reported increased (p=0.023). Years of work experience (p=0.017) influenced the number of medication errors reported. For every additional year of work experience, pharmacists were 0.87 times less likely to report a medication error (OR 0.87, CI 0.78 to 0.98). A minority (20.5%) of the pharmacists working in hospitals and health clinics was in agreement with the overall SAQ questions and scales. Pharmacists in outpatient and ambulatory units and those in health clinics had better perceptions of safety culture. As perceptions improved, the number of medication errors reported decreased. Group-specific interventions that target specific domains are necessary to improve the safety culture. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Vauhkonen, P J; Vauhkonen, M; Kaipio, J P
2000-02-01
In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.
Good coupling for the multiscale patch scheme on systems with microscale heterogeneity
NASA Astrophysics Data System (ADS)
Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.
2017-05-01
Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
Causal impulse response for circular sources in viscous media
Kelly, James F.; McGough, Robert J.
2008-01-01
The causal impulse response of the velocity potential for the Stokes wave equation is derived for calculations of transient velocity potential fields generated by circular pistons in viscous media. The causal Green’s function is numerically verified using the material impulse response function approach. The causal, lossy impulse response for a baffled circular piston is then calculated within the near field and the far field regions using expressions previously derived for the fast near field method. Transient velocity potential fields in viscous media are computed with the causal, lossy impulse response and compared to results obtained with the lossless impulse response. The numerical error in the computed velocity potential field is quantitatively analyzed for a range of viscous relaxation times and piston radii. Results show that the largest errors are generated in locations near the piston face and for large relaxation times, and errors are relatively small otherwise. Unlike previous frequency-domain methods that require numerical inverse Fourier transforms for the evaluation of the lossy impulse response, the present approach calculates the lossy impulse response directly in the time domain. The results indicate that this causal impulse response is ideal for time-domain calculations that simultaneously account for diffraction and quadratic frequency-dependent attenuation in viscous media. PMID:18397018
NASA Technical Reports Server (NTRS)
Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.
1989-01-01
The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.
McDonald, Catherine C; Curry, Allison E; Kandadai, Venk; Sommers, Marilyn S; Winston, Flaura K
2014-11-01
Motor vehicle crashes are the leading cause of death and acquired disability during the first four decades of life. While teen drivers have the highest crash risk, few studies examine the similarities and differences in teen and adult driver crashes. We aimed to: (1) identify and compare the most frequent crash scenarios-integrated information on a vehicle's movement prior to crash, immediate pre-crash event, and crash configuration-for teen and adult drivers involved in serious crashes, and (2) for the most frequent scenarios, explore whether the distribution of driver critical errors differed for teens and adult drivers. We analyzed data from the National Motor Vehicle Crash Causation Survey, a nationally representative study of serious crashes conducted by the U.S. National Highway Traffic Safety Administration from 2005 to 2007. Our sample included 642 16- to 19-year-old and 1167 35- to 54-year-old crash-involved drivers (weighted n=296,482 and 439,356, respectively) who made a critical error that led to their crash's critical pre-crash event (i.e., event that made the crash inevitable). We estimated prevalence ratios (PR) and 95% confidence intervals (CI) to compare the relative frequency of crash scenarios and driver critical errors. The top five crash scenarios among teen drivers, accounting for 37.3% of their crashes, included: (1) going straight, other vehicle stopped, rear end; (2) stopped in traffic lane, turning left at intersection, turn into path of other vehicle; (3) negotiating curve, off right edge of road, right roadside departure; (4) going straight, off right edge of road, right roadside departure; and (5) stopped in lane, turning left at intersection, turn across path of other vehicle. The top five crash scenarios among adult drivers, accounting for 33.9% of their crashes, included the same scenarios as the teen drivers with the exception of scenario (3) and the addition of going straight, crossing over an intersection, and continuing on a straight path. For two scenarios ((1) and (3) above), teens were more likely than adults to make a critical decision error (e.g., traveling too fast for conditions). Our findings indicate that among those who make a driver critical error in a serious crash, there are few differences in the scenarios or critical driver errors for teen and adult drivers. Copyright © 2014 Elsevier Ltd. All rights reserved.
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and the error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
Laser transit anemometer measurements of a JANNAF nozzle base velocity flow field
NASA Technical Reports Server (NTRS)
Hunter, William W., Jr.; Russ, C. E., Jr.; Clemmons, J. I., Jr.
1990-01-01
Velocity flow fields of a nozzle jet exhausting into a supersonic flow were surveyed. The measurements were obtained with a laser transit anemometer (LTA) system in the time domain with a correlation instrument. The LTA data is transformed into the velocity domain to remove the error that occurs when the data is analyzed in the time domain. The final data is shown in velocity vector plots for positions upstream, downstream, and in the exhaust plane of the jet nozzle.
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
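The burst-position dependence reported above is easiest to see at the interleaving stage. Below is a self-contained toy sketch (illustrative only; actual CD-ROM coding uses cross-interleaved Reed-Solomon with much deeper interleaving) showing how a channel burst is spread across codewords, so that the burst's position relative to the interleaver frame determines how many symbol errors each codeword must correct:

```python
import numpy as np

def interleave(data, depth):
    """Write symbols row-wise, read column-wise."""
    return np.array(data).reshape(depth, -1).T.flatten()

def deinterleave(data, depth):
    """Inverse of interleave."""
    return np.array(data).reshape(-1, depth).T.flatten()

codewords = np.arange(40)        # 4 codewords of 10 symbols, labelled 0..39
sent = interleave(codewords, 4)
sent[5:11] = -1                  # a 6-symbol channel burst, marked as -1
received = deinterleave(sent, 4)
for c in range(4):               # errors each codeword must now correct
    hits = int(np.sum(received[c * 10:(c + 1) * 10] == -1))
    print(f"codeword {c}: {hits} corrupted symbols")
```

Shifting the burst window changes which codewords carry one error and which carry two; this is the position effect that the C1/C2 statistics expose.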
Analysis of error-correction constraints in an optical disk
NASA Astrophysics Data System (ADS)
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
The Loyal Opposition Comments on Plan Domain Description Languages
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Golden, Keith; Jonsson, Ari
2003-01-01
In this paper we take a critical look at PDDL 2.1 as designers and users of plan domain description languages. We describe planning domains that have features which are hard to model using PDDL 2.1. We then offer some suggestions on domain description language design, and describe how these suggestions make modeling our chosen domains easier.
Colas, Jaron T; Pauli, Wolfgang M; Larsen, Tobias; Tyszka, J Michael; O'Doherty, John P
2017-10-01
Prediction-error signals consistent with formal models of "reinforcement learning" (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models-namely, "actor/critic" models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning.
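For readers unfamiliar with the two signals being dissociated, here is a minimal tabular sketch (a toy environment, not the paper's fMRI paradigm) contrasting the state-value prediction error used by an actor/critic with the action-value prediction error used by Q-learning:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, alpha, gamma = 5, 2, 0.1, 0.95
V = np.zeros(n_states)                # state values (critic)
Q = np.zeros((n_states, n_actions))   # action values (Q-learning)

def step(state, action):
    """Toy environment: random transitions, reward favours action 1."""
    return rng.integers(n_states), float(action == 1) + rng.normal(0, 0.1)

s = 0
for _ in range(1000):
    a = rng.integers(n_actions)
    s_next, r = step(s, a)
    svpe = r + gamma * V[s_next] - V[s]            # SVPE: independent of action
    avpe = r + gamma * Q[s_next].max() - Q[s, a]   # AVPE: tied to the action taken
    V[s] += alpha * svpe
    Q[s, a] += alpha * avpe
    s = s_next
```

The paradigm exploits exactly this asymmetry: the SVPE ignores which action was chosen, while the AVPE does not.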
Mooney, Peter; Purves, Ross S.; Rocchini, Duccio; Walz, Ariane
2016-01-01
Volunteered geographical information (VGI) and citizen science have become important sources of data for much scientific research. In the domain of land cover, crowdsourcing can provide high temporal resolution data to support different analyses of landscape processes. However, scientists may have little control over what gets recorded by the crowd, providing a potential source of error and uncertainty. This study compared analyses of crowdsourced land cover data contributed by different groups, based on nationality (labelled Gondor and Non-Gondor) and on domain experience (labelled Expert and Non-Expert). The analyses used a geographically weighted model to generate maps of land cover and compared the maps generated by the different groups. The results highlight the differences between the maps and show how specific land cover classes were under- and over-estimated. As crowdsourced data and citizen science are increasingly used to replace data collected under designed experiments, this paper highlights the importance of considering between-group variations and their impacts on the results of analyses. Critically, differences in the way that landscape features are conceptualised by different groups of contributors need to be considered when using crowdsourced data in formal scientific analyses. The discussion considers the potential for variation in crowdsourced data and the relativist nature of land cover, and suggests a number of areas for future research. The key finding is that the veracity of citizen science data is not the critical issue per se. Rather, it is important to consider the impacts of differences in the semantics, affordances and functions associated with landscape features held by different groups of crowdsourced data contributors. PMID:27458924
Ghahramanian, Akram; Rezaei, Tayyebeh; Abdullahzadeh, Farahnaz; Sheikhalipour, Zahra; Dianat, Iman
2017-01-01
Background: This study investigated the quality of healthcare services from patients' perspectives and its relationship with patient safety culture and nurse-physician professional communication. Methods: A cross-sectional study was conducted among 300 surgery patients and 101 nurses caring for them in a public hospital in Tabriz, Iran. Data were collected using the service quality measurement scale (SERVQUAL), the hospital survey on patient safety culture (HSOPSC) and a nurse-physician professional communication questionnaire. Results: The highest and lowest mean (±SD) scores of the patients' perception of healthcare service quality belonged to the assurance 13.92 (±3.55) and empathy 6.78 (±1.88) domains, respectively. With regard to the patient safety culture, the mean percentage of positive answers ranged from 45.87% for "non-punitive response to errors" to 68.21% for "organizational continuous learning" domains. The highest and lowest mean (±SD) scores for nurse-physician professional communication were obtained for "cooperation" 3.44 (±0.35) and "non-participative decision-making" 2.84 (±0.34) domains, respectively. The "frequency of reported errors by healthcare professionals" (B=-4.20, 95% CI=-7.14 to -1.27, P<0.01) and "respect and sharing of information" (B=7.69, 95% CI=4.01 to 11.36, P<0.001) predicted the patients' perceptions of the quality of healthcare services. Conclusion: Organizational culture in dealing with medical error should be changed to a non-punitive response. Changes in safety culture towards reporting of errors, effective communication and teamwork between healthcare professionals are recommended. PMID:28695106
The culture of patient safety in an Iranian intensive care unit.
Abdi, Zhaleh; Delgoshaei, Bahram; Ravaghi, Hamid; Abbasi, Mohsen; Heyrani, Ali
2015-04-01
To explore nurses' and physicians' attitudes and perceptions relevant to safety culture and to elicit strategies to promote safety culture in an intensive care unit. A strong safety culture is essential to ensure patient safety in the intensive care unit. This case study adopted a mixed method design. The Safety Attitude Questionnaire (SAQ-ICU version), assessing the safety climate through six domains, was completed by nurses and physicians (n = 42) in an academic intensive care unit. Twenty semi-structured interviews and document analyses were conducted as well. Interviews were analysed using a framework analysis method. Mean scores across the six domains ranged from 52.3 to 72.4 on a 100-point scale. Further analysis indicated that there were statistically significant differences between physicians' and nurses' attitudes toward teamwork (mean scores: 64.5/100 vs. 52.6/100, d = 1.15, t = 3.69, P < 0.001) and job satisfaction (mean scores: 78.2/100 vs. 57.7/100, d = 1.5, t = 4.8, P < 0.001). Interviews revealed several safety challenges including underreporting, failure to learn from errors, lack of speaking up, low job satisfaction among nurses and ineffective nurse-physician communication. The results indicate that all the domains need improvements. However, further attention should be devoted to error reporting and analysis, communication and teamwork among professional groups, and nurses' job satisfaction. Nurse managers can contribute to promoting a safety culture by encouraging staff to report errors, fostering learning from errors and addressing inter-professional communication problems. © 2013 John Wiley & Sons Ltd.
Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media
Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.
2009-01-01
Green's functions for radar waves propagating in heterogeneous 2.5D media can be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties may vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions can be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.
Kienle, A; Patterson, M S
1997-09-01
We investigate theoretically the errors in determining the reduced scattering and absorption coefficients of semi-infinite turbid media from frequency-domain reflectance measurements made at small distances between the source and the detector(s). The errors are due to the uncertainties in the measurement of the phase, the modulation and the steady-state reflectance as well as to the diffusion approximation which is used as a theoretical model to describe light propagation in tissue. Configurations using one and two detectors are examined for the measurement of the phase and the modulation and for the measurement of the phase and the steady-state reflectance. Three solutions of the diffusion equation are investigated. We show that measurements of the phase and the steady-state reflectance at two different distances are best suited for the determination of the optical properties close to the source. For this arrangement the errors in the absorption coefficient due to typical uncertainties in the measurement are greater than those resulting from the application of the diffusion approximation at a modulation frequency of 200 MHz. A Monte Carlo approach is also examined; this avoids the errors due to the diffusion approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eltoweissy, Mohamed Y.; Du, David H.C.; Gerla, Mario
Mission-Critical Networking (MCN) refers to networking for application domains where life or livelihood may be at risk. Typical application domains for MCN include critical infrastructure protection and operation, emergency and crisis intervention, healthcare services, and military operations. Such networking is essential for safety, security and economic vitality in our complex world characterized by uncertainty, heterogeneity, emergent behaviors, and the need for reliable and timely response. MCN comprises networking technology, infrastructures and services that may alleviate the risk and directly enable and enhance connectivity for mission-critical information exchange among diverse, widely dispersed, mobile users.
Cognition-Action Trade-Offs Reflect Organization of Attention in Infancy.
Berger, Sarah E; Harbourne, Regina T; Horger, Melissa N
2018-01-01
This chapter discusses what cognition-action trade-offs in infancy reveal about the organization and developmental trajectory of attention. We focus on internal attention because this aspect is most relevant to the immediate concerns of infancy, such as fluctuating levels of expertise, balancing multiple taxing skills simultaneously, learning how to control attention under variable conditions, and coordinating distinct psychological domains. Cognition-action trade-offs observed across the life span include perseveration during skill emergence, errors and inefficient strategies during decision making, and the allocation of resources when attention is taxed. An embodied cognitive-load account interprets these behavioral patterns as a result of limited attentional resources allocated across simultaneous, taxing task demands. For populations where motor errors could be costly, like infants and the elderly, attention is typically devoted to motor demands with errors occurring in the cognitive domain. In contrast, healthy young adults tend to preserve their cognitive performance by modifying their actions. © 2018 Elsevier Inc. All rights reserved.
Pay attention! The critical importance of assessing attention in older adults with dementia.
Kolanowski, Ann M; Fick, Donna M; Yevchak, Andrea M; Hill, Nikki L; Mulhall, Paula M; McDowell, Jane A
2012-11-01
Attention is an important cognitive domain that is affected in Alzheimer's disease and other dementias. It influences performance in most other cognitive domains, as well as activities of daily living. Nurses are often unaware of the critical importance of assessing attention as part of the overall mental status examination. This article addresses an important gap in nurses' knowledge. The authors present a brief overview of attention as a critical cognitive domain in dementia; review instruments/methods for standardizing and enhancing the assessment of attention; and offer ways to help ensure that best practices in the assessment, recognition, and documentation of inattention are implemented in the clinical area. Clinical resources that practicing nurses may find helpful are included. Copyright 2012, SLACK Incorporated.
A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint
NASA Technical Reports Server (NTRS)
Barth, Timothy
2004-01-01
This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics is then considered. Finally, future directions and open problems are discussed.
Learning from Errors: Critical Incident Reporting in Nursing
ERIC Educational Resources Information Center
Gartmeier, Martin; Ottl, Eva; Bauer, Johannes; Berberat, Pascal Oliver
2017-01-01
Purpose: The purpose of this paper is to conceptualize error reporting as a strategy for informal workplace learning and investigate nurses' error reporting cost/benefit evaluations and associated behaviors. Design/methodology/approach: A longitudinal survey study was carried out in a hospital setting with two measurements (time 1 [t1]:…
Adaptive Constructive Processes and the Future of Memory
ERIC Educational Resources Information Center
Schacter, Daniel L.
2012-01-01
Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance for direct sequence code division multiple access (DS-CDMA) than conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot-assisted MMSE-CE is confirmed by computer simulation.
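For context, here is a sketch of the one-tap MMSE frequency-domain equalizer that either channel estimate feeds into, assuming a cyclic-prefixed block and a known (or estimated) channel; the paper's 2-step MLCE itself is not reproduced here:

```python
import numpy as np

def mmse_fde(received_block, channel_taps, snr_linear):
    """Per-bin MMSE weights W_k = H_k* / (|H_k|^2 + 1/SNR), applied in the
    frequency domain, then transformed back for despreading/detection."""
    n = len(received_block)
    H = np.fft.fft(channel_taps, n)      # channel frequency response
    R = np.fft.fft(received_block)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_linear)
    return np.fft.ifft(W * R)
```

Errors in the estimate of H propagate directly into W, which is why the quality of channel estimation dominates the achievable BER.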
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
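A minimal sketch of the recursive Fourier transform at the heart of this approach, under the assumption that the transform is accumulated one sample at a time at a fixed set of analysis frequencies (the class and interface are illustrative, not the flight code):

```python
import numpy as np

class RecursiveDFT:
    """Running Fourier transform: each new time-domain sample updates
    X(omega) in O(n_freqs) work, so frequency-domain regressors for
    equation-error least squares are available at every time step."""
    def __init__(self, freqs_hz, dt):
        self.omega = 2.0 * np.pi * np.asarray(freqs_hz)
        self.dt = dt
        self.t = 0.0
        self.X = np.zeros(len(self.omega), dtype=complex)

    def update(self, sample):
        self.X += sample * np.exp(-1j * self.omega * self.t) * self.dt
        self.t += self.dt
        return self.X
```

Transforming each measured signal this way reduces the equation-error problem to a small complex least-squares solve per step, which is what keeps the computational requirements low.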
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
To identify between and within profession-rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 Critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between and within profession-rater reliability and modal clinical impact grading. Between and within profession rater reliability analysis used linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
A new fictitious domain approach for Stokes equation
NASA Astrophysics Data System (ADS)
Yang, Min
2017-10-01
The purpose of this paper is to present a new fictitious domain approach based on Nitsche's method combined with a penalty method for the Stokes equation. This method allows for an easy and flexible handling of the geometrical aspects. Stability and an a priori error estimate are proved. Finally, a numerical experiment is provided to verify the theoretical findings.
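For orientation, here is a generic penalty-type fictitious domain formulation of the Stokes problem, shown only to illustrate the idea; the paper's scheme couples this kind of penalization with Nitsche-type weak enforcement of the boundary conditions, and its exact form may differ:

```latex
% Physical domain \omega embedded in a simple box \Omega; the velocity is
% penalized on the fictitious part \Omega \setminus \omega, so u -> 0 there
% as the penalty parameter \varepsilon -> 0.
\begin{aligned}
  -\nu \Delta u + \nabla p
    + \tfrac{1}{\varepsilon}\,\chi_{\Omega \setminus \omega}\, u &= f
    && \text{in } \Omega,\\
  \nabla \cdot u &= 0 && \text{in } \Omega.
\end{aligned}
```

The attraction is that Ω can be meshed with a simple structured grid regardless of the geometry of ω, which is the "easy and flexible handling of the geometrical aspects" claimed above.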
Xue, You-Lin; Wang, Hao; Riedy, Michael; Roberts, Brittany-Lee; Sun, Yuna; Song, Yong-Bo; Jones, Gary W; Masison, Daniel C; Song, Youtao
2018-05-01
Genetic screens using Saccharomyces cerevisiae have identified an array of Hsp40 (Ydj1p) J-domain mutants that are impaired in the ability to cure the yeast [URE3] prion through disrupting functional interactions with Hsp70. However, biochemical analysis of some of these Hsp40 J-domain mutants has so far failed to provide major insight into the specific functional changes in Hsp40-Hsp70 interactions. To explore the detailed structural and dynamic properties of the Hsp40 J-domain, 20-ns molecular dynamics simulations of 4 mutants (D9A, D36A, A30T, and F45S) and the wild-type J-domain were performed, followed by Hsp70 docking simulations. Results demonstrated that although the Hsp70 interaction mechanism of the mutants may vary, the major structural change was targeted to the critical HPD motif of the J-domain. Our computational analysis fits well with previous yeast genetics studies in highlighting the importance of J-domain function in prion propagation. During the molecular dynamics simulations several important residues were identified and predicted to play an essential role in J-domain structure. Among these residues, Y26 and F45 were confirmed, using both in silico and in vivo methods, as being critical for Ydj1p function.
Pediatric Critical Care Nursing Research Priorities-Initiating International Dialogue.
Tume, Lyvonne N; Coetzee, Minette; Dryden-Palmer, Karen; Hickey, Patricia A; Kinney, Sharon; Latour, Jos M; Pedreira, Mavilde L G; Sefton, Gerri R; Sorce, Lauren; Curley, Martha A Q
2015-07-01
To identify and prioritize research questions of concern to the practice of pediatric critical care nursing. One-day consensus conference. Using a conceptual framework by Benner et al describing domains of practice in critical care nursing, nine international nurse researchers presented state-of-the-art lectures. Each identified knowledge gaps in their assigned practice domain and then posed three research questions to fill that gap. Meeting participants then prioritized the proposed research questions using an interactive multivoting process. Seventh World Congress on Pediatric Intensive and Critical Care in Istanbul, Turkey. Pediatric critical care nurses and nurse scientists attending the open consensus meeting. Systematic review, gap analysis, and interactive multivoting. The participants prioritized 27 nursing research questions in nine content domains. The top four research questions were 1) identifying nursing interventions that directly impact the child and family's experience during the withdrawal of life support, 2) evaluating the long-term psychosocial impact of a child's critical illness on family outcomes, 3) articulating core nursing competencies that prevent unstable situations from deteriorating into crises, and 4) describing the level of nursing education and experience in pediatric critical care that has a protective effect on the mortality and morbidity of critically ill children. The consensus meeting was effective in organizing pediatric critical care nursing knowledge, identifying knowledge gaps and prioritizing nursing research initiatives that could be used to advance nursing science across world regions.
Diffraction-based overlay metrology for double patterning technologies
NASA Astrophysics Data System (ADS)
Dasari, Prasad; Korlahalli, Rahul; Li, Jie; Smith, Nigel; Kritsun, Oleg; Volkman, Cathy
2009-03-01
The extension of optical lithography to 32nm and beyond is made possible by Double Patterning Techniques (DPT) at critical levels of the process flow. The ease of DPT implementation is hindered by the increased significance of critical dimension uniformity and overlay errors. Diffraction-based overlay (DBO) has been shown to be an effective metrology solution for accurate determination of the overlay errors associated with double patterning [1, 2] processes. In this paper we will report its use in litho-freeze-litho-etch (LFLE) and spacer double patterning technology (SDPT), which are pitch splitting solutions that reduce the significance of overlay errors. Since the control of overlay between various mask/level combinations is critical for fabrication, precise and accurate assessment of errors by advanced metrology techniques such as spectroscopic diffraction-based overlay (DBO) and traditional image-based overlay (IBO) using advanced target designs will be reported. A comparison between DBO, IBO and CD-SEM measurements will be reported. A discussion of TMU requirements for 32nm technology and TMU performance data of LFLE and SDPT targets by different overlay approaches will be presented.
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
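A minimal sketch of the pipeline whose error dynamics are analyzed: form the Poisson source term from measured velocity gradients and solve for pressure, so PIV noise enters through both the source term and the boundary treatment. The grid handling, boundary conditions, and Jacobi solver here are simplifying assumptions, not the paper's method:

```python
import numpy as np

def pressure_from_piv(u, v, dx, rho=1.0, n_iter=2000):
    """Solve lap(p) = -rho*(u_x^2 + 2*u_y*v_x + v_y^2) for steady 2D
    incompressible flow on a uniform grid (arrays indexed [y, x]),
    using Jacobi iteration with homogeneous Neumann boundaries."""
    dudy, dudx = np.gradient(u, dx)
    dvdy, dvdx = np.gradient(v, dx)
    rhs = -rho * (dudx ** 2 + 2.0 * dudy * dvdx + dvdy ** 2)
    p = np.zeros_like(u)
    for _ in range(n_iter):
        p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                    np.roll(p, 1, 1) + np.roll(p, -1, 1) - dx ** 2 * rhs)
        p[0, :], p[-1, :] = p[1, :], p[-2, :]     # dp/dn = 0 at edges
        p[:, 0], p[:, -1] = p[:, 1], p[:, -2]
    return p - p.mean()     # pressure is defined only up to a constant
```

Because the measured (u, v) appear squared in the source term, velocity noise enters nonlinearly, one reason the error bound depends on the boundary-condition type and the dimensions of the flow domain.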
The current and ideal state of anatomic pathology patient safety.
Raab, Stephen Spencer
2014-01-01
An anatomic pathology diagnostic error may be secondary to a number of active and latent technical and/or cognitive components, which may occur anywhere along the total testing process in clinical and/or laboratory domains. For the pathologist interpretive steps of diagnosis, we examine Kahneman's framework of slow and fast thinking to explain different causes of error in precision (agreement) and in accuracy (truth). The pathologist cognitive diagnostic process involves image pattern recognition and a slow thinking error may be caused by the application of different rationally-constructed mental maps of image criteria/patterns by different pathologists. This type of error is partly related to a system failure in standardizing the application of these maps. A fast thinking error involves the flawed leap from image pattern to incorrect diagnosis. In the ideal state, anatomic pathology systems would target these cognitive error causes as well as the technical latent factors that lead to error.
Problems and pitfalls in cardiac drug therapy.
Stone, S M; Rai, N; Nei, J
2001-01-01
Medical errors in the care of patients may account for 44,000 to 98,000 deaths per year, and 7,000 deaths per year are attributed to medication errors alone. Increasing awareness among health care providers of potential errors is a critical step toward improving the safety of medical care. Because today's medications are increasingly complex, approved at an accelerated rate, and often have a narrow therapeutic window with only a small margin of safety, patient and provider education is critical in assuring optimal therapeutic outcomes. Providers can use electronic resources such as Web sites to keep informed on drug-drug, drug-food, and drug-nutritional supplements interactions.
Optimizing spectral wave estimates with adjoint-based sensitivity maps
NASA Astrophysics Data System (ADS)
Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos
2014-04-01
A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.
He, Pingan; Jagannathan, S
2007-04-01
A novel adaptive-critic-based neural network (NN) controller in discrete time is designed to deliver a desired tracking performance for a class of nonlinear systems in the presence of actuator constraints. The constraints of the actuator are treated in the controller design as the saturation nonlinearity. The adaptive critic NN controller architecture based on state feedback includes two NNs: the critic NN is used to approximate the "strategic" utility function, whereas the action NN is employed to minimize both the strategic utility function and the unknown nonlinear dynamic estimation errors. The critic and action NN weight updates are derived by minimizing certain quadratic performance indexes. Using the Lyapunov approach and with novel weight updates, the uniformly ultimate boundedness of the closed-loop tracking error and weight estimates is shown in the presence of NN approximation errors and bounded unknown disturbances. The proposed NN controller works in the presence of multiple nonlinearities, unlike other schemes that normally approximate one nonlinearity. Moreover, the adaptive critic NN controller does not require an explicit offline training phase, and the NN weights can be initialized at zero or random. Simulation results justify the theoretical analysis.
Kandel, Himal; Khadka, Jyoti; Goggin, Michael; Pesudovs, Konrad
2017-12-01
This review has identified the best existing patient-reported outcome (PRO) instruments in refractive error. The article highlights the limitations of the existing instruments and discusses the way forward. A systematic review was conducted to identify the types of PROs used in refractive error, to determine the quality of the existing PRO instruments in terms of their psychometric properties, and to determine the limitations in the content of the existing PRO instruments. Articles describing a PRO instrument measuring 1 or more domains of quality of life in people with refractive error were identified by electronic searches on the MEDLINE, PubMed, Scopus, Web of Science, and Cochrane databases. The information on content development, psychometric properties, validity, reliability, and responsiveness of those PRO instruments was extracted from the selected articles. The analysis was done based on a comprehensive set of assessment criteria. One hundred forty-eight articles describing 47 PRO instruments in refractive error were included in the review. Most of the articles (99 [66.9%]) used refractive error-specific PRO instruments. The PRO instruments comprised 19 refractive, 12 vision but nonrefractive, and 16 generic PRO instruments. Only 17 PRO instruments were validated in refractive error populations; six of them were developed using Rasch analysis. None of the PRO instruments has items across all domains of quality of life. The Quality of Life Impact of Refractive Correction, the Quality of Vision, and the Contact Lens Impact on Quality of Life have comparatively better quality, with some limitations, compared with the other PRO instruments. This review describes the PRO instruments and informs the choice of an appropriate measure in refractive error. We identified the need for a comprehensive and scientifically robust refractive error-specific PRO instrument. Item banking and computer-adaptive testing systems could be the way to provide such an instrument.
A Critical Comparison of Classical and Domain Theory: Some Implications for Character Education
ERIC Educational Resources Information Center
Keefer, Matthew Wilks
2006-01-01
Contemporary approaches to moral education are influenced by the "domain theory" approach to understanding moral development (Turiel, 1983; 1998; Nucci, 2001). Domain theory holds there are distinct conventional, personal and moral domains; each constituting a cognitive "structured-whole" with its own normative source and sphere of influence. One…
The presence of English and Spanish dyslexia in the Web
NASA Astrophysics Data System (ADS)
Rello, Luz; Baeza-Yates, Ricardo
2012-09-01
In this study we present a lower bound of the prevalence of dyslexia in the Web for English and Spanish. On the basis of analysis of corpora written by dyslexic people, we propose a classification of the different kinds of dyslexic errors. A representative data set of dyslexic words is used to calculate this lower bound in web pages containing English and Spanish dyslexic errors. We also present an analysis of dyslexic errors in major Internet domains, social media sites, and throughout English- and Spanish-speaking countries. To show the independence of our estimations from the presence of other kinds of errors, we compare them with the overall lexical quality of the Web and with the error rate of noncorrected corpora. The presence of dyslexic errors in the Web motivates work in web accessibility for dyslexic users.
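The lower bound itself is essentially a counting argument: the fraction of pages containing at least one word from a curated list of dyslexia-specific misspellings. A toy sketch, with an invented word list rather than the paper's data set:

```python
def dyslexia_lower_bound(pages, dyslexic_words):
    """Fraction of pages containing at least one listed dyslexic error.
    A lower bound: pages with dyslexic errors outside the list are missed."""
    hits = sum(any(w in page.lower().split() for w in dyslexic_words)
               for page in pages)
    return hits / len(pages)

pages = ["the acommodation was nice", "a perfectly spelled page"]
print(dyslexia_lower_bound(pages, {"acommodation"}))   # 0.5
```

The accompanying comparison with overall lexical quality is what separates dyslexia-specific errors from ordinary typographic noise.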
J domain independent functions of J proteins.
Ajit Tamadaddi, Chetana; Sahi, Chandan
2016-07-01
Heat shock proteins of 40 kDa (Hsp40s), also called J proteins, are obligate partners of Hsp70s. Via their highly conserved and functionally critical J domain, J proteins interact and modulate the activity of their Hsp70 partners. Mutations in the critical residues in the J domain often result in the null phenotype for the J protein in question. However, as more J proteins have been characterized, it is becoming increasingly clear that a significant number of J proteins do not "completely" rely on their J domains to carry out their cellular functions, as previously thought. In some cases, regions outside the highly conserved J domain have become more important making the J domain dispensable for some, if not for all functions of a J protein. This has profound effects on the evolution of such J proteins. Here we present selected examples of J proteins that perform J domain independent functions and discuss this in the context of evolution of J proteins with dispensable J domains and J-like proteins in eukaryotes.
ERIC Educational Resources Information Center
Leicher, Veronika; Mulder, Regina H.
2016-01-01
Purpose: The purpose of this replication study is to identify relevant individual and contextual factors influencing learning from errors at work and to determine if the predictors for learning activities are the same for the domains of nursing and retail banking. Design/methodology/approach: A cross-sectional replication study was carried out in…
An integrative review of health-related quality of life in patients with critical limb ischaemia.
Monaro, Susan; West, Sandra; Gullick, Janice
2017-10-01
To examine the domains and the domain-specific characteristics within a peripheral arterial disease health-related quality of life framework for their usefulness in defining critical limb ischaemia health-related quality of life. Critical Limb Ischaemia presents a highly individualised set of personal and health circumstances. Treatment options include conservative management, revascularisation or amputation. However, the links between treatment decisions and quality of life require further investigation. The framework for this integrative review was the peripheral arterial disease-specific health-related quality of life domains identified by Treat-Jacobson et al. The literature expanded and refined Treat-Jacobson's framework by modifying the characteristics to better describe health-related quality of life in critical limb ischaemia. Given that critical limb ischaemia is a highly individualised situation with powerful health-related quality of life implications, further research focusing on patient and family-centred decision-making relating to therapeutic options and advanced care planning is required. A critical limb ischaemia-specific, health-related quality of life tool is required to capture both the unique characteristics of this disorder, and the outcomes for active or conservative care among this complex group of patients. © 2016 John Wiley & Sons Ltd.
Planetary Transmission Diagnostics
NASA Technical Reports Server (NTRS)
Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.
2004-01-01
This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
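The prediction-error idea behind the lifting scheme can be made concrete in a few lines. The Python sketch below (illustrative only) implements one level of classic, non-adaptive lifting with a fixed linear predictor; the report's constrained adaptive algorithm instead selects a basis per analysis domain and constrains its local slope and curvature.

```python
import numpy as np

def lifting_analysis(signal):
    """One level of a simple lifting wavelet transform (CDF(2,2)-style).

    Split the signal into even and odd samples, predict each odd sample
    from its even neighbours, and keep the prediction error as the
    detail coefficients. A local waveform change (e.g. gear damage)
    shows up as a jump in this prediction error. Boundaries are handled
    by periodic wrap-around for simplicity.
    """
    even, odd = signal[0::2], signal[1::2]
    prediction = 0.5 * (even + np.roll(even, -1))          # predict step
    detail = odd - prediction                              # prediction error
    approx = even + 0.25 * (np.roll(detail, 1) + detail)   # update step
    return approx, detail

# Toy example: a clean tooth-mesh-like waveform vs. one with a local defect.
t = np.linspace(0, 1, 256, endpoint=False)
healthy = np.sin(2 * np.pi * 8 * t)
faulty = healthy.copy()
faulty[100:108] += 0.5                                     # local waveform change
_, d_healthy = lifting_analysis(healthy)
_, d_faulty = lifting_analysis(faulty)
print("max |prediction error|, healthy: %.4f" % np.abs(d_healthy).max())
print("max |prediction error|, faulty:  %.4f" % np.abs(d_faulty).max())
```

The global sinusoid is predicted well in both cases, while the injected local defect produces a much larger prediction error, which is the quantity the diagnostic inspects.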
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
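The flagging logic can be illustrated with a deliberately tiny network. The sketch below is a hand-coded two-node model with made-up probabilities, not the study's Hugin-learned networks; it only shows how a low conditional probability for a plan parameter, given the clinical evidence, becomes a flag for human review.

```python
# Minimal sketch of the flagging idea: a two-node network P(dose | site)
# with hypothetical probabilities. A plan parameter whose probability,
# given the clinical evidence, falls below a threshold is flagged.
p_site = {"lung": 0.6, "brain": 0.4}                     # P(site), made up
p_dose_given_site = {                                    # P(dose | site), made up
    "lung": {"60Gy/30fx": 0.85, "45Gy/3fx": 0.14, "80Gy/1fx": 0.01},
    "brain": {"60Gy/30fx": 0.70, "45Gy/3fx": 0.05, "80Gy/1fx": 0.25},
}

def prob_dose(dose, site=None):
    """P(dose | site) if the site is observed, else the marginal P(dose)."""
    if site is not None:
        return p_dose_given_site[site][dose]
    return sum(p_site[s] * p_dose_given_site[s][dose] for s in p_site)

THRESHOLD = 0.05
for dose in ["60Gy/30fx", "45Gy/3fx", "80Gy/1fx"]:
    p = prob_dose(dose, site="lung")
    flag = "FLAG for review" if p < THRESHOLD else "ok"
    print(f"P(dose={dose} | site=lung) = {p:.3f}  {flag}")
```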
ERIC Educational Resources Information Center
O'Connell, Redmond G.; Bellgrove, Mark A.; Dockree, Paul M.; Lau, Adam; Hester, Robert; Garavan, Hugh; Fitzgerald, Michael; Foxe, John J.; Robertson, Ian H.
2009-01-01
The ability to detect and correct errors is critical to adaptive control of behaviour and represents a discrete neuropsychological function. A number of studies have highlighted that attention-deficit hyperactivity disorder (ADHD) is associated with abnormalities in behavioural and neural responsiveness to performance errors. One limitation of…
Error framing effects on performance: cognitive, motivational, and affective pathways.
Steele-Johnson, Debra; Kalinoski, Zachary T
2014-01-01
Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhou, Xiaoqing; Qin, Zhuanping; Zhao, Huijuan
2011-02-01
This article aims at the development of a fast inverse Monte Carlo (MC) simulation for the reconstruction of the optical properties (absorption coefficient μa and scattering coefficient μs) of cylindrical tissue, such as a cervix, from frequency-domain measurements of near-infrared diffuse light. Frequency-domain information (amplitude and phase) is extracted from time-domain MC with a modified method. To shorten the computation time in reconstruction of the optical properties, an efficient and fast forward MC has to be achieved. To do this, firstly, databases of the frequency-domain information over a range of μa and μs were pre-built by combining MC simulation with Lambert-Beer's law. Then, a double polynomial model was adopted to quickly obtain the frequency-domain information at any optical properties. Based on the fast forward MC, the optical properties can be quickly obtained in a nonlinear optimization scheme. Reconstructions from simulated data showed that the developed inverse MC method has advantages in both reconstruction accuracy and computation time. The relative errors in reconstruction of μa and μs are less than ±6% and ±12%, respectively, while the other coefficient (μs or μa) is held at a fixed value. When both μa and μs are unknown, the relative errors in reconstruction of the reduced scattering coefficient and absorption coefficient are mainly less than ±10% in the range of 45 < μs < 80 cm-1 and 0.25 < μa < 0.55 cm-1. With the rapid reconstruction strategy developed in this article, the computation time for reconstructing one set of optical properties is less than 0.5 second. Endoscopic measurements on two tubular solid phantoms were also carried out to evaluate the system and the inversion scheme. The results demonstrated that a relative error of less than 20% can be achieved.
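The surrogate-plus-optimizer structure described here can be sketched as follows. The Python below substitutes an arbitrary smooth analytic function for the pre-built MC database (the real tables come from time-domain Monte Carlo plus Lambert-Beer scaling), fits low-order polynomials in (μa, μs) as a stand-in for the double polynomial model, and inverts a synthetic measurement by nonlinear least squares; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Stand-in for the pre-built MC database: a made-up smooth mapping from
# optical properties (mua, mus) to frequency-domain amplitude and phase.
def forward_mc_database(mua, mus):
    amplitude = np.exp(-2.0 * mua) / (1.0 + 0.05 * mus)
    phase = 0.3 * np.sqrt(mus / (mua + 0.1))
    return amplitude, phase

# Fit low-order 2D polynomials to the tabulated values, so the forward
# model becomes a cheap polynomial evaluation.
mua_grid, mus_grid = np.meshgrid(np.linspace(0.25, 0.55, 13),
                                 np.linspace(45, 80, 15))
A, P = forward_mc_database(mua_grid, mus_grid)

def design(mua, mus):  # quadratic polynomial basis in (mua, mus)
    return np.stack([np.ones_like(mua), mua, mus, mua * mus,
                     mua ** 2, mus ** 2], axis=-1)

X = design(mua_grid.ravel(), mus_grid.ravel())
coef_A, *_ = np.linalg.lstsq(X, A.ravel(), rcond=None)
coef_P, *_ = np.linalg.lstsq(X, P.ravel(), rcond=None)

def fast_forward(params):
    mua, mus = params
    x = design(np.atleast_1d(mua), np.atleast_1d(mus))
    return np.array([x @ coef_A, x @ coef_P]).ravel()

# Invert a "measurement" by nonlinear least squares on the surrogate.
truth = (0.4, 60.0)
measured = np.array(forward_mc_database(*truth))
fit = least_squares(lambda p: fast_forward(p) - measured,
                    x0=[0.3, 50.0], bounds=([0.25, 45], [0.55, 80]))
print("recovered mua=%.3f, mus=%.1f (truth 0.400, 60.0)" % tuple(fit.x))
```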
Systems, methods and apparatus for verification of knowledge-based systems
NASA Technical Reports Server (NTRS)
Rash, James L. (Inventor); Gracinin, Denis (Inventor); Erickson, John D. (Inventor); Rouff, Christopher A. (Inventor); Hinchey, Michael G. (Inventor)
2010-01-01
Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.
Sure, Rebecca; Brandenburg, Jan Gerit
2015-01-01
In quantum chemical computations the combination of Hartree–Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double‐zeta quality is still widely used, for example, in the popular B3LYP/6‐31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean‐field methods. PMID:27308221
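The correction schemes of this kind share a simple additive structure, which can be written schematically (a sketch; the precise functional forms and parameters belong to the individual methods the review covers, e.g. atom-pairwise D3 dispersion and geometric counterpoise (gCP) corrections):

```latex
E_{\text{total}} \approx E^{\text{HF/DFT}}_{\text{small basis}}
  \;+\; E_{\text{disp}}^{\text{D3}}   % London dispersion correction
  \;+\; E_{\text{gCP}}                % geometric counterpoise (BSSE) correction
```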
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2009-01-01
In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan
2017-10-01
An optimized method to calculate the error correction capability of a tool influence function (TIF) under given polishing conditions is proposed, based on the smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under given polishing conditions. Comparative analysis shows that the optimized method is simpler in form and achieves the same accuracy as the previous method with less computation time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van
2014-11-15
In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed, showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on a semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.
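For intuition, the basic slab-geometry phase estimator can be written in a few lines. The sketch below assumes pure diffusion (V = 0, no damping), where a heat wave behaves as exp(i(ωt − kx)) with phase slope dφ/dx = √(ω/2χ); the paper's two-harmonic estimators refine this to remain accurate under V and τ and to reduce calibration sensitivity. All numbers are synthetic.

```python
import numpy as np

# Simplified slab-geometry estimator, assuming pure diffusion (V = 0,
# tau -> infinity): chi = omega / (2 * (dphi/dx)**2). This is an
# illustration of the idea, not the paper's exact estimator.
def chi_from_phase(omega, phi_at_x1, phi_at_x2, dx):
    dphi_dx = (phi_at_x2 - phi_at_x1) / dx
    return omega / (2.0 * dphi_dx ** 2)

chi_true = 1.5             # m^2/s, synthetic
omega1 = 2 * np.pi * 25.0  # modulation frequency (rad/s), synthetic
dx = 0.05                  # distance between two measurement radii (m)

estimates = []
for omega in (omega1, 2 * omega1):          # first two harmonics
    k = np.sqrt(omega / (2 * chi_true))     # true phase slope for this harmonic
    phi1, phi2 = 0.0, -k * dx               # phases at the two radii
    estimates.append(chi_from_phase(omega, phi1, phi2, dx))
print("chi estimates per harmonic:", estimates)  # both recover ~1.5
```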
Architecture for time or transform domain decoding of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)
1989-01-01
Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm, for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
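Both decoder variants start from the same syndrome computation S(x), which is easy to sketch. The Python below evaluates the received polynomial at successive powers of α over GF(2^8) built from the common primitive polynomial 0x11d; the field and the number of syndromes here are illustrative rather than the patent's exact parameters.

```python
# Syndrome computation over GF(2^8), the shared first step of both the
# time domain and transform domain RS decoders: S_j = r(alpha^j).
# Field generated by x^8 + x^4 + x^3 + x^2 + 1 (0x11d); illustrative only.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, num_syndromes):
    """Evaluate the received polynomial at alpha^1 .. alpha^t.

    All-zero syndromes mean no detectable error; otherwise they feed the
    errata locator/evaluator computation (the modified GCD stage).
    """
    out = []
    for j in range(1, num_syndromes + 1):
        s = 0
        for coeff in received:            # Horner evaluation at alpha^j
            s = gf_mul(s, EXP[j]) ^ coeff
        out.append(s)
    return out

codeword = [0] * 255                      # the all-zero codeword is valid
print(syndromes(codeword, 4))             # -> [0, 0, 0, 0]
codeword[10] ^= 0x42                      # inject a single symbol error
print(syndromes(codeword, 4))             # nonzero -> error detected
```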
Photomask CD and LER characterization using Mueller matrix spectroscopic ellipsometry
NASA Astrophysics Data System (ADS)
Heinrich, A.; Dirnstorfer, I.; Bischoff, J.; Meiner, K.; Ketelsen, H.; Richter, U.; Mikolajick, T.
2014-10-01
Critical dimension and line edge roughness on photomask arrays are determined with Mueller matrix spectroscopic ellipsometry. Arrays with large sinusoidal perturbations are measured at different azimuth angles and compared with simulations based on rigorous coupled wave analysis. Experiment and simulation show that line edge roughness leads to characteristic changes in the different Mueller matrix elements. The influence of line edge roughness is interpreted as an increase in the isotropic character of the sample. The changes in the Mueller matrix elements are very similar when the arrays are statistically perturbed with rms roughness values in the nanometer range, suggesting that the results on the sinusoidal test structures are also relevant for "real" mask errors. Critical dimension errors and line edge roughness have a similar impact on the Mueller matrix measurement. To distinguish between both deviations, a strategy based on the calculation of sensitivities and correlation coefficients for all Mueller matrix elements is shown. The Mueller matrix elements M13/M31 and M34/M43 are the most suitable elements due to their high sensitivities to critical dimension errors and line edge roughness and, at the same time, a low correlation coefficient between both influences. From the simulated sensitivities, it is estimated that the measurement accuracy has to be on the order of 0.01 and 0.001 for the detection of a 1 nm critical dimension error and 1 nm line edge roughness, respectively.
Error detection and reduction in blood banking.
Motschman, T L; Moore, S B
1996-12-01
Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude, with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as for active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition of reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report covers who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility, a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.
Changing the Culture of Academic Medicine: Critical Mass or Critical Actors?
Helitzer, Deborah L; Newbill, Sharon L; Cardinali, Gina; Morahan, Page S; Chang, Shine; Magrane, Diane
2017-05-01
By 2006, women constituted 34% of academic medical faculty, reaching a critical mass. Theoretically, with critical mass, culture and policy supportive of gender equity should be evident. We explore whether having a critical mass of women transforms institutional culture and organizational change. Career development program participants were interviewed to elucidate their experiences in academic health centers (AHCs). Focus group discussions were held with institutional leaders to explore their perceptions about contemporary challenges related to gender and leadership. Content analysis of both data sources revealed points of convergence. Findings were interpreted using the theory of critical mass. Two nested domains emerged: the individual domain included the rewards and personal satisfaction of meaningful work, personal agency, tensions between cultural expectations of family and academic roles, and women's efforts to work for gender equity. The institutional domain depicted the sociocultural environment of AHCs that shaped women's experience, both personally and professionally, lack of institutional strategies to engage women in organizational initiatives, and the influence of one leader on women's ascent to leadership. The predominant evidence from this research demonstrates that the institutional barriers and sociocultural environment continue to be formidable obstacles confronting women, stalling the transformational effects expected from achieving a critical mass of women faculty. We conclude that the promise of critical mass as a turning point for women should be abandoned in favor of "critical actor" leaders, both women and men, who individually and collectively have the commitment and power to create gender-equitable cultures in AHCs.
Analyzing Software Errors in Safety-Critical Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1994-01-01
This paper analyzes the root causes of safety-related software faults. Software faults identified as potentially hazardous to the system are found to be distributed somewhat differently over the set of possible error causes than non-safety-related software faults.
Criticality of Adaptive Control Dynamics
NASA Astrophysics Data System (ADS)
Patzelt, Felix; Pawelzik, Klaus
2011-12-01
We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.
The Computer Revolution and Physical Chemistry.
ERIC Educational Resources Information Center
O'Brien, James F.
1989-01-01
Describes laboratory-oriented software programs that are short and time-saving, eliminate computational errors, and are not found in public domain courseware. Program availability for IBM and Apple microcomputers is included. (RT)
The association between EMS workplace safety culture and safety outcomes.
Weaver, Matthew D; Wang, Henry E; Fairbanks, Rollin J; Patterson, Daniel
2012-01-01
Prior studies have highlighted wide variation in emergency medical services (EMS) workplace safety culture across agencies. To determine the association between EMS workplace safety culture scores and patient or provider safety outcomes. We administered a cross-sectional survey to EMS workers affiliated with a convenience sample of agencies. We recruited these agencies from a national EMS management organization. We used the EMS Safety Attitudes Questionnaire (EMS-SAQ) to measure workplace safety culture and the EMS Safety Inventory (EMS-SI), a tool developed to capture self-reported safety outcomes from EMS workers. The EMS-SAQ provides reliable and valid measures of six domains: safety climate, teamwork climate, perceptions of management, working conditions, stress recognition, and job satisfaction. A panel of medical directors, emergency medical technicians and paramedics, and occupational epidemiologists developed the EMS-SI to measure self-reported injury, medical errors and adverse events, and safety-compromising behaviors. We used hierarchical linear models to evaluate the association between EMS-SAQ scores and EMS-SI safety outcome measures. Sixteen percent of all respondents reported experiencing an injury in the past three months, four of every 10 respondents reported an error or adverse event (AE), and 89% reported safety-compromising behaviors. Respondents reporting injury scored lower on five of the six domains of safety culture. Respondents reporting an error or AE scored lower for four of the six domains, while respondents reporting safety-compromising behavior had lower safety culture scores for five of the six domains. Individual EMS worker perceptions of workplace safety culture are associated with composite measures of patient and provider safety outcomes. This study is preliminary evidence of the association between safety culture and patient or provider safety outcomes.
The association between EMS workplace safety culture and safety outcomes
Weaver, Matthew D.; Wang, Henry E.; Fairbanks, Rollin J.; Patterson, Daniel
2012-01-01
Objective Prior studies have highlighted wide variation in EMS workplace safety culture across agencies. We sought to determine the association between EMS workplace safety culture scores and patient or provider safety outcomes. Methods We administered a cross-sectional survey to EMS workers affiliated with a convenience sample of agencies. We recruited these agencies from a national EMS management organization. We used the EMS Safety Attitudes Questionnaire (EMS-SAQ) to measure workplace safety culture and the EMS Safety Inventory (EMS-SI), a tool developed to capture self-reported safety outcomes from EMS workers. The EMS-SAQ provides reliable and valid measures of six domains: safety climate, teamwork climate, perceptions of management, perceptions of working conditions, stress recognition, and job satisfaction. A panel of medical directors, paramedics, and occupational epidemiologists developed the EMS-SI to measure self-reported injury, medical errors and adverse events, and safety-compromising behaviors. We used hierarchical linear models to evaluate the association between EMS-SAQ scores and EMS-SI safety outcome measures. Results Sixteen percent of all respondents reported experiencing an injury in the past 3 months, four of every 10 respondents reported an error or adverse event (AE), and 90% reported safety-compromising behaviors. Respondents reporting injury scored lower on 5 of the 6 domains of safety culture. Respondents reporting an error or AE scored lower for 4 of the 6 domains, while respondents reporting safety-compromising behavior had lower safety culture scores for 5 of 6 domains. Conclusions Individual EMS worker perceptions of workplace safety culture are associated with composite measures of patient and provider safety outcomes. This study is preliminary evidence of the association between safety culture and patient or provider safety outcomes. PMID:21950463
Neural evidence for description dependent reward processing in the framing effect.
Yu, Rongjun; Zhang, Ping
2014-01-01
Human decision making can be influenced by emotionally valenced contexts, known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback related negativity (FRN), which indexes the "worse than expected" negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to "better than expected" positive prediction error, the FRN did not differentiate the positive and negative frame in the loss domain. Our results provide neural evidence that the description invariance principle which states that reward representation and decision making are not influenced by how options are presented is violated in the framing effect.
Calibration and filtering strategies for frequency domain electromagnetic data
Minsley, Burke J.; Smith, Bruce D.; Hammack, Richard; Sams, James I.; Veloski, Garret
2010-01-01
Techniques for processing frequency-domain electromagnetic (FDEM) data that address systematic instrument errors and random noise are presented, improving the ability to invert these data for meaningful earth models that can be quantitatively interpreted. A least-squares calibration method, originally developed for airborne electromagnetic datasets, is implemented for a ground-based survey in order to address systematic instrument errors, and new insights are provided into the importance of calibration for preserving spectral relationships within the data that lead to more reliable inversions. An alternative filtering strategy based on principal component analysis, which takes advantage of the strong correlation observed in FDEM data, is introduced to help address random noise in the data without imposing somewhat arbitrary spatial smoothing.
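The principal-component idea can be demonstrated on synthetic multichannel data. The sketch below (illustrative, not the authors' survey data or exact procedure) exploits the strong inter-channel correlation of FDEM soundings: most coherent signal concentrates in a few singular vectors, so truncating the rest suppresses random noise without explicit spatial smoothing.

```python
import numpy as np

# PCA/SVD-truncation denoising of correlated multichannel soundings.
# All data below are synthetic.
rng = np.random.default_rng(0)
n_stations, n_channels = 500, 6

# A smooth "geology" profile drives all channels coherently, with
# channel-dependent gains; random noise is added on top.
profile = np.cumsum(rng.normal(size=n_stations))
gains = np.linspace(1.0, 2.5, n_channels)
clean = np.outer(profile, gains)
noisy = clean + rng.normal(scale=2.0, size=clean.shape)

# Centre, decompose, and keep only the top-k principal components.
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 1                                     # retained components
denoised = (U[:, :k] * s[:k]) @ Vt[:k] + mean

print("rms error, noisy:    %.3f" % np.sqrt(((noisy - clean) ** 2).mean()))
print("rms error, denoised: %.3f" % np.sqrt(((denoised - clean) ** 2).mean()))
```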
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To balance the performance of time-domain least squares (LS) channel estimation against its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a set of simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problems, so there is no need for a large matrix pseudo-inverse, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method, with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion, is better than that of the time-domain LS estimator and approaches optimal performance.
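The SISO sub-problem that each MIMO link reduces to is the standard per-subcarrier LS estimate, sketched below for a single transmit/receive antenna pair with a synthetic channel (illustrative parameters; with time-orthogonal training, each link is estimated this way independently, so no large pseudo-inverse arises).

```python
import numpy as np

# Per-subcarrier LS channel estimate for one antenna pair -- the
# SISO-OFDM sub-problem each MIMO link reduces to. Synthetic channel.
rng = np.random.default_rng(1)
n_sc = 64                                  # OFDM subcarriers
taps = rng.normal(size=4) + 1j * rng.normal(size=4)   # 4-tap channel
H_true = np.fft.fft(taps, n_sc)            # frequency response

pilots = np.exp(1j * np.pi * rng.integers(0, 4, n_sc) / 2)  # QPSK pilots
noise = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) * 0.05
Y = H_true * pilots + noise                # received pilot symbols

H_ls = Y / pilots                          # LS estimate, per subcarrier
print("mean |LS estimation error|: %.4f" % np.mean(np.abs(H_ls - H_true)))
```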
Effect of bird maneuver on frequency-domain helicopter EM response
Fitterman, D.V.; Yin, C.
2004-01-01
Bird maneuver, the rotation of the coil-carrying instrument pod used for frequency-domain helicopter electromagnetic surveys, changes the nominal geometric relationship between the bird-coil system and the ground. These changes affect electromagnetic coupling and can introduce errors in helicopter electromagnetic (HEM) data. We analyze these effects for a layered half-space for three coil configurations: vertical coaxial, vertical coplanar, and horizontal coplanar. Maneuver effect is shown to have two components: one that is purely geometric and another that is inductive in nature. The geometric component is significantly larger. A correction procedure is developed using an iterative approach that uses standard HEM inversion routines. The maneuver effect correction reduces inversion misfit error and produces laterally smoother cross sections than obtained from uncorrected data. © 2004 Society of Exploration Geophysicists. All rights reserved.
Astigmatism and early academic readiness in preschool children.
Orlansky, Gale; Wilmer, Jeremy; Taub, Marc B; Rutner, Daniella; Ciner, Elise; Gryczynski, Jan
2015-03-01
This study investigated the relationship between uncorrected astigmatism and early academic readiness in at-risk preschool-aged children. A vision screening and academic records review were performed on 122 three- to five-year-old children enrolled in the Philadelphia Head Start program. Vision screening results were related to two measures of early academic readiness, the teacher-reported Work Sampling System (WSS) and the parent-reported Ages and Stages Questionnaire (ASQ). Both measures assess multiple developmental and skill domains thought to be related to academic readiness. Children with astigmatism (defined as >|-0.25| in either eye) were compared with children who had no astigmatism. Associations between astigmatism and specific subscales of the WSS and ASQ were examined using parametric and nonparametric bivariate statistics and regression analyses controlling for age and spherical refractive error. Presence of astigmatism was negatively associated with multiple domains of academic readiness. Children with astigmatism had significantly lower mean scores on Personal and Social Development, Language and Literacy, and Physical Development domains of the WSS, and on Personal/Social, Communication, and Fine Motor domains of the ASQ. These differences between children with astigmatism and children with no astigmatism persisted after statistically adjusting for age and magnitude of spherical refractive error. Nonparametric tests corroborated these findings for the Language and Literacy and Physical Health and Development domains of the WSS and the Communication domain of the ASQ. The presence of astigmatism detected in a screening setting was associated with a pattern of reduced academic readiness in multiple developmental and educational domains among at-risk preschool-aged children. This study may help to establish the role of early vision screenings, comprehensive vision examinations, and the need for refractive correction to improve academic success in preschool children.
Multiple Interactions between Cytoplasmic Domains Regulate Slow Deactivation of Kv11.1 Channels*
Ng, Chai Ann; Phan, Kevin; Hill, Adam P.; Vandenberg, Jamie I.; Perry, Matthew D.
2014-01-01
The intracellular domains of many ion channels are important for fine-tuning their gating kinetics. In Kv11.1 channels, the slow kinetics of channel deactivation, which are critical for their function in the heart, are largely regulated by the N-terminal N-Cap and Per-Arnt-Sim (PAS) domains, as well as the C-terminal cyclic nucleotide-binding homology (cNBH) domain. Here, we use mutant cycle analysis to probe for functional interactions between the N-Cap/PAS domains and the cNBH domain. We identified a specific and stable charge-charge interaction between Arg56 of the PAS domain and Asp803 of the cNBH domain, as well as an additional interaction between the cNBH domain and the N-Cap, both of which are critical for maintaining slow deactivation kinetics. Furthermore, we found that positively charged arginine residues within the disordered region of the N-Cap interact with negatively charged residues of the C-linker domain. Although this interaction is likely more transient than the PAS-cNBH interaction, it is strong enough to stabilize the open conformation of the channel and thus slow deactivation. These findings provide novel insights into the slow deactivation mechanism of Kv11.1 channels. PMID:25074935
Munguia, Audelia; Federspiel, Mark J.
2008-01-01
We recently identified and cloned the receptor for subgroup C avian sarcoma and leukosis viruses [ASLV(C)], i.e., Tvc, a protein most closely related to mammalian butyrophilins, which are members of the immunoglobulin protein family. The extracellular domain of Tvc contains two immunoglobulin-like domains, IgV and IgC, which presumably each contain a disulfide bond important for native function of the protein. In this study, we have begun to identify the functional determinants of Tvc responsible for ASLV(C) receptor activity. We found that the IgV domain of the Tvc receptor is responsible for interacting with the glycoprotein of ASLV(C). Additional experiments demonstrated that a domain was necessary as a spacer between the IgV domain and the membrane-spanning domain for efficient Tvc receptor activity, most likely to orient the IgV domain a proper distance from the cell membrane. The effects on ASLV(C) glycoprotein binding and infection efficiency were also studied by site-directed mutagenesis of the cysteine residues of Tvc as well as conserved amino acid residues of the IgV Tvc domain compared to other IgV domains. In this initial analysis of Tvc determinants important for interacting with ASLV(C) glycoproteins, at least two aromatic amino acid residues in the IgV domain of Tvc, Trp-48 and Tyr-105, were identified as critical for efficient ASLV(C) infection. Interestingly, one or more aromatic amino acid residues have been identified as critical determinants in the other ASLV(A-E) receptors for a proper interaction with ASLV glycoproteins. This suggests that the ASLV glycoproteins may share a common mechanism of receptor interaction with an aromatic residue(s) on the receptor critical for triggering conformational changes in SU that initiate the fusion process required for efficient virus infection. PMID:18768966
Errors in Science and Their Treatment in Teaching Science
ERIC Educational Resources Information Center
Kipnis, Nahum
2011-01-01
This paper analyses the real origin and nature of scientific errors against claims of science critics, by examining a number of examples from the history of electricity and optics. This analysis leads to a conclusion that errors are a natural and unavoidable part of scientific process. If made available to students, through their science teachers,…
Idea Evaluation: Error in Evaluating Highly Original Ideas
ERIC Educational Resources Information Center
Licuanan, Brian F.; Dailey, Lesley R.; Mumford, Michael D.
2007-01-01
Idea evaluation is a critical aspect of creative thought. However, a number of errors might occur in the evaluation of new ideas. One error commonly observed is the tendency to underestimate the originality of truly novel ideas. In the present study, an attempt was made to assess whether analysis of the process leading to the idea generation and…
NASA Technical Reports Server (NTRS)
Izygon, Michel
1992-01-01
This report summarizes the findings and lessons learned from the development of an intelligent user interface for a space flight planning simulation program, in the specific area related to constraint-checking. The different functionalities of the Graphical User Interface part and of the rule-based part of the system have been identified. Their respective domain of applicability for error prevention and error checking have been specified.
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
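The reported exponential dependence of critical mutation rate on population size can be captured with a simple curve fit. The data points below are hypothetical, chosen only to show the model form u_c(N) = a·exp(−bN) + c, with small populations tolerating much lower mutation rates.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the exponential model u_c(N) = a * exp(-b * N) + c to pairs of
# (population size, critical mutation rate). Data are made up for
# illustration, not taken from the study.
def exp_model(N, a, b, c):
    return a * np.exp(-b * N) + c

N = np.array([25, 50, 100, 200, 400, 800])
u_crit = np.array([0.02, 0.035, 0.055, 0.07, 0.078, 0.08])  # hypothetical

params, _ = curve_fit(exp_model, N, u_crit, p0=(-0.08, 0.01, 0.08))
a, b, c = params
print("fit: u_c(N) = %.3f * exp(-%.4f N) + %.3f" % (a, b, c))
print("asymptotic critical mutation rate (large N): %.3f" % c)
```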
Improving Drive Files for Vehicle Road Simulations
NASA Astrophysics Data System (ADS)
Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil
2001-09-01
Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm, which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains, as well as the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat.
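A minimal frequency-domain version of the drive-file iteration, with the seat-FRF weighting, might look like the following. The rig dynamics, seat FRF, and target spectrum are all synthetic stand-ins; real controllers work with measured FRFs and time-domain replay, but the update structure is the same.

```python
import numpy as np

# Iterative drive-file correction: update the drive spectrum with the
# frequency-domain error between road target and rig response, weighted
# by the normalised seat FRF so the frequencies the seat actually
# transmits are corrected hardest. All transfer functions are synthetic.
rng = np.random.default_rng(2)
n = 1024
freq = np.fft.rfftfreq(n, d=1e-3)

H_rig = 1.0 / (1.0 + 1j * freq / 40.0)       # shaker/table dynamics
H_seat = np.abs(1.0 / (1.0 + 1j * freq / 15.0))
W = H_seat / H_seat.max()                    # normalised seat FRF weighting

target = np.fft.rfft(rng.normal(size=n))     # "road data" spectrum
drive = np.zeros_like(target)
gain = 0.8                                   # relaxation factor
for it in range(20):
    response = H_rig * drive                 # rig response to current drive
    error = target - response
    drive += gain * W * error / H_rig        # weighted inverse-model update
    rms = np.sqrt(np.mean(np.abs(W * error) ** 2))
print("weighted rms error after 20 iterations: %.2e" % rms)
```

Since the weighted error contracts by a factor (1 − gain·W) per pass at each frequency, frequencies the seat transmits converge quickly while frequencies it filters out, where accuracy matters less, are corrected only weakly.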
Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.
2002-01-01
Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.
Moreira, Maria E; Hernandez, Caleb; Stevens, Allen D; Jones, Seth; Sande, Margaret; Blumen, Jason R; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S
2015-08-01
The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Characterizing SH2 Domain Specificity and Network Interactions Using SPOT Peptide Arrays.
Liu, Bernard A
2017-01-01
Src Homology 2 (SH2) domains are protein interaction modules that recognize and bind tyrosine phosphorylated ligands. Their ability to distinguish binding to over thousands of potential phosphotyrosine (pTyr) ligands within the cell is critical for the fidelity of receptor tyrosine kinase (RTK) signaling. Within humans there are over a hundred SH2 domains with more than several thousand potential ligands across many cell types and cell states. Therefore, defining the specificity of individual SH2 domains is critical for predicting and identifying their physiological ligands. Here, in this chapter, I describe the broad use of SPOT peptide arrays for examining SH2 domain specificity. An orientated peptide array library (OPAL) approach can uncover both favorable and non-favorable residues, thus providing an in-depth analysis to SH2 specificity. Moreover, I discuss the application of SPOT arrays for paneling SH2 ligand binding with physiological peptides.
Frequency-domain full-waveform inversion with non-linear descent directions
NASA Astrophysics Data System (ADS)
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
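The claimed error orders can be seen in a one-parameter sketch (an illustration under a Born-like expansion assumption, not the authors' full derivation). If the data residual expands as δd = G₁Δs + G₂(Δs)² + O((Δs)³), then:

```latex
\begin{align*}
\widehat{\Delta s}_{\mathrm{GN}} &= G_1^{-1}\,\delta d
  = \Delta s + G_1^{-1}G_2\,(\Delta s)^2 + O\big((\Delta s)^3\big), \\
\widehat{\Delta s}_{2} &= G_1^{-1}\big(\delta d - G_2\,\widehat{\Delta s}_{\mathrm{GN}}^{\,2}\big)
  = \Delta s + O\big((\Delta s)^3\big),
\end{align*}
```

so including the second-order scattering term in the update pushes the leading error from quadratic to cubic in the relative jump Δs/s0, consistent with the convergence behaviour described above.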
Kv7.1 ion channels require a lipid to couple voltage sensing to pore opening.
Zaydman, Mark A; Silva, Jonathan R; Delaloye, Kelli; Li, Yang; Liang, Hongwu; Larsson, H Peter; Shi, Jingyi; Cui, Jianmin
2013-08-06
Voltage-gated ion channels generate dynamic ionic currents that are vital to the physiological functions of many tissues. These proteins contain separate voltage-sensing domains, which detect changes in transmembrane voltage, and pore domains, which conduct ions. Coupling of voltage sensing and pore opening is critical to the channel function and has been modeled as a protein-protein interaction between the two domains. Here, we show that coupling in Kv7.1 channels requires the lipid phosphatidylinositol 4,5-bisphosphate (PIP2). We found that voltage-sensing domain activation failed to open the pore in the absence of PIP2. This result is due to loss of coupling because PIP2 was also required for pore opening to affect voltage-sensing domain activation. We identified a critical site for PIP2-dependent coupling at the interface between the voltage-sensing domain and the pore domain. This site is actually a conserved lipid-binding site among different K(+) channels, suggesting that lipids play an important role in coupling in many ion channels.
Baykaner, Khan Richard; Huckvale, Mark; Whiteley, Iya; Andreeva, Svetlana; Ryumin, Oleg
2015-01-01
Automatic systems for estimating operator fatigue have application in safety-critical environments. A system which could estimate level of fatigue from speech would have application in domains where operators engage in regular verbal communication as part of their duties. Previous studies on the prediction of fatigue from speech have been limited because of their reliance on subjective ratings and because they lack comparison to other methods for assessing fatigue. In this paper, we present an analysis of voice recordings and psychophysiological test scores collected from seven aerospace personnel during a training task in which they remained awake for 60 h. We show that voice features and test scores are affected by both the total time spent awake and the time position within each subject’s circadian cycle. However, we show that time spent awake and time-of-day information are poor predictors of the test results, while voice features can give good predictions of the psychophysiological test scores and sleep latency. Mean absolute errors of prediction are possible within about 17.5% for sleep latency and 5–12% for test scores. We discuss the implications for the use of voice as a means to monitor the effects of fatigue on cognitive performance in practical applications. PMID:26380259
The critical domain size of stochastic population models.
Reimer, Jody R; Bonsall, Michael B; Maini, Philip K
2017-02-01
Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
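An individual-based sketch of the critical-domain-size question (illustrative parameters, not the paper's models): each generation, individuals produce a Poisson number of offspring that disperse with a Gaussian kernel on [0, L], and any settler landing outside the domain is lost. Estimating persistence probability as a function of L locates the transition.

```python
import numpy as np

# IBM-style sketch: Poisson reproduction (demographic stochasticity),
# Gaussian dispersal, absorption outside [0, L]. Parameters illustrative.
rng = np.random.default_rng(3)

def persists(L, generations=50, mean_offspring=1.8, sigma=1.0, cap=5000):
    x = rng.uniform(0, L, size=10)                    # founders
    for _ in range(generations):
        counts = rng.poisson(mean_offspring, size=x.size)
        kids = np.repeat(x, counts) + rng.normal(0, sigma, counts.sum())
        x = kids[(kids >= 0) & (kids <= L)]           # absorbed outside
        if x.size == 0:
            return False                              # extinction
        if x.size > cap:
            return True                               # clearly persistent
    return x.size > 0

for L in (1.0, 2.0, 4.0, 8.0):
    p = np.mean([persists(L) for _ in range(200)])
    print(f"L = {L:4.1f}: persistence probability ~ {p:.2f}")
```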
Digital filtering of plume emission spectra
NASA Technical Reports Server (NTRS)
Madzsar, George C.
1990-01-01
Fourier transformation and digital filtering techniques were used to separate the superpositioned spectral phenomena observed in the exhaust plumes of liquid propellant rocket engines. Space shuttle main engine (SSME) spectral data were used to show that extraction of spectral lines in the spatial frequency domain does not introduce error, and extraction of the background continuum introduces only minimal error. Error introduced during band extraction could not be quantified due to poor spectrometer resolution. Based on the atomic and molecular species found in the SSME plume, it was determined that spectrometer resolution must be 0.03 nm for SSME plume spectral monitoring.
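The separation the paper performs can be illustrated with a synthetic spectrum: a broad continuum occupies low "spatial frequencies" along the wavelength axis, narrow atomic lines occupy high ones, and a hard split in the Fourier domain recovers each component (Python sketch with made-up lines, not SSME data; the split index is hand-tuned).

```python
import numpy as np

# Fourier-domain separation of narrow lines from a broad continuum.
# The spectrum below is synthetic.
wavelength = np.linspace(300, 800, 4096)          # nm
continuum = np.exp(-((wavelength - 550) / 200.0) ** 2)
lines = sum(a * np.exp(-((wavelength - c) / 0.5) ** 2)
            for a, c in [(0.8, 589.0), (0.5, 431.0), (0.6, 656.3)])
spectrum = continuum + lines

F = np.fft.rfft(spectrum)
cutoff = 40                                       # split index (hand-tuned)
low, high = F.copy(), F.copy()
low[cutoff:] = 0                                  # keep slow variation only
high[:cutoff] = 0                                 # keep narrow features only
continuum_est = np.fft.irfft(low, n=spectrum.size)
lines_est = np.fft.irfft(high, n=spectrum.size)

print("continuum rms error: %.3e" %
      np.sqrt(np.mean((continuum_est - continuum) ** 2)))
print("line rms error:      %.3e" %
      np.sqrt(np.mean((lines_est - lines) ** 2)))
```

The small residual in the recovered continuum mirrors the abstract's observation that extracting the background continuum introduces only minimal error, while the narrow lines carry high-frequency content that the split preserves.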
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Han; Rahman, Sadia; Li, Wen
2015-03-27
A novel domain, GATE (Glycine-loop And Transducer Element), is identified in the ABC protein DrrA. This domain shows sequence and structural conservation among close homologs of DrrA as well as distantly-related ABC proteins. Among the highly conserved residues in this domain are three glycines, G215, G221 and G231, of which G215 was found to be critical for stable expression of the DrrAB complex. Other conserved residues, including E201, G221, K227 and G231, were found to be critical for the catalytic and transport functions of the DrrAB transporter. Structural analysis of both the previously published crystal structure of the DrrA homolog MalK and the modeled structure of DrrA showed that G215 makes close contacts with residues in and around the Walker A motif, suggesting that these interactions may be critical for maintaining the integrity of the ATP binding pocket as well as the complex. It is also shown that G215A or K227R mutation diminishes some of the atomic interactions essential for ATP catalysis and overall transport function. Therefore, based on both the biochemical and structural analyses, it is proposed that the GATE domain, located outside of the previously identified ATP binding and hydrolysis motifs, is an additional element involved in ATP catalysis. - Highlights: • A novel domain ‘GATE’ is identified in the ABC protein DrrA. • GATE shows high sequence and structural conservation among diverse ABC proteins. • GATE is located outside of the previously studied ATP binding and hydrolysis motifs. • Conserved GATE residues are critical for stability of DrrAB and for ATP catalysis.
Different domains are critical for oligomerization compatibility of different connexins
MARTÍNEZ, Agustín D.; MARIPILLÁN, Jaime; ACUÑA, Rodrigo; MINOGUE, Peter J.; BERTHOUD, Viviana M.; BEYER, Eric C.
2011-01-01
Oligomerization of connexins is a critical step in gap junction channel formation. Some members of the connexin family can oligomerize with other members and form functional heteromeric hemichannels [e.g. Cx43 (connexin 43) and Cx45], but others are incompatible (e.g. Cx43 and Cx26). To find connexin domains important for oligomerization, we constructed chimaeras between Cx43 and Cx26 and studied their ability to oligomerize with wild-type Cx43, Cx45 or Cx26. HeLa cells co-expressing Cx43, Cx45 or Cx26 and individual chimaeric constructs were analysed for interactions between the chimaeras and the wild-type connexins using cell biological (subcellular localization by immunofluorescence), functional (intercellular diffusion of microinjected Lucifer yellow) and biochemical (sedimentation velocity through sucrose gradients) assays. All of the chimaeras containing the third transmembrane domain of Cx43 interacted with wild-type Cx43 on the basis of co-localization, dominant-negative inhibition of intercellular communication, and altered sedimentation velocity. The same chimaeras also interacted with co-expressed Cx45. In contrast, immunofluorescence and intracellular diffusion of tracer suggested that other domains influenced oligomerization compatibility when chimaeras were co-expressed with Cx26. Taken together, these results suggest that amino acids in the third transmembrane domain are critical for oligomerization with Cx43 and Cx45. However, motifs in different domains may determine oligomerization compatibility in members of different connexin subfamilies. PMID:21348854
Gauvin, Hanna S; De Baene, Wouter; Brass, Marcel; Hartsuiker, Robert J
2016-02-01
To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated whether internal verbal monitoring takes place through the speech perception system, as proposed by perception-based theories of speech monitoring, or whether mechanisms independent of perception are applied, as proposed by production-based theories of speech monitoring. With the use of fMRI during a tongue twister task we observed that error detection in internal speech during noise-masked overt speech production and error detection in speech perception both recruit the same neural network, which includes pre-supplementary motor area (pre-SMA), dorsal anterior cingulate cortex (dACC), anterior insula (AI), and inferior frontal gyrus (IFG). Although production and perception recruit similar areas, as proposed by perception-based accounts, we did not find activation in superior temporal areas (which are typically associated with speech perception) during internal speech monitoring in speech production as hypothesized by these accounts. On the contrary, results are highly compatible with a domain general approach to speech monitoring, by which internal speech monitoring takes place through detection of conflict between response options, which is subsequently resolved by a domain general executive center (e.g., the ACC). Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2017-01-01
This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^-2] to O[10^-7].
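The polynomial damping profiles described are easy to reproduce. The sketch below is a minimal illustration, assuming a profile that is zero in the interior and rises as a power of distance into the layer; the amplitude, widths, and the exponential update are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def damping_profile(x, x0, width, amplitude, power):
    """Polynomial damping coefficient: zero in the interior (|x| <= x0) and
    rising to `amplitude` at the outer edge of the layer (power = 2, 4, 6, 8)."""
    s = np.clip((np.abs(x) - x0) / width, 0.0, 1.0)
    return amplitude * s ** power

# apply as an extra term -sigma(x)*q when advancing the solution q in time
x = np.linspace(-1.2, 1.2, 481)        # interior |x| <= 1, layer width 0.2
sigma = damping_profile(x, 1.0, 0.2, 50.0, 4)
q = np.exp(-100 * x**2)                # sample field
dt = 1e-3
q_damped = q * np.exp(-sigma * dt)     # one time step of pure damping
```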
Yang, Chao-Bo; He, Ping; Escofet-Martin, David; Peng, Jiang-Bo; Fan, Rong-Wei; Yu, Xin; Dunn-Rankin, Derek
2018-01-10
In this paper, three ultrashort-pulse coherent anti-Stokes Raman scattering (CARS) thermometry approaches are summarized with a theoretical time-domain model. The differences between the approaches can be attributed to variations in the input field characteristics of the time-domain model. That is, all three approaches to ultrashort-pulse CARS thermometry can be simulated with the unified model by changing only the input field features. As a specific example, hybrid femtosecond/picosecond CARS is assessed for its use in combustion flow diagnostics; thus, the examination of the input field's impact on thermometry focuses on vibrational hybrid femtosecond/picosecond CARS. Beginning with the general model of ultrashort-pulse CARS, spectra with different input field parameters are simulated. To analyze the temperature measurement error introduced by the input fields, the spectra are fitted and compared to fits with a model that neglects the influence of the input fields. The results demonstrate that, however the input pulses are characterized, temperature errors would still be introduced during an experiment. With proper field characterization, however, the significance of the error can be reduced.
Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi
2015-10-01
The measured distance error caused by double peaks in BOTDR (Brillouin optical time domain reflectometer) systems is a consequence of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in this paper, to the best of the authors' knowledge. The double peak, as a kind of Brillouin spectrum deformation, is important in the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to the variance of the peak powers of the BSS along the fiber, the measured starting point of a step-shaped frequency transition region is shifted, which results in distance errors. Zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method based on double-peak detection and the corresponding BSS deformation can be applied to calculate the real starting point, which can improve the distance accuracy of STFT-based BOTDR systems.
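A zero-padded STFT of the kind described can be sketched with scipy. In the minimal example below, a synthetic trace with a step-shaped frequency transition stands in for BOTDR data (the signal, rates, and threshold are assumptions); nfft larger than nperseg supplies the zero padding, and the peak frequency per time bin locates the transition.

```python
import numpy as np
from scipy.signal import stft

fs = 1e3                        # illustrative sampling rate
t = np.arange(0, 2.0, 1 / fs)
# step-shaped frequency transition, a crude stand-in for a BOTDR trace
f_inst = np.where(t < 1.0, 100.0, 150.0)
signal = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

# nfft > nperseg zero-pads each segment, refining the frequency grid
f, tau, Z = stft(signal, fs=fs, nperseg=64, noverlap=48, nfft=512)
peak_freq = f[np.abs(Z).argmax(axis=0)]          # peak frequency per time bin
transition_idx = np.argmax(peak_freq > 125.0)    # first bin past the step
print("estimated transition time:", tau[transition_idx])
```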
NASA Astrophysics Data System (ADS)
Cole, Matthew O. T.; Shinonawanik, Praween; Wongratanaphisan, Theeraphong
2018-05-01
Structural flexibility can impact negatively on machine motion control systems by causing unmeasured positioning errors and vibration at locations where accurate motion is important for task execution. To compensate for these effects, command signal prefiltering may be applied. In this paper, a new FIR prefilter design method is described that combines finite-time vibration cancellation with dynamic compensation properties. The time-domain formulation exploits the relation between tracking error and the moment values of the prefilter impulse response function. Optimal design solutions for filters having minimum H2 norm are derived and evaluated. The control approach does not require additional actuation or sensing and can be effective even without complete and accurate models of the machine dynamics. Results from implementation and testing on an experimental high-speed manipulator having a Delta robot architecture with directionally compliant end-effector are presented. The results show the importance of prefilter moment values for tracking performance and confirm that the proposed method can achieve significant reductions in both peak and RMS tracking error, as well as settling time, for complex motion patterns.
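The paper's link between tracking error and the moments of the prefilter impulse response suggests a small worked example. The sketch below is illustrative, not the authors' H2-optimal design: it computes the moments of an FIR filter, where a unit zeroth moment preserves static gain and the first moment is the delay the prefilter introduces.

```python
import numpy as np

def impulse_response_moments(h, dt, k_max=3):
    """Moments m_k = sum_n (n*dt)**k * h[n] of an FIR impulse response.
    m_0 = 1 gives unity static gain; m_1 is the effective delay the
    prefilter introduces into the tracking loop."""
    n = np.arange(len(h))
    return np.array([np.sum(((n * dt) ** k) * h) for k in range(k_max + 1)])

# illustrative example: a zero-vibration (ZV) style two-impulse shaper
dt = 1e-3
h = np.zeros(201)
h[0], h[200] = 0.5, 0.5                       # two impulses 0.2 s apart
print(impulse_response_moments(h, dt))        # m0 = 1, m1 = 0.1 s delay
```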
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
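A patch-pair 3-D CNN of the general kind described might look like the following PyTorch sketch; the layer sizes, patch size, and regression head are assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class PatchErrorNet(nn.Module):
    """Sketch of a 3-D CNN mapping a pair of image patches (2 channels)
    to a scalar registration-error estimate, in the spirit of the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(64, 1)

    def forward(self, fixed_moving_pair):          # (B, 2, D, H, W)
        z = self.features(fixed_moving_pair).flatten(1)
        return self.regressor(z).squeeze(1)        # error estimate per patch

net = PatchErrorNet()
patches = torch.randn(4, 2, 32, 32, 32)            # hypothetical patch pairs
print(net(patches).shape)                          # torch.Size([4])
```

Applying such a network to patches centered on every voxel, as the abstract describes, would yield a dense error map.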
Passive acquisition of CLIPS rules
NASA Technical Reports Server (NTRS)
Kovarik, Vincent J., Jr.
1991-01-01
The automated acquisition of knowledge by machine has not lived up to expectations, and knowledge engineering remains a human-intensive task. Part of the reason for the lack of success is the difference in the cognitive focus of the expert. The expert must shift his or her focus from the subject domain to that of the representation environment, and this cognitive shift introduces opportunity for errors and omissions. Presented here is work that observes the expert interacting with a simulation of the domain. The system logs changes in the simulation objects and the expert's actions in response to those changes. This is followed by the application of inductive reasoning to generalize the observed domain-specific rules into general domain rules.
NASA Astrophysics Data System (ADS)
Tang, L.; Hossain, F.
2009-12-01
Understanding the error characteristics of satellite rainfall data at different spatial/temporal scales is critical, especially as the scheduled Global Precipitation Mission (GPM) plans to provide High Resolution Precipitation Products (HRPPs) at global scales. Satellite rainfall data contain errors which need ground validation (GV) data for characterization, while satellite rainfall data will be most useful in the regions that are lacking in GV. Therefore, a critical step is to develop a spatial interpolation scheme for transferring the error characteristics of satellite rainfall data from GV regions to non-GV regions. As a prelude to GPM, the TRMM Multi-satellite Precipitation Analysis (TMPA) products of 3B41RT and 3B42RT (Huffman et al., 2007) over the US, spanning a record of 6 years, are used as a representative example of satellite rainfall data. Next Generation Radar (NEXRAD) Stage IV rainfall data are used as the reference for GV data. Initial work by the authors (Tang et al., 2009, GRL) has shown promise in transferring error from GV to non-GV regions, based on a six-year climatologic average of satellite rainfall data assuming only 50% GV coverage. However, this transfer of error characteristics needs to be investigated for a range of GV data coverage. In addition, it is also important to investigate whether proxy-GV data from an accurate space-borne sensor, such as the TRMM PR (or the GPM DPR), can be leveraged for the transfer of error in sparsely gauged regions. The specific question we ask in this study is, “what is the minimum coverage of GV data required for the error transfer scheme to be implemented at acceptable accuracy at hydrologically relevant scales?” Three geostatistical interpolation methods are compared: ordinary kriging, indicator kriging and disjunctive kriging. Various error metrics are assessed for transfer, such as Probability of Detection for rain and no rain, False Alarm Ratio, Frequency Bias, Critical Success Index, RMSE, etc. Understanding the proper space-time scales at which these metrics can be reasonably transferred is also explored in this study. Keywords: satellite rainfall, error transfer, spatial interpolation, kriging methods.
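For readers who want to experiment with the interpolation step, ordinary kriging of a gridded error metric can be sketched with the open-source pykrige package (one common implementation; the abstract does not say what software the authors used). The sites and the False Alarm Ratio values below are synthetic.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
# hypothetical GV sites (lon, lat) with an error metric, e.g. False Alarm Ratio
lon = rng.uniform(-100, -90, 50)
lat = rng.uniform(30, 40, 50)
far = 0.2 + 0.05 * rng.normal(size=50)

ok = OrdinaryKriging(lon, lat, far, variogram_model="spherical")
grid_lon = np.linspace(-100, -90, 40)
grid_lat = np.linspace(30, 40, 40)
far_map, kriging_var = ok.execute("grid", grid_lon, grid_lat)
# kriging_var flags where the transferred error metric is least reliable
```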
ERIC Educational Resources Information Center
Torrance, E. Paul; And Others
This task group report is one of a series prepared by eminent psychologists who have served as consultants in the U.S. Office of Education-sponsored grant study to conduct a Critical Appraisal of the Personality-Emotion-motivation Domain. In order to achieve the goal of identifying important problems and areas for new research and methodological…
Some Challenges in the Design of Human-Automation Interaction for Safety-Critical Systems
NASA Technical Reports Server (NTRS)
Feary, Michael S.; Roth, Emilie
2014-01-01
Increasing amounts of automation are being introduced to safety-critical domains. While the introduction of automation has led to an overall increase in reliability and improved safety, it has also introduced a class of failure modes, and new challenges in risk assessment for the new systems, particularly in the assessment of rare events resulting from complex inter-related factors. Designing successful human-automation systems is challenging, and the challenges go beyond good interface development (e.g., Roth, Malin, & Schreckenghost 1997; Christoffersen & Woods, 2002). Human-automation design is particularly challenging when the underlying automation technology generates behavior that is difficult for the user to anticipate or understand. These challenges have been recognized in several safety-critical domains, and have resulted in increased efforts to develop training, procedures, regulations and guidance material (CAST, 2008, IAEA, 2001, FAA, 2013, ICAO, 2012). This paper points to the continuing need for new methods to describe and characterize the operational environment within which new automation concepts are being presented. We will describe challenges to the successful development and evaluation of human-automation systems in safety-critical domains, and describe some approaches that could be used to address these challenges. We will draw from experience with the aviation, spaceflight and nuclear power domains.
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Engelmann, Brett W
2017-01-01
The Src Homology 2 (SH2) domain family primarily recognizes phosphorylated tyrosine (pY) containing peptide motifs. The relative affinity preferences among competing SH2 domains for phosphopeptide ligands define "specificity space," and underpins many functional pY mediated interactions within signaling networks. The degree of promiscuity exhibited and the dynamic range of affinities supported by individual domains or phosphopeptides is best resolved by a carefully executed and controlled quantitative high-throughput experiment. Here, I describe the fabrication and application of a cellulose-peptide conjugate microarray (CPCMA) platform to the quantitative analysis of SH2 domain specificity space. Included herein are instructions for optimal experimental design with special attention paid to common sources of systematic error, phosphopeptide SPOT synthesis, microarray fabrication, analyte titrations, data capture, and analysis.
Nature of nursing errors and their contributing factors in intensive care units.
Eltaybani, Sameh; Mohamed, Nadia; Abdelwareth, Mona
2018-04-27
Errors tend to be multifactorial and so learning from nurses' experiences with them would be a powerful tool toward promoting patient safety. The aim was to identify the nature of nursing errors and their contributing factors in intensive care units (ICUs). A semi-structured interview was conducted with 112 critical care nurses to elicit reports about the errors they had encountered, followed by a content analysis. A total of 300 errors were reported. Most of them (94·3%) were classified in more than one error category; 'lack of intervention', 'lack of attentiveness' and 'documentation errors' were the most frequently involved error categories. Approximately 40% of reported errors contributed to significant harm or death of the involved patients, with system-related factors being involved in 84·3% of them. More errors occur during the evening shift than the night and morning shifts (42·7% versus 28·7% and 16·7%, respectively). There is a statistically significant relation (p ≤ 0·001) between error disclosure to a nursing supervisor and its impact on the patient. Nurses are more likely to report their errors when they feel safe and when the reporting system is not burdensome, although an internationally standardized language to define and analyse nursing errors is needed. Improving the health care system, particularly the managerial and environmental aspects, might reduce nursing errors in ICUs in terms of their incidence and seriousness. Targeting error-liable times in the ICU, such as mid-evening and mid-night shifts, along with improved supervision and adequate staff reallocation, might tackle the incidence and seriousness of nursing errors. Development of individualized nursing interventions for patients with low health literacy and patients in isolation might create more meaningful dialogue for ICU health care safety. © 2018 British Association of Critical Care Nurses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirouac, Kevin N.; Ling, Hong; UWO)
Human DNA polymerase iota (pol iota) is a unique member of the Y-family polymerases, which preferentially misincorporates nucleotides opposite thymines (T) and halts replication at T bases. The structural basis of the high error rates remains elusive. We present three crystal structures of pol iota complexed with DNA containing a thymine base, paired with correct or incorrect incoming nucleotides. A narrowed active site supports a pyrimidine to pyrimidine mismatch and excludes Watson-Crick base pairing by pol iota. The template thymine remains in an anti conformation irrespective of incoming nucleotides. Incoming ddATP adopts a syn conformation with reduced base stacking, whereas incorrect dGTP and dTTP maintain anti conformations with normal base stacking. Further stabilization of dGTP by H-bonding with Gln59 of the finger domain explains the preferential T to G mismatch. A template 'U-turn' is stabilized by pol iota and the methyl group of the thymine template, revealing the structural basis of T stalling. Our structural and domain-swapping experiments indicate that the finger domain is responsible for pol iota's high error rates on pyrimidines and determines the incorporation specificity.
NASA Astrophysics Data System (ADS)
Tsai, Chin-Chung
2001-07-01
Recently, educators have focused on students' internal control of learning. Epistemological commitments, metacognition, and critical thinking are relevant considerations when addressing this topic. This paper explores the relationships among these domains as a theoretical framework for enhancing chemistry education. The framework shows that these domains share many commonalities. For example, they all focus on learners' self-reflection and they all are rooted in the constructivist theory. This paper further proposes a role for Internet technology in helping students develop appropriate epistemological commitments, metacognitive skills, and critical thinking.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
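The two accuracy measures are simple functions of a 2x2 contingency table, as the sketch below shows; the counts are hypothetical.

```python
def forecast_scores(hits, false_alarms, misses, correct_negatives):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD)
    from a 2x2 contingency table of contrail occurrence forecasts."""
    total = hits + false_alarms + misses + correct_negatives
    pc = (hits + correct_negatives) / total
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    hkd = hit_rate - false_alarm_rate
    return pc, hkd

# hypothetical counts from dichotomizing a logistic model at a threshold
print(forecast_scores(hits=420, false_alarms=180,
                      misses=80, correct_negatives=4320))
```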
Stevens, Allen D.; Hernandez, Caleb; Jones, Seth; Moreira, Maria E.; Blumen, Jason R.; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S.
2016-01-01
Background Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. Methods We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded-syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Results Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28–39) seconds and 42 (95% CI: 36–51) seconds, respectively (difference = 9 [95% CI: 4–14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference = 39%, 95% CI: 13–61%). Conclusions A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. PMID:26247145
Stevens, Allen D; Hernandez, Caleb; Jones, Seth; Moreira, Maria E; Blumen, Jason R; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S
2015-11-01
Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28-39) seconds and 42 (95% CI: 36-51) seconds, respectively (difference=9 [95% CI: 4-14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference=39%, 95% CI: 13-61%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.
A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.
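Richardson extrapolation, used above to squeeze an extra order of convergence from the frequency errors, is a one-line combination of results at two resolutions. A minimal sketch follows, with hypothetical numbers and an assumed second-order leading error term.

```python
def richardson(coarse, fine, refinement=2, order=2):
    """Combine results at grid spacings h and h/refinement, assuming an
    error term C*h**order, to cancel the leading-order error."""
    r = refinement ** order
    return (r * fine - coarse) / (r - 1)

# illustrative use with resonant-frequency estimates at two resolutions
f_h, f_h2 = 1.0213, 1.0052      # hypothetical values; exact answer is 1.0
print(richardson(f_h, f_h2))    # closer to 1.0, roughly one order better
```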
Interferometry On Grazing Incidence Optics
NASA Astrophysics Data System (ADS)
Geary, Joseph; Maeda, Riki
1988-08-01
A preliminary interferometric procedure is described showing potential for obtaining surface figure error maps of grazing incidence optics at normal incidence. The latter are found in some laser resonator configurations, and in Wolter type X-ray optics. The procedure makes use of cylindrical wavefronts and error subtraction techniques over subapertures. The surface error maps obtained will provide critical information to opticians in the fabrication process.
Interferometry on grazing incidence optics
NASA Astrophysics Data System (ADS)
Geary, Joseph M.; Maeda, Riki
1987-12-01
An interferometric procedure is described that shows potential for obtaining surface figure error maps of grazing incidence optics at normal incidence. Such optics are found in some laser resonator configurations and in Wolter-type X-ray optics. The procedure makes use of cylindrical wavefronts and error subtraction techniques over subapertures. The surface error maps obtained will provide critical information to opticians for the fabrication process.
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations
NASA Technical Reports Server (NTRS)
Parrott, M. H.; Hinze, W. J.; Braile, L. W.; Vonfrese, R. R. B.
1985-01-01
Flat-Earth modeling is a desirable alternative to the complex spherical-Earth modeling process. These methods were compared using 2 1/2 dimensional flat-earth and spherical modeling to compute gravity and scalar magnetic anomalies along profiles perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Comparison was achieved with percent error computations (spherical-flat/spherical) at critical anomaly points. At the peak gravity anomaly value, errors are less than + or - 5% for all prisms. At 1/2 and 1/10 of the peak, errors are generally less than 10% and 40% respectively, increasing to these values with longer and wider prisms at higher altitudes. For magnetics, the errors at critical anomaly points are less than -10% for all prisms, attaining these magnitudes with longer and wider prisms at higher altitudes. In general, in both gravity and magnetic modeling, errors increase greatly for prisms wider than 500 km, although gravity modeling is more sensitive than magnetic modeling to spherical-Earth effects. Preliminary modeling of both satellite gravity and magnetic anomalies using flat-Earth assumptions is justified considering the errors caused by uncertainties in isolating anomalies.
Filtered Push: Annotating Distributed Data for Quality Control and Fitness for Use Analysis
NASA Astrophysics Data System (ADS)
Morris, P. J.; Kelly, M. A.; Lowery, D. B.; Macklin, J. A.; Morris, R. A.; Tremonte, D.; Wang, Z.
2009-12-01
The single greatest problem with the federation of scientific data is the assessment of the quality and validity of the aggregated data in the context of particular research problems, that is, its fitness for use. There are three critical data quality issues in networks of distributed natural science collections data, as in all scientific data: identifying and correcting errors, maintaining currency, and assessing fitness for use. To this end, we have designed and implemented a prototype network in the domain of natural science collections. This prototype is built over the open source Map-Reduce platform Hadoop with a network client in the open source collections management system Specify 6. We call this network “Filtered Push” as, at its core, annotations are pushed from the network edges to relevant authoritative repositories, where humans and software filter the annotations before accepting them as changes to the authoritative data. The Filtered Push software is a domain-neutral framework for originating, distributing, and analyzing record-level annotations. Network participants can subscribe to notifications arising from ontology-based analyses of new annotations or of purpose-built queries against the network's global history of annotations. Quality and fitness for use of distributed natural science collections data can be addressed with Filtered Push software by implementing a network that allows data providers and consumers to define potential errors in data, develop metrics for those errors, specify workflows to analyze distributed data to detect potential errors, and close the quality management cycle by providing a network architecture for pushing assertions about data quality, such as corrections, back to the curators of the participating data sets. Quality issues in distributed scientific data have several things in common: (1) Statements about data quality should be regarded as hypotheses about inconsistencies between perhaps several records, data sets, or practices of science. (2) Data quality problems often cannot be detected only from internal statistical correlations or logical analysis, but may need the application of defined workflows that signal illogical output. (3) Changes in scientific theory or practice over time can result in changes of what QC tests should be applied to legacy data. (4) The frequency of some classes of error in a data set may be identifiable without the ability to assert that a particular record is in error. Addressing these issues requires, as does science itself, framing QC hypotheses against data that may be anywhere and may arise at any time in the future. In short, QC for science data is a never-ending process. It must provide for notice to an agent (human or software) that a given dataset supports a hypothesis of inconsistency with a current scientific resource or model, or with potential generalizations of the concepts in a metadata ontology. Like quality control in general, quality control of distributed data is a repeated cyclical process. In implementing a Filtered Push network for quality control, we have a model in which the cost of QC forever is not substantially greater than QC once.
The genesis of errors in drawing.
Chamberlain, Rebecca; Wagemans, Johan
2016-06-01
The difficulty adults find in drawing objects or scenes from real life is puzzling, assuming that there are few gross individual differences in the phenomenology of visual scenes and in fine motor control in the neurologically healthy population. A review of research concerning the perceptual, motoric and memorial correlates of drawing ability was conducted in order to understand why most adults err when trying to produce faithful representations of objects and scenes. The findings reveal that accurate perception of the subject and of the drawing is at the heart of drawing proficiency, although not to the extent that drawing skill elicits fundamental changes in visual perception. Instead, the decisive role of representational decisions reveals the importance of appropriate segmentation of the visual scene and of the influence of pictorial schemas. This leads to the conclusion that domain-specific, flexible, top-down control of visual attention plays a critical role in development of skill in visual art and may also be a window into creative thinking. Copyright © 2016 Elsevier Ltd. All rights reserved.
Applying Formal Methods to NASA Projects: Transition from Research to Practice
NASA Technical Reports Server (NTRS)
Othon, Bill
2009-01-01
NASA project managers attempt to manage risk by relying on mature, well-understood process and technology when designing spacecraft. In the case of crewed systems, the margin for error is even tighter and leads to risk aversion. But as we look to future missions to the Moon and Mars, the complexity of the systems will increase as the spacecraft and crew work together with less reliance on Earth-based support. NASA will be forced to look for new ways to do business. Formal methods technologies can help NASA develop complex but cost effective spacecraft in many domains, including requirements and design, software development and inspection, and verification and validation of vehicle subsystems. To realize these gains, the technologies must be matured and field-tested so that they are proven when needed. During this discussion, current activities used to evaluate FM technologies for Orion spacecraft design will be reviewed. Also, suggestions will be made to demonstrate value to current designers, and mature the technology for eventual use in safety-critical NASA missions.
Azadeh, A; Mokhtari, Z; Sharahi, Z Jiryaei; Zarrin, M
2015-12-01
Decision-making failure is a predominant human error in emergency situations. To demonstrate the subject model, operators of an oil refinery were asked to answer a health, safety and environment (HSE) decision styles (DS) questionnaire. To achieve this purpose, qualitative indicators in the HSE and ergonomics domain were collected. The decision styles related to the questions were selected based on the Driver taxonomy of human decision making. Teamwork efficiency was assessed based on different decision style combinations, and the efficiency was ranked based on HSE performance. Results revealed that the efficient decision styles obtained from the data envelopment analysis (DEA) optimization model are consistent with the plant's dominant styles. Therefore, improvement in system performance could be achieved by using the best operator for critical posts or in team arrangements. This is the first study that identifies the best decision styles with respect to HSE and ergonomics factors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Evaluation of psychology practitioner competence in clinical supervision.
Gonsalvez, Craig J; Crowe, Trevor P
2014-01-01
There is a growing consensus favouring the development, advancement, and implementation of a competency-based approach for psychology training and supervision. There is wide recognition that skills, attitude-values, and relationship competencies are as critical to a psychologist's competence as are knowledge capabilities, and that these key competencies are best measured during placements, leaving the clinical supervisor in an unparalleled position of advantage to provide formative and summative evaluations on the supervisee's progression towards competence. Paradoxically, a compelling body of literature from across disciplines indicates that supervisor ratings of broad domains of competence are systematically compromised by biases, including leniency error and halo effect. The current paper highlights key issues affecting summative competency evaluations by supervisors: what competencies should be evaluated, who should conduct the evaluation, how (tools) and when evaluations should be conducted, and process variables that affect evaluation. The article concludes by providing research recommendations to underpin and promote future progress and by offering practice recommendations to facilitate a more credible and meaningful evaluation of competence and competencies.
Lippman, Sheri A.; Maman, Suzanne; MacPhail, Catherine; Twine, Rhian; Peacock, Dean; Kahn, Kathleen; Pettifor, Audrey
2013-01-01
Introduction Community mobilizing strategies are essential to health promotion and uptake of HIV prevention. However, there has been little conceptual work conducted to establish the core components of community mobilization, which are needed to guide HIV prevention programming and evaluation. Objectives We aimed to identify the key domains of community mobilization (CM) essential to change health outcomes or behaviors, and to determine whether these hypothesized CM domains were relevant to a rural South African setting. Method We studied social movements and community capacity, empowerment and development literatures, assessing common elements needed to operationalize HIV programs at a community level. After synthesizing these elements into six essential CM domains, we explored the salience of these CM domains qualitatively, through analysis of 10 key informant in-depth-interviews and seven focus groups in three villages in Bushbuckridge. Results CM domains include: 1) shared concerns, 2) critical consciousness, 3) organizational structures/networks, 4) leadership (individual and/or institutional), 5) collective activities/actions, and 6) social cohesion. Qualitative data indicated that the proposed domains tapped into theoretically consistent constructs comprising aspects of CM processes. Some domains, extracted from largely Western theory, required little adaptation for the South African context; others translated less effortlessly. For example, critical consciousness to collectively question and resolve community challenges functioned as expected. However, organizations/networks, while essential, operated differently than originally hypothesized - not through formal organizations, but through diffuse family networks. Conclusions To date, few community mobilizing efforts in HIV prevention have clearly defined the meaning and domains of CM prior to intervention design. We distilled six CM domains from the literature; all were pertinent to mobilization in rural South Africa. While some adaptation of specific domains is required, they provide an extremely valuable organizational tool to guide CM programming and evaluation of critically needed mobilizing initiatives in Southern Africa. PMID:24147121
Matsubara, Kazuo; Toyama, Akira; Satoh, Hiroshi; Suzuki, Hiroshi; Awaya, Toshio; Tasaki, Yoshikazu; Yasuoka, Toshiaki; Horiuchi, Ryuya
2011-04-01
It is obvious that pharmacists play a critical role as risk managers in the healthcare system, especially in medication treatment. Hitherto, there has been no multicenter survey report describing the effectiveness of clinical pharmacists in preventing medical errors from occurring in hospital wards in Japan. Thus, we conducted a 1-month survey (October 1-31, 2009) to elucidate the relationship between the number of errors and the working hours of pharmacists in the ward, and to verify whether the assignment of clinical pharmacists to the ward would prevent medical errors. Questionnaire items for the pharmacists at 42 national university hospitals and a medical institute included the total and the respective numbers of medication-related errors, beds and working hours of pharmacists in 2 internal medicine and 2 surgical departments in each hospital. Regardless of severity, errors were consecutively reported to the Medical Security and Safety Management Section in each hospital. The analysis of errors revealed that longer working hours of pharmacists in the ward resulted in fewer medication-related errors; this was especially significant in the internal medicine wards (where a variety of drugs were used) compared with the surgical wards. However, the nurse assignment mode (nurse/inpatients ratio: 1 : 7-10) did not influence the error frequency. The results of this survey strongly indicate that the assignment of clinical pharmacists to the ward is critically essential in promoting medication safety and efficacy.
NASA Astrophysics Data System (ADS)
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, with regard to the transmission errors and data loss that occur in the process of communication, the proposed scheme is able to check and correct errors in real time. In order to guarantee security, a fractional-order complex chaotic system with a shifting order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
Automatic QRS complex detection using two-level convolutional neural network.
Xiang, Yande; Lin, Zhitao; Meng, Jianyi
2018-01-29
The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted manual features and parameters, which may introduce significant computational complexity, especially in the transform domains. In addition, fixed features and parameters are not suitable for detecting various kinds of QRS complexes under different circumstances. In this study, an accurate method for QRS complex detection based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features of different granularity. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique that contains only a difference operation in the temporal domain is adopted. Based on the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. In addition, performance is evaluated across different signal-to-noise ratio (SNR) values. In summary, an automatic QRS detection method using a two-level 1-D CNN and a simple signal preprocessing technique is proposed for QRS complex detection. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
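A two-branch 1-D CNN in the spirit of the object-level/part-level design, preceded by the difference preprocessing the abstract mentions, might be sketched as follows in PyTorch; all layer sizes and the window length are assumptions, not the published network.

```python
import torch
import torch.nn as nn

class QRSNet(nn.Module):
    """Sketch of a two-branch 1-D CNN for QRS detection: a coarse
    (object-level) and a fine (part-level) branch feed one MLP classifier.
    Layer sizes are illustrative, not the paper's."""
    def __init__(self, win=128):
        super().__init__()
        self.coarse = nn.Sequential(
            nn.Conv1d(1, 8, 15, padding=7), nn.ReLU(), nn.MaxPool1d(4))
        self.fine = nn.Sequential(
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4))
        self.mlp = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * 8 * (win // 4), 32), nn.ReLU(),
            nn.Linear(32, 1))              # logit: does the window hold a QRS?

    def forward(self, x):                  # x: (B, 1, win) differenced ECG
        z = torch.cat([self.coarse(x), self.fine(x)], dim=1)
        return self.mlp(z).squeeze(1)

# preprocessing in the paper is just a temporal difference
ecg = torch.randn(2, 1, 129)
x = ecg[:, :, 1:] - ecg[:, :, :-1]         # difference operation
print(QRSNet(win=128)(x).shape)            # torch.Size([2])
```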
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
NASA Astrophysics Data System (ADS)
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain allow creating projection mass density (PMD) per material. From decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
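The two data fidelity terms compared in the paper have standard forms, sketched below (the epsilon guard and the toy data are assumptions): for Poisson counts y and forward projections f, the KL distance replaces the quadratic WLS penalty.

```python
import numpy as np

def wls(measured, forward, weights):
    """Weighted least squares term, suited to (approximately) Gaussian noise."""
    return 0.5 * np.sum(weights * (measured - forward) ** 2)

def kl(measured, forward):
    """Kullback-Leibler distance, suited to Poisson photon counts."""
    eps = 1e-12                      # guard against log(0)
    m, f = measured + eps, forward + eps
    return np.sum(f - m + m * np.log(m / f))

# hypothetical low-count spectral bin data
y = np.random.default_rng(0).poisson(lam=5.0, size=100).astype(float)
f = np.full(100, 5.0)
print(wls(y, f, 1.0 / np.maximum(y, 1.0)), kl(y, f))
```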
2017-03-01
…proposed. Expected profiles can incorporate a level of overdesign. Finally, the Design Integrity (DI) measuring techniques are applied to five test article designs inserted into a test system; Table 2 presents the results of the analysis applied to each of the test article designs. Based on the analysis, the DI metric shows measurable differentiation between all five test articles.
Domain-Level Assessment of the Weather Running Estimate-Nowcast (WREN) Model
2016-11-01
Only front-matter listings (sections and figures) survive from this report: performance comparison of 2 WRE–N configurations (Dumais WRE–N with FDDA vs. Passner WRE–N), with bias and RMSE errors for the 3 grids for 2-m-AGL temperature (TMP, K) and dew point (DPT, K).
A Note on NCOM Temperature Forecast Error Calibration Using the Ensemble Transform
2009-01-01
…problem, local unbiased (correlation) and persistent errors (bias) of the Navy Coastal Ocean Modeling (NCOM) System nested in global ocean domains, are… system were made available in real-time without performing local data assimilation, though remote sensing and global data was assimilated on the
A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model
Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...
2016-09-16
Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
Divergent estimation error in portfolio optimization and in linear regression
NASA Astrophysics Data System (ADS)
Kondor, I.; Varga-Haszonits, I.
2008-08-01
The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
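The divergence as N/T approaches its critical value can be seen in a small Monte Carlo sketch (illustrative assumptions: iid unit-variance assets, minimum-variance weights from the raw sample covariance): the true risk of the estimated portfolio, relative to the optimum, blows up as N/T approaches 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_inflation(N, T, n_trials=200):
    """True variance of the sample-estimated minimum-variance portfolio,
    relative to the optimum, for iid unit-variance assets (Sigma = I)."""
    ones, out = np.ones(N), []
    for _ in range(n_trials):
        X = rng.normal(size=(T, N))
        S_inv = np.linalg.inv(X.T @ X / T)         # inverse sample covariance
        w = S_inv @ ones / (ones @ S_inv @ ones)   # estimated min-var weights
        out.append(N * (w @ w))                    # true risk / optimum (1/N)
    return np.mean(out)

# inflation blows up as N/T approaches the critical value 1
for T in (400, 200, 120, 110):
    print(f"N/T = {100/T:.2f}: {risk_inflation(100, T):.2f}")
```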
Changing the Culture of Academic Medicine: Critical Mass or Critical Actors?
Newbill, Sharon L.; Cardinali, Gina; Morahan, Page S.; Chang, Shine; Magrane, Diane
2017-01-01
Purpose: By 2006, women constituted 34% of academic medical faculty, reaching a critical mass. Theoretically, with critical mass, culture and policy supportive of gender equity should be evident. We explore whether having a critical mass of women transforms institutional culture and drives organizational change. Methods: Career development program participants were interviewed to elucidate their experiences in academic health centers (AHCs). Focus group discussions were held with institutional leaders to explore their perceptions about contemporary challenges related to gender and leadership. Content analysis of both data sources revealed points of convergence. Findings were interpreted using the theory of critical mass. Results: Two nested domains emerged: the individual domain included the rewards and personal satisfaction of meaningful work, personal agency, tensions between cultural expectations of family and academic roles, and women's efforts to work for gender equity. The institutional domain depicted the sociocultural environment of AHCs that shaped women's experience, both personally and professionally, lack of institutional strategies to engage women in organizational initiatives, and the influence of one leader on women's ascent to leadership. Conclusions: The predominant evidence from this research demonstrates that the institutional barriers and sociocultural environment continue to be formidable obstacles confronting women, stalling the transformational effects expected from achieving a critical mass of women faculty. We conclude that the promise of critical mass as a turning point for women should be abandoned in favor of “critical actor” leaders, both women and men, who individually and collectively have the commitment and power to create gender-equitable cultures in AHCs. PMID:28092473
The cerebellum for jocks and nerds alike.
Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J
2014-01-01
Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.
NASA Astrophysics Data System (ADS)
Chen, Jing-Bo
2014-06-01
By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments confirm the theoretical analysis.
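The coefficients of the average-derivative optimal scheme are not reproduced here, but the dispersion-analysis recipe itself can be illustrated on the classical second-order 5-point stencil: substitute a discrete plane wave into the stencil and compare the numerical wavenumber with the true one as a function of gridpoints per wavelength. A minimal sketch (Python; the Laplace damping term is omitted, so the numbers are not directly comparable to the paper's 1 per cent criterion):

```python
import numpy as np

def phase_velocity_ratio(G, theta):
    """Numerical/true phase velocity for the classical 5-point Helmholtz
    stencil, for a plane wave at angle theta with G gridpoints per
    wavelength (h = lambda/G)."""
    k, h = 2 * np.pi, 1.0 / G            # set wavelength = 1
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    # Plane-wave eigenvalue of the 5-point Laplacian gives the numerical k
    k_num = np.sqrt(4 - 2 * np.cos(kx * h) - 2 * np.cos(ky * h)) / h
    return k / k_num

for G in [4, 7, 12, 23]:
    ratios = [phase_velocity_ratio(G, t) for t in np.linspace(0, np.pi / 4, 10)]
    err = max(abs(1 - r) for r in ratios)
    print(f"{G:3d} points/wavelength: max phase-velocity error = {100*err:.2f}%")
```

The error shrinks quadratically with the gridpoint density, which is why a low-order stencil needs many more points per wavelength than an optimized scheme for the same accuracy target.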
Panepinto, Julie A; Paul Scott, J; Badaki-Makun, Oluwakemi; Darbari, Deepika S; Chumpitazi, Corrie E; Airewele, Gladstone E; Ellison, Angela M; Smith-Whitley, Kim; Mahajan, Prashant; Sarnaik, Sharada A; Charles Casper, T; Cook, Larry J; Leonard, Julie; Hulbert, Monica L; Powell, Elizabeth C; Liem, Robert I; Hickey, Robert; Krishnamurti, Lakshmanan; Hillery, Cheryl A; Brousseau, David C
2017-06-12
Detecting change in health status over time and ascertaining meaningful changes are critical elements when using health-related quality of life (HRQL) instruments to measure patient-centered outcomes. The PedsQL™ Sickle Cell Disease module, a disease-specific HRQL instrument, has previously been shown to be valid and reliable. Our objectives were to determine the longitudinal validity of the PedsQL™ Sickle Cell Disease module and the change in HRQL that is meaningful to patients. An ancillary study was conducted utilizing a multi-center prospective trial design. Children ages 4-21 years with sickle cell disease admitted to the hospital for an acute painful vaso-occlusive crisis were eligible. Children completed HRQL assessments at three time points: in the Emergency Department, one week post-discharge, and at return to baseline (one to three months post-discharge). The primary outcome was change in HRQL score. Both distribution-based (effect size, standard error of measurement (SEM)) and anchor-based (global change assessment) methods were used to determine the longitudinal validity and meaningful change in HRQL. Changes in HRQL meaningful to patients were identified by anchoring the change scores to the patient's perception of global improvement in pain. Moderate effect sizes (0.20-0.80) were determined for all domains except the Communication I and Cognitive Fatigue domains. The value of 1 SEM varied from 3.8 to 14.6 across all domains. Over 50% of patients improved by at least 1 SEM in Total HRQL score. A HRQL change score of 7-10 in the pain domains represented minimal perceived improvement in HRQL and a HRQL change score of 18 or greater represented moderate to large improvement. The PedsQL™ Sickle Cell Disease Module is responsive to changes in HRQL in patients experiencing acute painful vaso-occlusive crises. The study data establish longitudinal validity and meaningful change parameters for the PedsQL™ Sickle Cell Disease Module. ClinicalTrials.gov (study identifier: NCT01197417). Date of registration: 08/30/2010.
Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport
NASA Astrophysics Data System (ADS)
Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.
2016-12-01
Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method, and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.
Adaptive critic autopilot design of bank-to-turn missiles using fuzzy basis function networks.
Lin, Chuan-Kai
2005-04-01
A new adaptive critic autopilot design for bank-to-turn missiles is presented. In this paper, the architecture of the adaptive critic learning scheme contains a fuzzy-basis-function-network based associative search element (ASE), which is employed to approximate nonlinear and complex functions of bank-to-turn missiles, and an adaptive critic element (ACE) generating the reinforcement signal to tune the associative search element. In the design of the adaptive critic autopilot, the control law receives signals from a fixed-gain controller, an ASE and an adaptive robust element, which can eliminate approximation errors and disturbances. Traditional adaptive critic reinforcement learning addresses the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment; however, the proposed tuning algorithm can significantly shorten the learning time by online tuning of all parameters of the fuzzy basis functions and the weights of the ASE and ACE. Moreover, the weight updating law derived from the Lyapunov stability theory is capable of guaranteeing both tracking performance and stability. Computer simulation results confirm the effectiveness of the proposed adaptive critic autopilot.
Neural evidence for description dependent reward processing in the framing effect
Yu, Rongjun; Zhang, Ping
2014-01-01
Human decision making can be influenced by emotionally valenced contexts, known as the framing effect. We used event-related brain potentials to investigate how framing influences the encoding of reward. We found that the feedback related negativity (FRN), which indexes the “worse than expected” negative prediction error in the anterior cingulate cortex (ACC), was more negative for the negative frame than for the positive frame in the win domain. Consistent with previous findings that the FRN is not sensitive to “better than expected” positive prediction error, the FRN did not differentiate the positive and negative frame in the loss domain. Our results provide neural evidence that the description invariance principle which states that reward representation and decision making are not influenced by how options are presented is violated in the framing effect. PMID:24733998
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
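As a rough illustration of the Jacobian-free Newton–Krylov machinery (not the authors' domain-decomposition coupling, and without the stochastic forcing), the sketch below uses SciPy's newton_krylov, which approximates Jacobian-vector products by finite differences inside a Krylov linear solver, to solve a deterministic nonlinear diffusion steady state; the problem setup is invented for the example:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Steady state of nonlinear diffusion d/dx( D(u) du/dx ) = 0 with
# D(u) = 1 + u**2, u(0) = 0, u(1) = 1, on n interior grid points.
n = 100
h = 1.0 / (n + 1)

def residual(u):
    U = np.concatenate(([0.0], u, [1.0]))   # apply Dirichlet boundary values
    D = 1.0 + U**2
    Dh = 0.5 * (D[1:] + D[:-1])             # arithmetic face average of D(u)
    flux = Dh * np.diff(U) / h              # D(u) du/dx at cell faces
    return np.diff(flux) / h                # flux divergence must vanish

u0 = np.linspace(0, 1, n + 2)[1:-1]         # initial guess
u = newton_krylov(residual, u0, f_tol=1e-10)  # JfNK with a GMRES-type inner solver
print("max residual:", np.abs(residual(u)).max())
```

The point of the Jacobian-free formulation is that only residual evaluations are needed; no Jacobian of the coupled system is ever assembled, which is what makes the approach non-intrusive for coupling independent subdomain solvers.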
A Note on a Sampling Theorem for Functions over GF(q)^n Domain
NASA Astrophysics Data System (ADS)
Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi
In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. The range of frequencies of ƒ can then be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has previously been obtained. It is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, a sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code, but the relation between the parity check matrix of a linear code and distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
ERIC Educational Resources Information Center
Byrne, Donn; And Others
The task group report presented in this publication is one of a series prepared by eminent psychologists who have served as consultants in the U.S.O.E.-sponsored grant study to conduct a Critical Appraisal of the Personality-Emotions-Motivation-Domain. In order to achieve the goal of identifying important problems and areas for new research and…
ERIC Educational Resources Information Center
Gorsuch, Richard L.; And Others
The task group report presented in this publication is one of a series prepared by eminent psychologists who have served as consultants in the U.S. Office of Education-sponsored grant study to conduct a Critical Appraisal of the Personality-Emotions-Motivation Domain. In order to achieve the goal of identifying important problems and areas for new…
A short note on the mean exit time of the Brownian motion
NASA Astrophysics Data System (ADS)
Cadeddu, Lucio; Farina, Maria Antonietta
We investigate the functional Ω↦ℰ(Ω), where Ω runs through the set of compact domains of fixed volume v in a Riemannian manifold (M,g) and ℰ(Ω) is the mean exit time from Ω of Brownian motion. We give an alternative analytical proof of a well-known fact, proved by McDonald, about its critical points: the critical points of ℰ(Ω) are harmonic domains.
Raper, Steven E; Resnick, Andrew S; Morris, Jon B
2014-01-01
Surgery residents are expected to demonstrate the ability to communicate with patients, families, and the public in a wide array of settings on a wide variety of issues. One important setting in which residents may be required to communicate with patients is in the disclosure of medical error. This article details one approach to developing a course in the disclosure of medical errors by residents. Before the development of this course, residents had no education in the skills necessary to disclose medical errors to patients. Residents viewed a Web-based video didactic session and associated slide deck and then were filmed disclosing a wrong-site surgery to a standardized patient (SP). The filmed encounter was reviewed by faculty, who, along with the SP, scored each encounter (5-point Likert scale) over 10 domains of physician-patient communication. The residents received an individualized written critique, the numerical analysis of their individual scenario, and an opportunity to provide feedback over a number of domains. A mean score of 4.00 or greater was considered satisfactory. Faculty and SP assessments were compared with Student t test. Residents were filmed in a one-on-one scenario in which they had to disclose a wrong-site surgery to an SP in a Simulation Center. A total of 12 residents, shortly to enter the clinical postgraduate year 4, were invited to participate, as they would assume service leadership roles. All were finishing their laboratory experiences, and all accepted the invitation. Residents demonstrated satisfactory competence in 4 of the 10 domains assessed by the course faculty. There were significant differences in the perceptions of the faculty and SP in 5 domains. The residents found this didactic, simulated experience of value (Likert score ≥4 in 5 of 7 domains assessed in a feedback tool). Qualitative feedback from the residents confirmed the realistic feel of the encounter and other impressions. We were able to quantitatively demonstrate both competency and opportunities for improvement across a wide range of domains of interpersonal and communication skills. Residents are expected to communicate effectively with patients, families, and the public, as appropriate, across a broad range of socioeconomic and cultural backgrounds. As academic surgeons, we must be mindful of our roles as teachers, mentors, and coaches by teaching good communication skills to our residents. Courses such as the one described here can help in improving physician-patient communication. The differing perspectives of faculty and SPs regarding resident performance warrants further study. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Visual feedback system to reduce errors while operating roof bolting machines
Steiner, Lisa J.; Burgess-Limerick, Robin; Eiter, Brianna; Porter, William; Matty, Tim
2015-01-01
Problem: Operators of roof bolting machines in underground coal mines work in confined spaces and in very close proximity to the moving equipment. Errors in the operation of these machines can have serious consequences, and the design of the equipment interface has a critical role in reducing the probability of such errors. Methods: An experiment was conducted to explore coding and directional compatibility on actual roof bolting equipment and to determine the feasibility of a visual feedback system to alert operators of critical movements and to alert other workers in close proximity to the equipment to the pending movement of the machine. The quantitative results of the study confirmed the potential for both selection errors and direction errors to be made, particularly during training. Results: Subjective data confirmed a potential benefit of providing visual feedback of the intended operations and movements of the equipment. Impact: This research may influence the design of these and other similar control systems and provides evidence for the use of warning systems to improve operator situational awareness. PMID:23398703
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis
This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.
Scientific Criticism? A Critical Approach to the Resistive Audience.
ERIC Educational Resources Information Center
Ruddock, Andy
1998-01-01
Contends that critical audience research has resisted "scientific" discourses that appear positivist. States that recent research begins to show the same errors as earlier positivist style--re-emergence of debates on political economy and cultural imperialism are aimed at overturning what are seen as orthodoxies of opposition and…
Preparing Coaches for the Changing Game of Science: Teaching in Multiple Domains.
ERIC Educational Resources Information Center
Dass, Pradeep M.
2000-01-01
Argues that traditional methods of science instruction are being supplanted by a broader approach that enhances understanding of the nature of science and teaches students to critically analyze scientific information. Discusses six domains of science to be included in good science instruction. Discusses ways new teachers can put those domains into…
A Catalogue of Concepts in the Pedagogical Domain of Teacher Education.
ERIC Educational Resources Information Center
Multi-State Consortium on Performance-Based Teacher Education, Albany, NY.
This catalog of concepts in the pedagogical domain of teacher education organizes the critical concepts and provides definitions, indicators, and illustrations of the concepts. Chapter 1 presents a rationale for the selection of concepts in teacher education and discusses pedagogical domain, interactive teaching, the format of concepts in this…
Domain Specificity and Generality of Epistemic Cognitions: Issues in Assessment
ERIC Educational Resources Information Center
Owen, Jesse J.
2011-01-01
As administrators in higher education search for learning outcome measures, the assessment of epistemic cognitions, or how students critically think and reason about real-world issues, is paramount. The current study examined whether students' expertise in a domain of study (i.e., domain specificity) influenced their scores on an empirically supported…
Fourier/Chebyshev methods for the incompressible Navier-Stokes equations in finite domains
NASA Technical Reports Server (NTRS)
Corral, Roque; Jimenez, Javier
1992-01-01
A fully spectral numerical scheme is presented for the incompressible Navier-Stokes equations in domains that are infinite or semi-infinite in one dimension. The domain is not mapped, and standard Fourier or Chebyshev expansions can be used. The handling of the infinite domain does not introduce any significant overhead. The scheme assumes that the vorticity in the flow is essentially concentrated in a finite region, which is represented numerically by standard spectral collocation methods. To accommodate the slow exponential decay of the velocities at infinity, extra expansion functions are introduced, which are handled analytically. A detailed error analysis is presented, and two applications to Direct Numerical Simulation of turbulent flows are discussed in relation to the numerical performance of the scheme.
Filipino, Indonesian and Thai Listening Test Errors
ERIC Educational Resources Information Center
Castro, C. S.; And Others
1975-01-01
This article reports on a study to identify listening and aural comprehension difficulties experienced by students of English, specifically RELC (Regional English Language Centre in Singapore) course members. The most critical errors are discussed and conclusions about foreign language learning are drawn. (CLK)
An Instructor's Diagnostic Aid for Feedback in Training.
ERIC Educational Resources Information Center
Andrews, Dee H.; Uliano, Kevin C.
1988-01-01
Instructor's Diagnostic Aid for Feedback in Training (IDAFT) is a computer-assisted method based on error analysis, domains of learning, and events of instruction. Its use with Navy team instructors is currently being explored. (JOW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz
We present the results of the estimation of parameters with LISA for nearly monochromatic gravitational waves in the low- and high-frequency regimes for the time-delay interferometry response. The angular resolution of the detector and the estimation errors of the signal's parameters in the high-frequency regime are calculated as functions of the position in the sky and of the frequency. For the long-wavelength domain we give compact formulas for the estimation errors, valid over a wide range of the parameter space.
Frequency-domain gravitational waveform models for inspiraling binary neutron stars
NASA Astrophysics Data System (ADS)
Kawaguchi, Kyohei; Kiuchi, Kenta; Kyutoku, Koutarou; Sekiguchi, Yuichiro; Shibata, Masaru; Taniguchi, Keisuke
2018-02-01
We develop a model for frequency-domain gravitational waveforms from inspiraling binary neutron stars. Our waveform model is calibrated by comparison with hybrid waveforms constructed from our latest high-precision numerical-relativity waveforms and the SEOBNRv2T waveforms in the frequency range of 10-1000 Hz. We show that the phase difference between our waveform model and the hybrid waveforms is always smaller than 0.1 rad for binary tidal deformability Λ̃ in the range 300 ≲ Λ̃ ≲ 1900 and for a mass ratio between 0.73 and 1. We show that, for 10-1000 Hz, the distinguishability for a signal-to-noise ratio ≲ 50 and the mismatch between our waveform model and the hybrid waveforms are always smaller than 0.25 and 1.1 × 10^-5, respectively. The systematic error of our waveform model in the measurement of Λ̃ is always smaller than 20 with respect to the hybrid waveforms for 300 ≲ Λ̃ ≲ 1900. The statistical error in the measurement of binary parameters is computed employing our waveform model, and we obtain results consistent with previous studies. We show that the systematic error of our waveform model is always smaller than 20% (typically smaller than 10%) of the statistical error for events with a signal-to-noise ratio of 50.
NASA Technical Reports Server (NTRS)
Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.
1984-01-01
The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry, five-level primitive-equation model, which covers most of the Northern Hemisphere, is used for these experiments. Series of parallel forecast runs out to 48 hours are made with three different sets of initial conditions: (1) NOSAT runs, in which only conventional surface and upper air observations are used; (2) SAT runs, in which satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, in which the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
Frequency-domain optical absorption spectroscopy of finite tissue volumes using diffusion theory.
Pogue, B W; Patterson, M S
1994-07-01
The goal of frequency-domain optical absorption spectroscopy is the non-invasive determination of the absorption coefficient of a specific tissue volume. Since this allows the concentration of endogenous and exogenous chromophores to be calculated, there is considerable potential for clinical application. The technique relies on the measurement of the phase and modulation of light, which is diffusely reflected or transmitted by the tissue when it is illuminated by an intensity-modulated source. A model of light propagation must then be used to deduce the absorption coefficient. For simplicity, it is usual to assume the tissue is either infinite in extent (for transmission measurements) or semi-infinite (for reflectance measurements). The goal of this paper is to examine the errors introduced by these assumptions when measurements are actually performed on finite volumes. Diffusion-theory calculations and experimental measurements were performed for slabs, cylinders and spheres with optical properties characteristic of soft tissues in the near infrared. The error in absorption coefficient is presented as a function of object size as a guideline to when the simple models may be used. For transmission measurements, the error is almost independent of the true absorption coefficient, which allows absolute changes in absorption to be measured accurately. The implications of these errors in absorption coefficient for two clinical problems--quantitation of an exogenous photosensitizer and measurement of haemoglobin oxygenation--are presented and discussed.
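For readers unfamiliar with the quantities being measured, the infinite-medium case against which the finite-volume errors are judged has a simple closed form: an intensity-modulated point source launches a photon-density wave whose complex wavenumber is set by the absorption and reduced scattering coefficients. A sketch (Python; the optical properties, distance, and the e^{+iωt} sign convention are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Infinite-medium frequency-domain diffusion: with the e^{+iwt} convention,
#   k = sqrt((mua + i*w/v) / D),  D = 1 / (3*(mua + mus_prime)),
# the phase lag grows as Im(k)*r and the AC amplitude decays as exp(-Re(k)*r)/r.
v = 3e10 / 1.4                 # light speed in tissue, cm/s (refractive index 1.4)
mus_prime = 10.0               # reduced scattering coefficient, 1/cm
w = 2 * np.pi * 100e6          # 100 MHz modulation frequency
r = 3.0                        # source-detector distance, cm

for mua in [0.05, 0.1, 0.2]:   # absorption coefficients, 1/cm
    D = 1.0 / (3.0 * (mua + mus_prime))
    k = np.sqrt((mua + 1j * w / v) / D)   # principal root: Re(k), Im(k) > 0
    phase = np.imag(k) * r                # phase lag, radians
    ac = np.exp(-np.real(k) * r) / r      # AC amplitude (arbitrary units)
    print(f"mua={mua:4.2f}/cm: phase lag={np.degrees(phase):6.1f} deg, AC~{ac:.2e}")
```

Inverting measured phase and modulation through these expressions yields the absorption coefficient; the paper's contribution is to quantify how badly that inversion goes wrong when the medium is in fact a finite slab, cylinder, or sphere.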
Nonconvergence of the Wang-Landau algorithms with multiple random walkers.
Belardinelli, R E; Pereyra, V D
2016-05-01
This paper discusses some convergence properties of entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the similar version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation. Therefore, the number of walkers does not guarantee convergence.
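The saturation effect can be mimicked with a purely statistical caricature, with no actual Wang-Landau sampling: model each walker's estimate as a shared systematic bias (the accuracy floor set by the modification-factor schedule) plus independent noise. Averaging over m walkers then shows the 1/√m decay flattening out near m_x ≈ (σ/bias)². A minimal sketch in Python, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each walker's estimate = shared systematic bias + independent noise (std sigma).
sigma, bias, trials = 1.0, 0.05, 2000

for m in [1, 4, 16, 64, 256, 1024]:
    est = bias + rng.normal(0, sigma, (trials, m)).mean(axis=1)
    rmse = np.sqrt((est**2).mean())       # sqrt(bias^2 + sigma^2/m)
    print(f"m={m:5d}: error={rmse:.4f}  (1/sqrt(m) part: {sigma/np.sqrt(m):.4f})")
```

With these numbers the crossover sits near m_x = (1.0/0.05)² = 400: below it the error falls as 1/√m, above it the shared bias dominates and adding walkers buys nothing, which is the qualitative behavior the paper reports.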
Mathematics skills in good readers with hydrocephalus.
Barnes, Marcia A; Pengelly, Sarah; Dennis, Maureen; Wilkinson, Margaret; Rogers, Tracey; Faulkner, Heather
2002-01-01
Children with hydrocephalus have poor math skills. We investigated the nature of their arithmetic computation errors by comparing written subtraction errors in good readers with hydrocephalus, typically developing good readers of the same age, and younger children matched for math level to the children with hydrocephalus. Children with hydrocephalus made more procedural errors (although not more fact retrieval or visual-spatial errors) than age-matched controls; they made the same number of procedural errors as younger, math-level matched children. We also investigated a broad range of math abilities, and found that children with hydrocephalus performed more poorly than age-matched controls on tests of geometry and applied math skills such as estimation and problem solving. Computation deficits in children with hydrocephalus reflect delayed development of procedural knowledge. Problems in specific math domains, such as geometry and applied math, were associated with deficits in constituent cognitive skills such as visual-spatial competence, memory, and general knowledge.
Devitt, Aleea L.; Tippett, Lynette; Schacter, Daniel L.; Addis, Donna Rose
2016-01-01
Because of its reconstructive nature, autobiographical memory (AM) is subject to a range of distortions. One distortion involves the erroneous incorporation of features from one episodic memory into another, forming what are known as memory conjunction errors. Healthy aging has been associated with an enhanced susceptibility to conjunction errors for laboratory stimuli, yet it is unclear whether these findings translate to the autobiographical domain. We investigated the impact of aging on vulnerability to AM conjunction errors, and explored potential cognitive processes underlying the formation of these errors. An imagination recombination paradigm was used to elicit AM conjunction errors in young and older adults. Participants also completed a battery of neuropsychological tests targeting relational memory and inhibition ability. Consistent with findings using laboratory stimuli, older adults were more susceptible to AM conjunction errors than younger adults. However, older adults were not differentially vulnerable to the inflating effects of imagination. Individual variation in AM conjunction error vulnerability was attributable to inhibitory capacity. An inability to suppress the cumulative familiarity of individual AM details appears to contribute to the heightened formation of AM conjunction errors with age. PMID:27929343
Pauli, Wolfgang M.; Larsen, Tobias; Tyszka, J. Michael; O’Doherty, John P.
2017-01-01
Prediction-error signals consistent with formal models of “reinforcement learning” (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models—namely, “actor/critic” models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning. PMID:29049406
Assessing Working Memory in Mild Cognitive Impairment with Serial Order Recall.
Emrani, Sheina; Libon, David J; Lamar, Melissa; Price, Catherine C; Jefferson, Angela L; Gifford, Katherine A; Hohman, Timothy J; Nation, Daniel A; Delano-Wood, Lisa; Jak, Amy; Bangen, Katherine J; Bondi, Mark W; Brickman, Adam M; Manly, Jennifer; Swenson, Rodney; Au, Rhoda
2018-01-01
Working memory (WM) is often assessed with serial order tests such as repeating digits backward. In prior dementia research using the Backward Digit Span Test (BDT), only aggregate test performance was examined. The current research tallied primacy/recency effects, out-of-sequence transposition errors, perseverations, and omissions to assess WM deficits in patients with mild cognitive impairment (MCI). Memory clinic patients (n = 66) were classified into three groups: single domain amnestic MCI (aMCI), combined mixed domain/dysexecutive MCI (mixed/dys MCI), and non-MCI where patients did not meet criteria for MCI. Serial order/WM ability was assessed by asking participants to repeat 7 trials of five digits backwards. Serial order position accuracy, transposition errors, perseverations, and omission errors were tallied. A 3 (group) × 5 (serial position) repeated-measures ANOVA yielded a significant group × serial position interaction. Follow-up analyses found attenuation of the recency effect for mixed/dys MCI patients. Mixed/dys MCI patients scored lower than non-MCI patients for serial position 3 (p < 0.003) and serial position 4 (p < 0.002), and lower than both groups for serial position 5 (recency; p < 0.002). Mixed/dys MCI patients also produced more transposition errors than both groups (p < 0.010), and more omission (p < 0.020) and perseveration (p < 0.018) errors than non-MCI patients. The attenuation of a recency effect using serial order parameters obtained from the BDT may provide a useful operational definition as well as additional diagnostic information regarding working memory deficits in MCI.
Forecasting space weather over short horizons: Revised and updated estimates
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2018-07-01
Space weather reflects multiple causes. There is a clear influence of the sun on the near-earth environment. Solar activity shows evidence of chaotic properties, implying that prediction may be limited beyond short horizons. At the same time, geomagnetic activity also reflects the rotation of the earth's core and local currents in the ionosphere. The combination of influences means that geomagnetic indexes behave like multifractals, exhibiting nonlinear variability with intermittent outliers. This study tests a range of models: regressions, neural networks, and a frequency-domain algorithm. Forecasting tests are run for sunspots and irradiance from 1820 onward, for the Aa geomagnetic index from 1868 onward, and for the Am index from 1959 onward, over horizons of 1-7 days. For irradiance and sunspots, persistence actually does better over short horizons; none of the other models clearly dominates. For the geomagnetic indexes, the persistence method does badly, and the neural net also shows large errors. The remaining models all achieve about the same level of accuracy: errors are in the range of 48% at 1 day and 54% at all later horizons. Additional tests are run over horizons of 1-4 weeks. At 1 week, the best models reduce the error to about 35%; over horizons of four weeks, the model errors increase. The findings are somewhat pessimistic. Over short horizons, geomagnetic activity exhibits so much random variation that the forecast errors are extremely high. Over slightly longer horizons, there is some improvement from estimating in the frequency domain, but not a great deal. Including solar activity in the models does not yield any improvement in accuracy.
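As a concrete baseline, the persistence method simply carries the last observed value forward h steps. The sketch below (Python) scores it with MAPE on a synthetic fat-tailed AR(1) series standing in for a geomagnetic index; the series and its parameters are invented, so the percentages will not match the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily index: AR(1) with fat-tailed shocks (real indexes are
# multifractal; this is only a toy for demonstrating the scoring).
n, phi = 4000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_t(3)   # intermittent large shocks
x += 30                                          # keep the level positive

for h in [1, 3, 7]:                              # forecast horizons, days
    forecast = x[:-h]                            # persistence: x_hat(t+h) = x(t)
    actual = x[h:]
    mape = np.mean(np.abs(actual - forecast) / np.abs(actual)) * 100
    print(f"persistence, horizon {h} d: MAPE = {mape:.1f}%")
```

Because the autocorrelation decays as phi^h, the persistence error grows quickly with horizon, which is consistent with the method performing worst on the rapidly fluctuating geomagnetic indexes above.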
Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek
2016-07-01
Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included the beliefs of the consequences of reporting (lack of any feedback following reporting and impacting professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.
Comparison of Errors Using Two Length-Based Tape Systems for Prehospital Care in Children.
Rappaport, Lara D; Brou, Lina; Givens, Tim; Mandt, Maria; Balakas, Ashley; Roswell, Kelley; Kotas, Jason; Adelgais, Kathleen M
2016-01-01
The use of a length/weight-based tape (LBT) for equipment size and drug dosing for pediatric patients is recommended in a joint statement by multiple national organizations. A new system, known as Handtevy™, allows for rapid determination of critical drug doses without performing calculations. To compare two LBT systems for dosing errors and time to medication administration in simulated prehospital scenarios. This was a prospective randomized trial comparing the Broselow Pediatric Emergency Tape™ (Broselow) and Handtevy LBT™ (Handtevy). Paramedics performed 2 pediatric simulations: cardiac arrest with epinephrine administration and hypoglycemia mandating dextrose. Each scenario was repeated utilizing both systems with a 1-year-old and 5-year-old size manikin. Facilitators recorded identified errors and time points of critical actions including time to medication. We enrolled 80 paramedics, performing 320 simulations. For Dextrose, there were significantly more errors with Broselow (63.8%) compared to Handtevy (13.8%) and time to administration was longer with the Broselow system (220 seconds vs. 173 seconds). For epinephrine, the LBTs were similar in overall error rate (Broselow 21.3% vs. Handtevy 16.3%) and time to administration (89 vs. 91 seconds). Cognitive errors were more frequent when using the Broselow compared to Handtevy, particularly with dextrose administration. The frequency of procedural errors was similar between the two LBT systems. In simulated prehospital scenarios, use of the Handtevy LBT system resulted in fewer errors for dextrose administration compared to the Broselow LBT, with similar time to administration and accuracy of epinephrine administration.
Wang, Suyue; Veldman, Geertruida M; Stahl, Mark; Xing, Yuzhe; Tobin, James F; Erbe, David V
2002-09-02
Antagonists of the B7 family of co-stimulatory molecules have the potential for altering immune responses therapeutically. To better define the requirements for such inhibitors, we have mapped the binding of an entire panel of blocking antibodies specific for human B7.1. By mutagenesis, each of the residues critical for blocking antibody binding appeared to fall entirely within the N-terminal V-set domain of B7.1. Thus, although antibody-antigen interacting surfaces can be quite large, these results indicate that a relatively small portion of the GFCC'C" face of this domain is crucial for further antagonist development.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
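A minimal sketch of the Euclidean key-equation step (Python), using the toy prime field GF(7) rather than the GF(2^m) fields of practical RS decoders; the errors-and-erasures variant described above would simply start the recursion from the erasure-locator and Forney-syndrome polynomials instead of (x^{2t}, S(x)):

```python
# Polynomials are coefficient lists over GF(P), lowest degree first.
P = 7  # field size (prime, for simplicity)

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def deg(a):
    return len(trim(a)) - 1 if any(a) else -1

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def psub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([(x - y) % P for x, y in zip(a, b)])

def pdivmod(a, b):
    a, q = a[:], [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[deg(b)], P - 2, P)          # inverse of leading coefficient
    while deg(a) >= deg(b):
        s = deg(a) - deg(b)
        c = (a[deg(a)] * inv) % P
        q[s] = c
        for i, y in enumerate(trim(b)):
            a[i + s] = (a[i + s] - c * y) % P
    return trim(q), trim(a)

def key_equation(S, t):
    """Run Euclid on (x^{2t}, S(x)); stop when deg(remainder) < t.
    Returns (locator, evaluator) with locator * S = evaluator mod x^{2t}."""
    r_prev, r = [0] * (2 * t) + [1], trim(S[:])
    u_prev, u = [0], [1]                    # coefficient of S in each remainder
    while deg(r) >= t:
        q, rem = pdivmod(r_prev, r)
        r_prev, r = r, rem
        u_prev, u = u, psub(u_prev, pmul(q, u))
    return u, r

# Single error of value 3 at locator X = 2: syndromes S_j = 3 * 2^j, j = 1, 2.
S = [(3 * 2) % P, (3 * 4) % P]
locator, evaluator = key_equation(S, t=1)
print("error locator  :", locator)          # proportional to 1 - 2x over GF(7)
print("error evaluator:", evaluator)
```

Running this yields a locator proportional to 1 - 2x, whose root identifies the error position; the evaluator then gives the error magnitude via Forney's formula, exactly the two outputs the abstract's simplified procedure produces in one pass.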
Matsuda, F; Lan, W C; Tanimura, R
1999-02-01
In Matsuda's 1996 study, 4- to 11-yr.-old children (N = 133) watched two cars running on two parallel tracks on a CRT display and judged whether their durations and distances were equal and, if not, which was larger. In the present paper, the relative contributions of the four critical stimulus attributes (whether temporal starting points, temporal stopping points, spatial starting points, and spatial stopping points were the same or different between two cars) to the production of errors were quantitatively estimated based on the data for rates of errors obtained by Matsuda. The present analyses made it possible not only to understand numerically the findings about qualitative characteristics of the critical attributes described by Matsuda, but also to add more detailed findings about them.
An analysis of pilot error-related aircraft accidents
NASA Technical Reports Server (NTRS)
Kowalsky, N. B.; Masters, R. L.; Stone, R. B.; Babcock, G. L.; Rypka, E. W.
1974-01-01
A multidisciplinary team approach to pilot error-related U.S. air carrier jet aircraft accident investigation records successfully reclaimed hidden human error information not shown in statistical studies. New analytic techniques were developed and applied to the data to discover and identify multiple elements of commonality and shared characteristics within this group of accidents. Three techniques of analysis were used: Critical element analysis, which demonstrated the importance of a subjective qualitative approach to raw accident data and surfaced information heretofore unavailable. Cluster analysis, which was an exploratory research tool that will lead to increased understanding and improved organization of facts, the discovery of new meaning in large data sets, and the generation of explanatory hypotheses. Pattern recognition, by which accidents can be categorized by pattern conformity after critical element identification by cluster analysis.
NASA Astrophysics Data System (ADS)
Cooper, Elizabeth; Dance, Sarah; Garcia-Pintado, Javier; Nichols, Nancy; Smith, Polly
2017-04-01
Timely and accurate inundation forecasting provides vital information about the behaviour of fluvial flood water, enabling mitigating actions to be taken by residents and emergency services. Data assimilation is a powerful mathematical technique for combining forecasts from hydrodynamic models with observations to produce a more accurate forecast. We discuss the effect of both domain size and channel friction parameter estimation on observation impact in data assimilation for inundation forecasting. Numerical shallow water simulations are carried out in a simple, idealized river channel topography. Data assimilation is performed using an Ensemble Transform Kalman Filter (ETKF) and synthetic observations of water depth in identical twin experiments. We show that reinitialising the numerical inundation model with corrected water levels after an assimilation can cause an initialisation shock if a hydrostatic assumption is made, leading to significant degradation of the forecast for several hours immediately following an assimilation. We demonstrate an effective and novel method for dealing with this. We find that using data assimilation to combine observations of water depth with forecasts from a hydrodynamic model corrects the forecast very effectively at time of the observations. In agreement with other authors we find that the corrected forecast then moves quickly back to the open loop forecast which does not take the observations into account. Our investigations show that the time taken for the forecast to decay back to the open loop case depends on the length of the domain of interest when only water levels are corrected. This is because the assimilation corrects water depths in all parts of the domain, even when observations are only available in one area. Error growth in the forecast step then starts at the upstream part of the domain and propagates downstream. The impact of the observations is therefore longer-lived in a longer domain. We have found that the upstream-downstream pattern of error growth can be due to incorrect friction parameter specification, rather than errors in inflow as shown elsewhere. Our results show that joint state-parameter estimation can recover accurate values for the parameter controlling channel friction processes in the model, even when observations of water level are only available on part of the flood plain. Correcting water levels and the channel friction parameter together leads to a large improvement in the forecast water levels at all simulation times. The impact of the observations is therefore much greater when the channel friction parameter is corrected along with water levels. We find that domain length effects disappear for joint state-parameter estimation.
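A compressed illustration of the joint state-parameter idea: augment the state vector with the friction parameter and let the ensemble correlations update both at once. The sketch below (Python) performs one Ensemble Transform Kalman Filter analysis step in a standard square-root formulation on a made-up "depth profile" model; the forecast function, priors, and observation layout are all invented stand-ins for the hydrodynamic model:

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(a, n=20):
    # Invented stand-in for a hydrodynamic model: steady water depths along
    # a channel as a smooth function of a friction parameter a.
    x = np.linspace(0.0, 1.0, n)
    return 1.0 + a * np.exp(-3.0 * x)

m, n_obs = 50, 5
a_true = 0.8
obs_idx = np.arange(0, 20, 4)               # observe every 4th grid point
R = 0.01 * np.eye(n_obs)                    # obs error covariance (std 0.1)
y = forecast(a_true)[obs_idx] + rng.normal(0.0, 0.1, n_obs)

# Augmented ensemble: z = [depths, friction parameter]
a_ens = rng.normal(0.3, 0.3, m)             # deliberately poor parameter prior
Z = np.array([np.append(forecast(a), a) for a in a_ens]).T

z_mean = Z.mean(axis=1)
Zp = Z - z_mean[:, None]                    # state perturbations
Yp = Z[obs_idx] - Z[obs_idx].mean(axis=1)[:, None]
d = y - Z[obs_idx].mean(axis=1)             # innovation

Rinv = np.linalg.inv(R)
A = (m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp
evals, evecs = np.linalg.eigh(A)
Pa = evecs @ np.diag(1.0 / evals) @ evecs.T           # ensemble-space analysis cov
Wa = evecs @ np.diag(np.sqrt((m - 1) / evals)) @ evecs.T
w = Pa @ Yp.T @ Rinv @ d                              # mean-update weights
Za = z_mean[:, None] + Zp @ (w[:, None] + Wa)         # analysis ensemble

print(f"prior a   : {a_ens.mean():.2f} +/- {a_ens.std():.2f}")
print(f"analysis a: {Za[-1].mean():.2f} +/- {Za[-1].std():.2f} (truth {a_true})")
```

Because the ensemble's depth perturbations are correlated with its parameter perturbations, observing depths alone pulls the friction parameter toward its true value, which is the mechanism behind the improved long-horizon forecasts reported above.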
Canards and black swans in a model of a 3-D autocatalator
NASA Astrophysics Data System (ADS)
Shchepakina, E.
2005-01-01
The mathematical model of a 3-D autocatalator is studied using the geometric theory of singular perturbations, namely, the black swan and canard techniques. Critical regimes are modeled by canards (one-dimensional stable-unstable slow integral manifolds). The meaning of criticality here is as follows. The critical regime corresponds to a chemical reaction which separates the domain of self-accelerating reactions from the domain of slow reactions. A two-dimensional stable-unstable slow integral manifold (black swan) consisting entirely of canards, which simulate the critical phenomena for different initial data of the dynamical system, is constructed. It is shown that this procedure leads to the phenomenon of auto-oscillations in the chemical system. The geometric approach combined with asymptotic and numerical methods permits us to explain the strong parametric sensitivity and to obtain asymptotic representations of the critical behavior of the chemical system.
Frequency domain measurement systems
NASA Technical Reports Server (NTRS)
Eischer, M. C.
1978-01-01
Stable frequency sources and signal processing blocks were characterized by their noise spectra, both discrete and random, in the frequency domain. Conventional measures are outlined, and systems for performing the measurements are described. Broad coverage of the system configurations that were found useful is given, and their functioning and areas of application are discussed briefly. Particular attention is given to some of the potential error sources in the measurement procedures, system configurations, double-balanced-mixer phase detectors, and the application of measuring instruments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabin, Charles; Plevka, Pavel, E-mail: pavel.plevka@ceitec.muni.cz
Molecular replacement and noncrystallographic symmetry averaging were used to detwin a data set affected by perfect hemihedral twinning. The noncrystallographic symmetry averaging of the electron-density map corrected errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. Hemihedral twinning is a crystal-growth anomaly in which a specimen is composed of two crystal domains that coincide with each other in three dimensions. However, the orientations of the crystal lattices in the two domains differ in a specific way. In diffraction data collected from hemihedrally twinned crystals, each observed intensity contains contributions from both of the domains. With perfect hemihedral twinning, the two domains have the same volumes and the observed intensities do not contain sufficient information to detwin the data. Here, the use of molecular replacement and of noncrystallographic symmetry (NCS) averaging to detwin a 2.1 Å resolution data set for Aichi virus 1 affected by perfect hemihedral twinning is described. The NCS averaging enabled the correction of errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. The procedure permitted the structure to be determined from a molecular-replacement model that had 16% sequence identity and a 1.6 Å r.m.s.d. for Cα atoms in comparison to the crystallized structure. The same approach could be used to solve other data sets affected by perfect hemihedral twinning from crystals with NCS.
FLIR Common Module Design Manual. Revision 1
1978-03-01
degrade off-axis. The afocal assembly is very critical to system performance and normally constitutes a significant portion of the system... not significantly degrade the performance at 10 lp/mm because chromatic errors are about 1/2 of the diffraction error. The chromatic errors are... degradation, though only 3 percent, is unavoidable. It is caused by field curvature in the galilean afocal assembly. This field curvature is...
Formal Validation of Aerospace Software
NASA Astrophysics Data System (ADS)
Lesens, David; Moy, Yannick; Kanig, Johannes
2013-08-01
Any single error in critical software can have catastrophic consequences. Even though failures are usually not advertised, some software bugs have become famous, such as the error in the MIM-104 Patriot. For space systems, experience shows that software errors are a serious concern: more than half of all satellite failures from 2000 to 2003 involved software. To address this concern, this paper discusses the use of formal verification for software developed in Ada.
Panunzio, Michele F.; Antoniciello, Antonietta; Pisano, Alessandra; Rosa, Giovanna
2007-01-01
With respect to food safety, many studies have examined the effectiveness of the self-monitoring plans of food companies, designed using the Hazard Analysis and Critical Control Point (HACCP) method. In contrast, little in-depth research has addressed the adherence of these plans to HACCP standards. In our research, we evaluated 116 self-monitoring plans adopted by food companies located in the territory of the Local Health Authority (LHA) of Foggia, Italy. The general errors (terminology, philosophy and redundancy) and the specific errors (transversal plan, critical limits, hazard specificity, and lack of procedures) were standardized. Concerning the general errors, terminological errors pertain to half the plans examined, 47% include superfluous elements and 60% have repetitive subjects. With regard to the specific errors, 77% of the plans examined contained specific errors. The evaluation has highlighted the food companies' lack of comprehension of the HACCP system and has allowed the Servizio di Igiene degli Alimenti e della Nutrizione (Food and Nutrition Health Service), in its capacity as a control body, to intervene with the companies in order to improve the design of HACCP plans. PMID:17911662
Fennell, B J; Darmanin-Sheehan, A; Hufton, S E; Calabro, V; Wu, L; Müller, M R; Cao, W; Gill, D; Cunningham, O; Finlay, W J J
2010-07-09
The shark antigen-binding V(NAR) domain has the potential to provide an attractive alternative to traditional biotherapeutics based on its small size, advantageous physicochemical properties, and unusual ability to target clefts in enzymes or cell surface molecules. The V(NAR) shares many of the properties of the well-characterised single-domain camelid V(H)H but is much less understood at the molecular level. We chose the hen-egg-lysozyme-specific archetypal Type I V(NAR) 5A7 and used ribosome display in combination with error-prone mutagenesis to interrogate the entire sequence space. We found a high level of mutational plasticity across the V(NAR) domain, particularly within the framework 2 and hypervariable region 2 regions. A number of residues important for affinity were identified, and a triple mutant combining A1D, S61R, and G62R resulted in a K(D) of 460 pM for hen egg lysozyme, a 20-fold improvement over wild-type 5A7, and the highest affinity yet reported for a V(NAR)-antigen interaction. These findings were rationalised using structural modelling and indicate the importance of residues outside the classical complementarity determining regions in making novel antigen contacts that modulate affinity. We also located two solvent-exposed residues (G15 and G42), distant from the V(NAR) paratope, which retain function upon mutation to cysteine and have the potential to be exploited as sites for targeted covalent modification. Our findings with 5A7 were extended to all known NAR structures using an in-depth bioinformatic analysis of sequence data available in the literature and a newly generated V(NAR) database. This study allowed us to identify, for the first time, both V(NAR)-specific and V(NAR)/Ig V(L)/TCR V(alpha) overlapping hallmark residues, which are critical for the structural and functional integrity of the single domain. Intriguingly, each of our designated V(NAR)-specific hallmarks align precisely with previously defined mutational 'cold spots' in natural nurse shark cDNA sequences. These findings will aid future V(NAR) engineering and optimisation studies towards the development of V(NAR) single-domain proteins as viable biotherapeutics. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Do College Students Notice Errors in Evidence When Critically Evaluating Research Findings?
ERIC Educational Resources Information Center
Rodriguez, Fernando; Ng, Annalyn; Shah, Priti
2016-01-01
The authors examined college students' ability to critically evaluate scientific evidence, specifically, whether first- and second-year students noticed when poor interpretations were drawn from research evidence. Fifty students evaluated a set of eight psychological studies, first in an informal context, then again in a critical-thinking context.…
New Developments in Error Detection and Correction Strategies for Critical Applications
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Ken
2016-01-01
The presentation will cover a variety of mitigation strategies that were developed for critical applications. An emphasis is placed on strengths and weaknesses per mitigation technique as it pertains to different FPGA device types.
Approaches to Learning and School Readiness in Head Start: Applications to Preschool Science
ERIC Educational Resources Information Center
Bustamante, Andres S.; White, Lisa J.; Greenfield, Daryl B.
2017-01-01
Approaches to learning are a set of domain-general skills that encompass curiosity, persistence, planning, and engagement in group learning. These skills play a key role in preschoolers' learning and predict school readiness in math and language. Preschool science is a critical domain for early education and facilitates learning across domains.…
The CFS-PML in numerical simulation of ATEM
NASA Astrophysics Data System (ADS)
Zhao, Xuejiao; Ji, Yanju; Qiu, Shuo; Guan, Shanshan; Wu, Yanqi
2017-01-01
In time-domain simulation of the airborne transient electromagnetic method (ATEM), reflection from the truncated boundary can introduce significant error into the results. The complex frequency shifted perfectly matched layer (CFS-PML) absorbing boundary condition has been shown to absorb low-frequency incident waves well and to greatly reduce late-time reflections. In this paper, we apply the CFS-PML to three-dimensional time-domain numerical simulation of ATEM to achieve high precision. The expression of the divergence equation in the CFS-PML is confirmed, and its explicit iteration format, based on the finite difference method and the recursive convolution technique, is deduced. Finally, we use a uniform half-space model and an anomalous model to test the validity of the method. Results show that the CFS-PML reduces the average relative error to 2.87% and increases the accuracy of anomaly recognition.
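As a concrete illustration of the recursive convolution mentioned above, the sketch below computes graded CFS-PML stretching parameters and applies the memory-variable update to a spatial derivative, in the standard CPML form of Roden and Gedney. The grading profiles, layer thickness and time step are illustrative assumptions, not the paper's 3-D ATEM scheme.

import numpy as np

eps0 = 8.854187817e-12
npml, dt, dx = 10, 1e-9, 1.0

# Polynomial grading of the stretching parameters across the layer (assumed profiles)
depth = (np.arange(npml) + 0.5) / npml
sigma = 4.0 / (150.0 * np.pi * dx) * depth**3    # conductivity profile
kappa = 1.0 + 4.0 * depth**3                     # coordinate stretching
alpha = 0.05 * (1.0 - depth)                     # complex-frequency shift

# Recursive-convolution coefficients (Roden & Gedney CPML form)
b = np.exp(-(sigma / kappa + alpha) * dt / eps0)
a = sigma * (b - 1.0) / ((sigma + kappa * alpha) * kappa)

psi = np.zeros(npml)   # convolution memory variable, one per PML cell

def pml_derivative(dE_dx):
    """Replace the raw finite-difference derivative dE/dx inside the
    layer by the stretched version dE/dx / kappa + psi."""
    global psi
    psi = b * psi + a * dE_dx       # recursive convolution update
    return dE_dx / kappa + psi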
Random mutagenesis of BoNT/E Hc nanobody to construct a secondary phage-display library.
Shahi, B; Mousavi Gargari, S L; Rasooli, I; Rajabi Bazl, M; Hoseinpoor, R
2014-08-01
To construct a secondary mutant phage-display library of a recombinant single variable domain (VHH) against botulinum neurotoxin E by error-prone PCR. The gene coding for a specific VHH derived from a camel immunized with the binding domain of botulinum neurotoxin E (BoNT/E) was amplified by error-prone PCR. Several biopanning rounds were used to screen the phage-displaying BoNT/E Hc nanobodies. The final nanobody, SHMR4, with increased affinity, recognized BoNT/E toxin with no cross-reactivity with other antigens, especially the related BoNT toxins. The constructed nanobody could be a suitable candidate for VHH-based biosensor production to detect Clostridium botulinum type E. Diagnosis and treatment of botulinum neurotoxin poisoning are important. Generation of high-affinity antibodies through the construction of secondary libraries with an affinity maturation step leads to the development of reagents for precise diagnosis and therapy. © 2014 The Society for Applied Microbiology.
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least square (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
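To make the idea concrete, here is a minimal sketch of a one-tap complex RLS channel estimator in Python. The scalar RLS recursion is standard; the LMS-style rule that nudges the forgetting factor from the error sequence is an illustrative assumption, as are the step size and bounds (the paper's exact adaptation rule is not reproduced here).

import numpy as np

def rls_one_tap(pilots, observations, lam0=0.95, mu=1e-3):
    """pilots, observations: complex sequences with y[n] = H[n] * p[n] + noise."""
    H, P, lam = 0.0 + 0.0j, 1.0, lam0
    prev_e = 0.0 + 0.0j
    estimates = []
    for p, y in zip(pilots, observations):
        e = y - H * p                                  # a priori error
        k = P * np.conj(p) / (lam + P * abs(p) ** 2)   # RLS gain
        H = H + k * e                                  # channel update
        P = (1.0 - k * p) * P / lam
        # LMS-style heuristic: correlated successive errors suggest the
        # channel is changing faster than lam tracks, so decrease lam.
        lam = np.clip(lam - mu * np.real(e * np.conj(prev_e)), 0.90, 0.999)
        prev_e = e
        estimates.append(H)
    return np.array(estimates)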
NASA Astrophysics Data System (ADS)
Han, Jianguang; Wang, Yun; Yu, Changqing; Chen, Peng
2017-02-01
An approach for extracting angle-domain common-image gathers (ADCIGs) from anisotropic Gaussian beam prestack depth migration (GB-PSDM) is presented in this paper. The propagation angle is calculated in the process of migration using the real-valued traveltime information of the Gaussian beam. Based on the above, we further investigate the effects of anisotropy on GB-PSDM, where the corresponding ADCIGs are extracted to assess the quality of migration images. The test results of the VTI syncline model and the TTI thrust sheet model show that the anisotropic parameters ε and δ and the tilt angle θ have a great influence on the accuracy of the migrated image in anisotropic media, and ignoring any one of them will cause obvious imaging errors. The anisotropic GB-PSDM with the true anisotropic parameters can obtain more accurate seismic images of subsurface structures in anisotropic media.
Complex phase error and motion estimation in synthetic aperture radar imaging
NASA Astrophysics Data System (ADS)
Soumekh, M.; Yang, H.
1991-06-01
Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Nonlinear Dynamic Characteristics of the Railway Vehicle
NASA Astrophysics Data System (ADS)
Uyulan, Çağlar; Gokasan, Metin
2017-06-01
The nonlinear dynamic characteristics of a railway vehicle are examined thoroughly by applying two different wheel-rail contact models: a heuristic nonlinear friction creepage model derived using Kalker's theory, and the Polach model including dead-zone clearance. These two models are combined with the quasi-static form of the LuGre model to obtain a more realistic wheel-rail contact model. The LuGre model parameters are determined using a nonlinear optimization method whose objective is to minimize the error between the output of the Polach and Kalker models and the quasi-static LuGre model for specific operating conditions. The symmetric/asymmetric bifurcation behaviour and stable/unstable motion of the railway vehicle in the presence of nonlinearities, namely the yaw damping forces in the longitudinal suspension system, are analyzed in great detail by varying the vehicle speed. Phase portraits of the lateral displacement of the leading wheelset of the railway vehicle are drawn below and at the critical speeds, where sub-critical Hopf bifurcations take place, for the two wheel-rail contact models. Asymmetric periodic motions have been observed in the lateral displacement of the wheelset during simulation over different vehicle speed ranges. The coexistence of multiple steady states causes jumps in the amplitude of vibrations, resulting in instability problems for the railway vehicle. Using Lyapunov's indirect method, the critical hunting speeds are calculated with respect to changes in the radius of the curved track. Hunting, defined as large-amplitude oscillation of the lateral displacement of the wheelset, is described by a limit-cycle-type oscillation. The accuracy of the LuGre model adopted from Kalker's model for predicting the critical speed is higher than that of the LuGre model adopted from Polach's model. From the results of the analysis, the critical hunting speed should be confirmed by investigating track tests under various kinds of excitation.
Lee as Critical Thinker: The Example of the Gettysburg Campaign
2012-05-04
well as what should have been done if the critical thinking process had been conducted appropriately. Conclusion: Several human and military... of reasoning that make up the cognitive decision making process. The critical thinking elements of the model (Clarify Concern, Point of View... Finally, there are three remaining biases, traps, and errors that can negatively affect the critical thinking process. A confirmation trap describes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Qianlong; Blissard, Gary W.; Liu, Tong-Xian
The Autographa californica multiple nucleopolyhedrovirus GP64 is a class III viral fusion protein. Although the post-fusion structure of GP64 has been solved, its pre-fusion structure and the detailed mechanism of conformational change are unknown. In GP64, domain V is predicted to interact with two domain I segments that flank fusion loop 2. To evaluate the significance of the amino acids involved in these interactions, we examined 24 amino acid positions that represent interacting and conserved residues within domains I and V. In several cases, substitution of a single amino acid involved in a predicted interaction disrupted membrane fusion activity, but no single amino acid pair appears to be absolutely required. We identified 4 critical residues in domain V (G438, W439, T452, and T456) that are important for membrane fusion, and two residues (G438 and W439) that appear to be important for formation or stability of the pre-fusion conformation of GP64. - Highlights: • The baculovirus envelope glycoprotein GP64 is a class III viral fusion protein. • The detailed mechanism of conformational change of GP64 is unknown. • We analyzed 24 positions that might stabilize the post-fusion structure of GP64. • We identified 4 residues in domain V that were critical for membrane fusion. • Two residues are critical for formation of the pre-fusion conformation of GP64.
ERIC Educational Resources Information Center
Cartwright, Desmond S.; And Others
The task group report presented in this publication is one of a series prepared by eminent psychologists who have served as consultants in the U.S.O.E.-sponsored grant study to conduct a Critical Appraisal of the Personality-Emotions-Motivation Domain. In order to achieve the goal of identifying important problems and areas for new research and…
ERIC Educational Resources Information Center
Loehlin, John C.; And Others
The task group report presented in this publication is one of a series prepared by eminent psychologists who have served as consultants in the U.S.O.E.-sponsored grant study to conduct a Critical Appraisal of the Personality-Emotions-Motivation Domain. In order to attain the goal of identifying important problems and areas for new research and…
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
Simons, Claire L; Rivero-Arias, Oliver; Yu, Ly-Mee; Simon, Judit
2015-04-01
Missing data are a well-known and widely documented problem in cost-effectiveness analyses alongside clinical trials using individual patient-level data. Current methodological research recommends multiple imputation (MI) to deal with missing health outcome data, but there is little guidance on whether MI for multi-attribute questionnaires, such as the EQ-5D-3L, should be carried out at domain or at summary score level. In this paper, we evaluated the impact of imputing individual domains versus imputing index values to deal with missing EQ-5D-3L data using a simulation study and developed recommendations for future practice. We simulated missing data in a patient-level dataset with complete EQ-5D-3L data at one point in time from a large multinational clinical trial (n = 1,814). Different proportions of missing data were generated using a missing at random (MAR) mechanism and three different scenarios were studied. The performance of each method was evaluated using the root mean squared error and mean absolute error of the actual versus predicted EQ-5D-3L indices. In large sample sizes (n > 500) and a missing data pattern that follows mainly unit non-response, imputing domains or the index produced similar results. However, domain imputation became more accurate than index imputation when the pattern of missingness followed item non-response. For smaller sample sizes (n < 100), index imputation was more accurate. When MI models were misspecified, both domain and index imputations were inaccurate for any proportion of missing data. The decision between imputing the domains or the EQ-5D-3L index scores depends on the observed missing data pattern and the sample size available for analysis. Analysts conducting this type of exercise should also evaluate the sensitivity of the analysis to the MAR assumption and whether the imputation model is correctly specified.
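The domain-versus-index choice can be prototyped in a few lines. The sketch below contrasts the two strategies on synthetic EQ-5D-3L-like data, using scikit-learn's IterativeImputer as the imputation engine; the linear scoring tariff, the MCAR missingness (the paper uses MAR scenarios) and all constants are illustrative assumptions, not a published value set.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
domains = rng.integers(1, 4, size=(1000, 5)).astype(float)   # 5 domains, levels 1-3
index = 1.0 - 0.1 * (domains - 1).sum(axis=1)                # placeholder tariff

mask = rng.random(domains.shape) < 0.2                       # 20% missing, MCAR
dom_missing = domains.copy()
dom_missing[mask] = np.nan

# (a) impute at domain level, then score the imputed responses
dom_imp = IterativeImputer(random_state=0).fit_transform(dom_missing)
idx_from_domains = 1.0 - 0.1 * (np.clip(np.round(dom_imp), 1, 3) - 1).sum(axis=1)

# (b) score first (index missing wherever any domain is missing), then impute
idx_missing = index.copy()
idx_missing[mask.any(axis=1)] = np.nan
idx_imp = IterativeImputer(random_state=0).fit_transform(
    np.column_stack([idx_missing, dom_missing]))[:, 0]

for name, est in [("domain-level", idx_from_domains), ("index-level", idx_imp)]:
    print(name, "RMSE:", np.sqrt(np.mean((est - index) ** 2)))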
Crossover in growth laws for phase-separating binary fluids: molecular dynamics simulations.
Ahmad, Shaista; Das, Subir K; Puri, Sanjay
2012-03-01
Pattern and dynamics during phase separation in a symmetrical binary (A+B) Lennard-Jones fluid are studied via molecular dynamics simulations after quenching homogeneously mixed critical (50:50) systems to temperatures below the critical one. The morphology of the domains, rich in A or B particles, is observed to be bicontinuous. The early-time growth of the average domain size is found to be consistent with the Lifshitz-Slyozov law for diffusive domain coarsening. After a characteristic time, dependent on the temperature, we find a clear crossover to an extended viscous hydrodynamic regime where the domains grow linearly with time. Pattern formation in the present system is compared with that in solid binary mixtures, as a function of temperature. Important results for the finite-size and temperature effects on the small-wave-vector behavior of the scattering function are also presented.
Wind Power Forecasting Error Distributions: An International Comparison; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, B. M.; Lew, D.; Milligan, M.
2012-09-01
Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
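A sketch of the kind of distributional characterization the paper reports: compare an empirical forecast-error sample against a Gaussian fit via excess kurtosis and tail mass. The Laplace-distributed synthetic errors below merely stand in for real forecast data, which are not reproduced here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
errors = rng.laplace(0.0, 0.05, 10000)   # stand-in for observed forecast errors

mu, sigma = errors.mean(), errors.std()
print(f"mean={mu:.4f} std={sigma:.4f}")
# Excess kurtosis > 0 indicates heavier tails than a normal distribution
print("excess kurtosis:", stats.kurtosis(errors))
# Tail mass beyond 3 sigma, vs ~0.27% for a Gaussian; relevant to reserve setting
print("P(|e| > 3 sigma):", np.mean(np.abs(errors - mu) > 3 * sigma))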
Entanglement renormalization, quantum error correction, and bulk causality
NASA Astrophysics Data System (ADS)
Kim, Isaac H.; Kastoryano, Michael J.
2017-04-01
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of a holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
Williams, Camille K.; Tremblay, Luc; Carnahan, Heather
2016-01-01
Researchers in the domain of haptic training are now entering the long-standing debate regarding whether or not it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error minimizing, error augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance – the speed accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group’s performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance) coupled with the error augmentation group’s frequent off-target experience and rich experience of error-correction promoted information processing related to error-detection and error-correction that are essential for motor learning. PMID:28082937
Errors and conflict at the task level and the response level.
Desmet, Charlotte; Fias, Wim; Hartstra, Egbert; Brass, Marcel
2011-01-26
In the last decade, research on error and conflict processing has become one of the most influential research areas in the domain of cognitive control. There is now converging evidence that a specific part of the posterior frontomedian cortex (pFMC), the rostral cingulate zone (RCZ), is crucially involved in the processing of errors and conflict. However, error-related research has focused primarily on a specific error type, namely, response errors. The aim of the present study was to investigate whether errors on the task level rely on the same neural and functional mechanisms. Here we report a dissociation of both error types in the pFMC: whereas response errors activate the RCZ, task errors activate the dorsal frontomedian cortex. Although this last region shows an overlap in activation for task and response errors on the group level, a closer inspection of the single-subject data is more in accordance with a functional anatomical dissociation. When investigating brain areas related to conflict on the task and response levels, a clear dissociation was perceived between areas associated with response conflict and with task conflict. Overall, our data support a dissociation between response and task levels of processing in the pFMC. In addition, we provide additional evidence for a dissociation between conflict and errors both at the response level and at the task level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudiarta, I. Wayan; Angraini, Lily Maysari, E-mail: lilyangraini@unram.ac.id
We have applied the finite difference time domain (FDTD) method with the supersymmetric quantum mechanics (SUSY-QM) procedure to determine excited energies of one-dimensional quantum systems. The theoretical basis of FDTD and SUSY-QM, a numerical algorithm, and an illustrative example for a particle in a one-dimensional square-well potential are given in this paper. It is shown that the numerical results are in excellent agreement with theoretical results. Numerical errors produced by the SUSY-QM procedure were due to errors in the estimation of the superpotentials and supersymmetric partner potentials.
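The procedure is easy to prototype. In the sketch below, an imaginary-time finite-difference iteration (the diffusion-equation analogue of the FDTD scheme) finds the ground state and energy of H1; the superpotential W = -psi0'/psi0 then yields the partner potential V2 = E0 + (W^2 + W')/2, whose ground energy equals the first excited energy of H1. A harmonic oscillator (hbar = m = 1) is used for illustration instead of the paper's square well, since W then stays finite; the expected values are E0 = 0.5 and E1 = 1.5.

import numpy as np

x = np.linspace(-8, 8, 801)
dx = x[1] - x[0]
dt = 0.2 * dx**2                     # stable for the diffusion update below

def ground_state(V, steps=40000):
    psi = np.exp(-x**2)              # symmetric, positive starting guess
    for _ in range(steps):
        lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
        psi += dt * (0.5 * lap - V * psi)          # imaginary-time step
        psi /= np.sqrt(np.sum(psi**2) * dx)        # renormalize
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    E = np.sum(psi * (-0.5 * lap + V * psi)) * dx  # energy expectation
    return psi, E

V1 = 0.5 * x**2
psi0, E0 = ground_state(V1)
W = -np.gradient(np.log(psi0 + 1e-300), dx)        # superpotential from psi0
V2 = E0 + 0.5 * (W**2 + np.gradient(W, dx))        # SUSY partner potential
_, E1 = ground_state(V2)                           # ground energy of V2 = E1 of V1
print(E0, E1)                                      # approximately 0.5 and 1.5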
Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction
2009-08-20
Tangential stress optimization convergence to uniform value 1.797 as a function of eccentric anomaly E, and objective function value as a... up to the domain dimension, n_domain. Equation (3.7) expands as truncation error and round-off error with decreasing step size (FD error)... force, and E is Young's modulus. Equations (3.31) and (3.32) may be directly integrated to yield the stress and displacement solutions, which, for no...
Correlation Functions in Two-Dimensional Critical Systems with Conformal Symmetry
NASA Astrophysics Data System (ADS)
Flores, Steven Miguel
This thesis presents a study of certain conformal field theory (CFT) correlation functions that describe physical observables in conformally invariant two-dimensional critical systems. These are typically continuum limits of critical lattice models in a domain within the complex plane and with a boundary. Certain clusters, called…
Evaluation of a wave-vector-frequency-domain method for nonlinear wave propagation
Jing, Yun; Tao, Molei; Clement, Greg T.
2011-01-01
A wave-vector-frequency-domain method is presented to describe one-directional forward or backward acoustic wave propagation in a nonlinear homogeneous medium. Starting from a frequency-domain representation of the second-order nonlinear acoustic wave equation, an implicit solution for the nonlinear term is proposed by employing the Green’s function. Its approximation, which is more suitable for numerical implementation, is used. An error study is carried out to test the efficiency of the model by comparing the results with the Fubini solution. It is shown that the error grows as the propagation distance and step-size increase. However, for the specific case tested, even at a step size as large as one wavelength, sufficient accuracy for plane-wave propagation is observed. A two-dimensional steered transducer problem is explored to verify the nonlinear acoustic field directional independence of the model. A three-dimensional single-element transducer problem is solved to verify the forward model by comparing it with an existing nonlinear wave propagation code. Finally, backward-projection behavior is examined. The sound field over a plane in an absorptive medium is backward projected to the source and compared with the initial field, where good agreement is observed. PMID:21302985
NASA Astrophysics Data System (ADS)
Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) remains after MMSE-FDE and degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity techniques are effective. Cyclic delay transmit diversity (CDTD) can increase the number of equivalent paths and hence achieve a large frequency diversity gain. Space-time transmit diversity (STTD) can obtain antenna diversity gain due to space-time coding and achieve a better BER performance than CDTD. The objective of this paper is to show that the BER performance degradation of CDTD is mainly due to the residual ICI and that the introduction of ICI cancellation gives almost the same BER performance as STTD. This provides a very important result: CDTD has the advantage of providing a higher throughput than STTD. This is confirmed by computer simulation; the results show that CDTD can achieve higher throughput than STTD when ICI cancellation is introduced.
NASA Astrophysics Data System (ADS)
Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi
2017-01-01
Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow hides the low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, as it can support the modelling, construction and implementation of large-scale, complicated applications of remote sensing science. Validation of workflows is important in order to support large-scale, sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To investigate the semantic correctness of user-defined workflows, in this paper we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and its metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.
NASA Astrophysics Data System (ADS)
Sánchez-Arcilla, A.; Gracia, V.; García, M.
2014-02-01
This paper deals with the limits of hydrodynamic and morphodynamic predictions for semi-enclosed coastal domains subject to sharp gradients (in bathymetry, topography, sediment transport and coastal damages). It starts with an overview of wave prediction limits (based on satellite images) in a restricted domain such as the Mediterranean basin, followed by an in-depth analysis of the Catalan coast, one of the land boundaries of that domain. Morphodynamic modeling for such gradient regions is then illustrated with the simulation of the largest recorded storm on the Catalan coast, whose morphological impact is a key element of the overall storm impact. The driving wave and surge conditions produce a morphodynamic response that is validated against the pre- and post-storm beach states, recovered from two LIDAR images. The quality of the fit is discussed in terms of the physical processes and the suitability of the employed modeling equations. Some remarks about the role of the numerical discretization and boundary conditions are also included in the analysis. From here an assessment of errors and uncertainties is presented, with the aim of establishing the prediction limits for coastal engineering flooding and erosion analyses.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
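A minimal sketch of the partial-map idea follows: mark the Fourier entries where the estimated kernel has usable magnitude, invert only those, and leave the rest untouched. The threshold, the simple Wiener-style inverse and the fallback to the blurred spectrum are illustrative assumptions; the paper's actual method couples the partial map with wavelet- and learning-based priors inside an E-M loop.

import numpy as np

def partial_deconvolve(blurred, kernel_est, tau=0.05, eps=1e-2):
    """blurred: 2-D image; kernel_est: (possibly inaccurate) blur kernel."""
    K = np.fft.fft2(kernel_est, s=blurred.shape)
    B = np.fft.fft2(blurred)
    reliable = np.abs(K) > tau * np.abs(K).max()   # the "partial map"
    X = B.copy()
    # Wiener-style inversion only on reliable entries; elsewhere the
    # blurred spectrum is kept rather than amplified by an unstable inverse.
    X[reliable] = B[reliable] * np.conj(K[reliable]) / (np.abs(K[reliable])**2 + eps)
    return np.real(np.fft.ifft2(X))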
Jiang, Chenghui; Whitehill, Tara L
2014-04-01
Speech errors associated with cleft palate are well established for English and several other Indo-European languages. Few articles describing the speech of Putonghua (standard Mandarin Chinese) speakers with cleft palate have been published in English-language journals. Although methodological guidelines have been published for the perceptual speech evaluation of individuals with cleft palate, there has been no critical review of methodological issues in studies of Putonghua speakers with cleft palate. A literature search was conducted to identify relevant studies published over the past 30 years in Chinese-language journals. Only studies incorporating perceptual analysis of speech were included. Thirty-seven articles that met the inclusion criteria were analyzed and coded on a number of methodological variables. Reliability was established by having all variables recoded for all studies. This critical review identified many methodological issues. These design flaws make it difficult to draw reliable conclusions about characteristic speech errors in this group of speakers. Specific recommendations are made to improve the reliability and validity of future studies, as well as to facilitate cross-center comparisons.
GPS, BDS and Galileo ionospheric correction models: An evaluation in range delay and position domain
NASA Astrophysics Data System (ADS)
Wang, Ningbo; Li, Zishen; Li, Min; Yuan, Yunbin; Huo, Xingliang
2018-05-01
The performance of the GPS Klobuchar (GPSKlob), BDS Klobuchar (BDSKlob) and NeQuick Galileo (NeQuickG) ionospheric correction models is evaluated in the range delay and position domains over China. The post-processed Klobuchar-style (CODKlob) coefficients provided by the Center for Orbit Determination in Europe (CODE) and our own fitted NeQuick coefficients (NeQuickC) are also included for comparison. In the range delay domain, BDS total electron contents (TEC) derived from 20 international GNSS Monitoring and Assessment System (iGMAS) stations and GPS TEC obtained from 35 Crust Movement Observation Network of China (CMONC) stations are used as references. Compared to BDS TEC over the short period (doy 010-020, 2015), GPSKlob, BDSKlob and NeQuickG can correct 58.4, 66.7 and 54.7% of the ionospheric delay. Compared to GPS TEC over the long period (doy 001-180, 2015), the three ionospheric models can mitigate the ionospheric delay by 64.8, 65.4 and 68.1%, respectively. For the two comparison cases, CODKlob shows the worst performance, reducing only 57.9% of the ionospheric range errors. NeQuickC exhibits the best performance, outperforming GPSKlob, BDSKlob and NeQuickG by 6.7, 2.1 and 6.9%, respectively. In the position domain, single-frequency standard point positioning (SPP) was conducted at the selected 35 CMONC sites using the GPS C/A pseudorange with and without ionospheric corrections. The vertical position error of the uncorrected case drops significantly from 10.3 m to 4.8, 4.6, 4.4 and 4.2 m for GPSKlob, CODKlob, BDSKlob and NeQuickG; however, the horizontal position error (3.2 m) merely decreases to 3.1, 2.7, 2.4 and 2.3 m, respectively. NeQuickG outperforms GPSKlob and BDSKlob by 5.8 and 1.9% in the vertical component, and by 25.0 and 3.2% in the horizontal component.
Beanland, Vanessa; Sellbom, Martin; Johnson, Alexandria K
2014-11-01
Personality traits are meaningful predictors of many significant life outcomes, including mortality. Several studies have investigated the relationship between specific personality traits and driving behaviours, e.g., aggression and speeding, in an attempt to identify traits associated with elevated crash risk. These studies, while valuable, are limited in that they examine only a narrow range of personality constructs and thus do not necessarily reveal which traits in constellation best predict aberrant driving behaviours. The primary aim of this study was to use a comprehensive measure of personality to investigate which personality traits are most predictive of four types of aberrant driving behaviour (Aggressive Violations, Ordinary Violations, Errors, Lapses) as indicated by the Manchester Driver Behaviour Questionnaire (DBQ). We recruited 285 young adults (67% female) from a university in the southeastern US. They completed self-report questionnaires including the DBQ and the Personality Inventory for DSM-5, which indexes 5 broad personality domains (Antagonism, Detachment, Disinhibition, Negative Affectivity, Psychoticism) and 25 specific trait facets. Confirmatory factor analysis showed adequate evidence for the DBQ internal structure. Structural regression analyses revealed that the personality domains of Antagonism and Negative Affectivity best predicted both Aggressive Violations and Ordinary Violations, whereas the best predictors of both Errors and Lapses were Negative Affectivity, Disinhibition and to a lesser extent Antagonism. A more nuanced analysis of trait facets revealed that Hostility was the best predictor of Aggressive Violations; Risk-taking and Hostility of Ordinary Violations; Irresponsibility, Separation Insecurity and Attention Seeking of Errors; and Perseveration and Irresponsibility of Lapses. Copyright © 2014 Elsevier Ltd. All rights reserved.
Discovering body site and severity modifiers in clinical texts
Dligach, Dmitriy; Bethard, Steven; Becker, Lee; Miller, Timothy; Savova, Guergana K
2014-01-01
Objective: To research computational methods for discovering body site and severity modifiers in clinical texts. Methods: We cast the task of discovering body site and severity modifiers as a relation extraction problem in the context of a supervised machine learning framework. We utilize rich linguistic features to represent the pairs of relation arguments and delegate the decision about the nature of the relationship between them to a support vector machine model. We evaluate our models using two corpora that annotate body site and severity modifiers. We also compare the model performance to a number of rule-based baselines. We conduct cross-domain portability experiments. In addition, we carry out feature ablation experiments to determine the contribution of various feature groups. Finally, we perform error analysis and report the sources of errors. Results: The performance of our method for discovering body site modifiers achieves F1 of 0.740–0.908 and our method for discovering severity modifiers achieves F1 of 0.905–0.929. Discussion: Results indicate that both methods perform well on both in-domain and out-domain data, approaching the performance of human annotators. The most salient features are token and named entity features, although syntactic dependency features also contribute to the overall performance. The dominant sources of errors are infrequent patterns in the data and the inability of the system to discern deeper semantic structures. Conclusions: We investigated computational methods for discovering body site and severity modifiers in clinical texts. Our best system is released open source as part of the clinical Text Analysis and Knowledge Extraction System (cTAKES). PMID:24091648
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominant error sources for high-accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for the binary phase shift keying (BPSK) signal are not optimal. Currently, non-parametric and parametric approaches have been studied specifically for multipath mitigation in BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
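The segmentation-and-averaging step can be sketched in a few lines: split the received and locally generated reference signals into segments, form a per-segment frequency-domain channel estimate, and average. The segment length, the regularized ratio estimator and the use of a known reference replica are illustrative assumptions rather than the paper's exact processing chain.

import numpy as np

def averaged_channel_estimate(received, reference, seg_len=1024):
    """Estimate the multipath channel transfer function by averaging
    per-segment frequency-domain ratios of received to reference signal."""
    n_seg = len(received) // seg_len
    acc = np.zeros(seg_len, dtype=complex)
    for i in range(n_seg):
        s = slice(i * seg_len, (i + 1) * seg_len)
        R = np.fft.fft(received[s])
        P = np.fft.fft(reference[s])
        acc += R * np.conj(P) / (np.abs(P)**2 + 1e-12)   # per-segment estimate
    return acc / n_seg   # averaging reduces the noise variance by ~n_seg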
Discovering body site and severity modifiers in clinical texts.
Dligach, Dmitriy; Bethard, Steven; Becker, Lee; Miller, Timothy; Savova, Guergana K
2014-01-01
To research computational methods for discovering body site and severity modifiers in clinical texts. We cast the task of discovering body site and severity modifiers as a relation extraction problem in the context of a supervised machine learning framework. We utilize rich linguistic features to represent the pairs of relation arguments and delegate the decision about the nature of the relationship between them to a support vector machine model. We evaluate our models using two corpora that annotate body site and severity modifiers. We also compare the model performance to a number of rule-based baselines. We conduct cross-domain portability experiments. In addition, we carry out feature ablation experiments to determine the contribution of various feature groups. Finally, we perform error analysis and report the sources of errors. The performance of our method for discovering body site modifiers achieves F1 of 0.740-0.908 and our method for discovering severity modifiers achieves F1 of 0.905-0.929. Results indicate that both methods perform well on both in-domain and out-domain data, approaching the performance of human annotators. The most salient features are token and named entity features, although syntactic dependency features also contribute to the overall performance. The dominant sources of errors are infrequent patterns in the data and inability of the system to discern deeper semantic structures. We investigated computational methods for discovering body site and severity modifiers in clinical texts. Our best system is released open source as part of the clinical Text Analysis and Knowledge Extraction System (cTAKES).
Understanding human management of automation errors
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042
Understanding human management of automation errors.
McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D
2014-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.
Prediction of human errors by maladaptive changes in event-related brain networks
Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus
2008-01-01
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123
Error disclosure: a new domain for safety culture assessment.
Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J
2012-07-01
To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1996-01-01
We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing an objective function over the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. We initially focus on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
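A 1-D toy of the distortion idea is sketched below: find the displacement that best aligns the forecast to the analysis, then split the error into a phase part and a residual. The real scheme solves for smooth 2-D displacement and bias fields expanded in spherical harmonics; this sketch treats only a single constant shift, and the bounds are arbitrary assumptions.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.ndimage import shift as ndshift

def decompose_error(forecast, analysis):
    """Return the best-fit displacement (grid units) and the residual error."""
    cost = lambda d: np.mean((ndshift(forecast, d, mode='nearest') - analysis)**2)
    best = minimize_scalar(cost, bounds=(-20, 20), method='bounded')
    residual = ndshift(forecast, best.x, mode='nearest') - analysis
    return best.x, residual   # phase (displacement) part + remainder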
NASA Astrophysics Data System (ADS)
Li, Jiao; Hu, Guijun; Gong, Caili; Li, Li
2018-02-01
In this paper, we propose a hybrid time-frequency domain sign-sign joint decision multimodulus algorithm (Hybrid-SJDMMA) for mode demultiplexing in a 6 × 6 mode division multiplexing (MDM) system with high-order QAM modulation. The equalization performance of Hybrid-SJDMMA was evaluated and compared with the frequency-domain multimodulus algorithm (FD-MMA) and the hybrid time-frequency domain sign-sign multimodulus algorithm (Hybrid-SMMA). Simulation results revealed that Hybrid-SJDMMA exhibits significantly lower computational complexity than FD-MMA, while its convergence speed is similar to that of FD-MMA. Additionally, the bit-error-rate performance of Hybrid-SJDMMA was clearly better than that of FD-MMA and Hybrid-SMMA for 16 QAM and 64 QAM.
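For readers unfamiliar with the multimodulus family, a minimal sketch may help. The fragment below shows a generic sign-sign multimodulus tap update, not the authors' Hybrid-SJDMMA (which operates jointly in the time and frequency domains and adds a joint-decision step); the function name, signature and constants are hypothetical.

```python
import numpy as np

def ss_mma_update(w, x, mu, R):
    """One generic sign-sign multimodulus tap update (illustrative).
    w: complex equalizer taps, x: received samples, R: squared modulus."""
    z = np.vdot(w, x)  # equalizer output z = w^H x
    # multimodulus error, evaluated separately on the I and Q rails
    e = z.real * (z.real**2 - R) + 1j * (z.imag * (z.imag**2 - R))
    csign = lambda v: np.sign(np.real(v)) + 1j * np.sign(np.imag(v))
    # sign-sign step: only the signs of the error and the regressor are
    # kept, which removes most multiplications (the complexity saving
    # that sign-sign variants exploit)
    return w - mu * csign(e) * np.conj(csign(x))

# toy usage with random data (values are arbitrary)
rng = np.random.default_rng(0)
w = ss_mma_update(rng.normal(size=4) + 1j * rng.normal(size=4),
                  rng.normal(size=4) + 1j * rng.normal(size=4),
                  mu=1e-3, R=2.0)
```

In a joint-decision variant, the modulus R would additionally be selected per symbol as the one nearest the current decision, which is what adapts the scheme to high-order QAM constellations.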
A Review of System Identification Methods Applied to Aircraft
NASA Technical Reports Server (NTRS)
Klein, V.
1983-01-01
Airplane identification, equation error method, maximum likelihood method, parameter estimation in frequency domain, extended Kalman filter, aircraft equations of motion, aerodynamic model equations, criteria for the selection of a parsimonious model, and online aircraft identification are addressed.
MESA: Message-Based System Analysis Using Runtime Verification
NASA Technical Reports Server (NTRS)
Shafiei, Nastaran; Tkachuk, Oksana; Mehlitz, Peter
2017-01-01
In this paper, we present a novel approach and framework for runtime verification of large, safety-critical messaging systems. This work was motivated by verifying the System Wide Information Management (SWIM) project of the Federal Aviation Administration (FAA). SWIM provides live air traffic, site and weather data streams for the whole National Airspace System (NAS), which can easily amount to several hundred messages per second. Such safety-critical systems cannot be instrumented; therefore, verification and monitoring have to happen using a nonintrusive approach, by connecting to a variety of network interfaces. Due to the large number of potential properties to check, the verification framework needs to support efficient formulation of properties with a suitable Domain Specific Language (DSL). Our approach is to utilize a distributed system that is geared towards connectivity and scalability and to interface it at the message-queue level to a powerful verification engine. We implemented our approach in the tool called MESA: Message-Based System Analysis, which leverages the open source projects RACE (Runtime for Airspace Concept Evaluation) and TraceContract. RACE is a platform for instantiating and running highly concurrent and distributed systems and enables connectivity to SWIM and scalability. TraceContract is a runtime verification tool that allows for checking traces against properties specified in a powerful DSL. We applied our approach to verify a SWIM service against several requirements. We found errors such as duplicate and out-of-order messages.
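MESA formulates such properties in TraceContract's Scala DSL; purely as a language-neutral illustration of the two violation types reported above (duplicate and out-of-order messages), here is a minimal Python trace check over a hypothetical (msg_id, timestamp) message schema.

```python
def check_trace(messages):
    """Flag duplicate and out-of-order messages in a timestamped trace.
    `messages` is an iterable of (msg_id, timestamp) pairs (illustrative
    schema; real SWIM messages carry much richer metadata)."""
    seen = set()
    last_ts = None
    violations = []
    for msg_id, ts in messages:
        if msg_id in seen:
            violations.append(("duplicate", msg_id))
        seen.add(msg_id)
        if last_ts is not None and ts < last_ts:
            violations.append(("out-of-order", msg_id))
        last_ts = ts
    return violations

# example: message 2 arrives twice, message 3 arrives late
print(check_trace([(1, 0.0), (2, 1.0), (2, 1.1), (4, 3.0), (3, 2.0)]))
```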
Automatically Finding the Control Variables for Complex System Behavior
NASA Technical Reports Server (NTRS)
Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen
2010-01-01
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, treatment learners find the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the factors most likely to cause a mission-critical failure. The goal of this research is to comparatively assess treatment learning against state-of-the-art numerical optimization techniques. To achieve this, this paper benchmarks the TAR3 and TAR4.1 treatment learners against optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. The results clearly show that treatment learning is both faster and more accurate than traditional optimization methods.
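The core scoring idea behind treatment learning can be sketched briefly. The fragment below is an illustrative toy, not TAR3 or TAR4.1 (which use a weighted class-lift measure and a smarter search): it scores a candidate treatment, a set of attribute constraints, by how much imposing it shifts the class distribution toward a preferred class.

```python
def lift(rows, classes, treatment, target="pass"):
    """Score a treatment (dict of attribute -> required value) by how
    strongly imposing it shifts the class distribution toward `target`.
    Illustrative scoring only; TAR3 uses a weighted-lift measure."""
    subset = [c for r, c in zip(rows, classes)
              if all(r.get(k) == v for k, v in treatment.items())]
    if not subset:
        return 0.0
    baseline = classes.count(target) / len(classes)
    treated = subset.count(target) / len(subset)
    return treated / baseline if baseline else 0.0

rows = [{"ctrl": "a"}, {"ctrl": "a"}, {"ctrl": "b"}, {"ctrl": "b"}]
classes = ["pass", "pass", "fail", "fail"]
print(lift(rows, classes, {"ctrl": "a"}))  # -> 2.0: ctrl=a doubles pass rate
```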
The HDAC complex and cytoskeleton.
Kovacs, Jeffery J; Hubbert, Charlotte; Yao, Tso-Pang
2004-01-01
HDAC6 is a cytoplasmic deacetylase that dynamically associates with the microtubule and actin cytoskeletons. HDAC6 regulates growth factor-induced chemotaxis by its unique deacetylase activity towards microtubules or other substrates. Here we describe a non-catalytic structural domain that is essential for HDAC6 function and places HDAC6 as a critical mediator linking the acetylation and ubiquitination network. This evolutionarily conserved motif, termed the BUZ domain, has features of a zinc finger and binds both mono- and polyubiquitinated proteins. Furthermore, the BUZ domain promotes HDAC6 mono-ubiquitination. These results establish the BUZ domain, in addition to the UIM and CUE domains, as a novel motif that both binds ubiquitin and mediates mono-ubiquitination. Importantly, the BUZ domain is essential for HDAC6 to promote chemotaxis, indicating that communication with the ubiquitin network is critical for proper HDAC6 function. The unique presence of the UIM and CUE domains in proteins involved in endocytic trafficking suggests that HDAC6 might also regulate vesicle transport and protein degradation. Indeed, we have found that HDAC6 is actively transported and concentrated in vesicular compartments. We propose that an integration of reversible acetylation and ubiquitination by HDAC6 may be a novel component in regulating the cytoskeleton, vesicle transport and protein degradation.
Meyer-Massetti, Carla; Krummenacher, Evelyne; Hedinger-Grogg, Barbara; Luterbacher, Stephan; Hersberger, Kurt E
2016-09-01
Background: While drug-related problems are among the most frequent adverse events in health care, little is known in the current literature about their type and prevalence in home care. The use of a Critical Incident Reporting System (CIRS), known as an economic and efficient tool for recording medication errors for subsequent analysis, is widely implemented in inpatient care but less established in ambulatory care. Recommendations on a possible format are scarce. A manual CIRS was developed based on the literature and subsequently piloted and implemented in a Swiss home care organization. Aim: The aim of this work was to implement a critical incident reporting system specifically for medication safety in home care. Results: The final CIRS form was well accepted among staff. Requiring limited resources, it allowed preliminary identification and trending of medication errors in home care. The most frequent error reports addressed medication preparation at the patients' home, encompassing the following errors: omission (30 %), wrong dose (17.5 %) and wrong time (15 %). The most frequent underlying causes were related to working conditions (37.9 %), lack of attention (68.2 %), time pressure (22.7 %) and interruptions by patients (9.1 %). Conclusions: A manual CIRS allowed efficient data collection and subsequent analysis of medication errors in order to plan future interventions for the improvement of medication safety. The development of an electronic CIRS would reduce the time spent on data collection and analysis. In addition, it would favour the development of a national CIRS network among home care institutions.
Evaluating Student Achievement in Discipline-Based Art Programs.
ERIC Educational Resources Information Center
Day, Michael D.
1985-01-01
The discipline-based view of art education requires that students progress in all of the four domains of art learning: art history, art criticism, aesthetic appreciation, and creative production. Evaluation methods in each of these domains are discussed. (RM)
Direct Numerical Simulations of an Unpremixed Turbulent Jet Flame
1988-03-01
shear layer. As the vortices reach the outflow boundary, the zero-gradient condition seems to allow them to travel out of the computational domain. As mentioned in the previous section, errors are associated with this boundary condition.
Error Estimation and Compensation in Reduced Dynamic Models of Large Space Structures
1987-04-23
[Front-matter and figure-list residue; recoverable figure titles: Comparison of Various Reduced Models; Driving Point Mobilities, Wing Tip (Z55); Driving Point Mobilities, Wing Root Trailing Edge (Z19); AMI Improvement; Frequency Domain Solution, Driving Point Mobilities, Wing Tip (Z55).]
Bounded Error Schemes for the Wave Equation on Complex Domains
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi; Yefet, Amir
1998-01-01
This paper considers the application of the method of boundary penalty terms ("SAT") to the numerical solution of the wave equation on complex shapes with Dirichlet boundary conditions. A theory is developed, in a semi-discrete setting, that allows the use of a Cartesian grid on complex geometries, yet maintains the order of accuracy with only a linear temporal error-bound. A numerical example, involving the solution of Maxwell's equations inside a 2-D circular wave-guide demonstrates the efficacy of this method in comparison to others (e.g. the staggered Yee scheme) - we achieve a decrease of two orders of magnitude in the level of the L2-error.
Optimal Multi-Type Sensor Placement for Structural Identification by Static-Load Testing
Papadopoulou, Maria; Vernay, Didier; Smith, Ian F. C.
2017-01-01
Assessing ageing infrastructure is a critical challenge for civil engineers due to the difficulty in the estimation and integration of uncertainties in structural models. Field measurements are increasingly used to improve knowledge of the real behavior of a structure; this activity is called structural identification. Error-domain model falsification (EDMF) is an easy-to-use model-based structural-identification methodology which robustly accommodates systematic uncertainties originating from sources such as boundary conditions, numerical modelling and model fidelity, as well as aleatory uncertainties from sources such as measurement error and material parameter-value estimations. In most practical applications of structural identification, sensors are placed using engineering judgment and experience. However, since sensor placement is fundamental to the success of structural identification, a more rational and systematic method is justified. This study presents a measurement system design methodology to identify the best sensor locations and sensor types using information from static-load tests. More specifically, three static-load tests were studied for the sensor system design using three types of sensors for a performance evaluation of a full-scale bridge in Singapore. Several sensor placement strategies are compared using joint entropy as an information-gain metric. A modified version of the hierarchical algorithm for sensor placement is proposed to take into account mutual information between load tests. It is shown that a carefully configured measurement strategy that includes multiple sensor types and several load tests maximizes information gain. PMID:29240684
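As a rough illustration of entropy-driven sensor selection (a sketch under simplifying assumptions, not the paper's EDMF formulation, which computes joint entropy over model-falsification prediction distributions and corrects for mutual information between load tests), a greedy loop can pick the sensor subset whose Gaussian joint entropy, proportional to the log-determinant of the restricted covariance, grows fastest.

```python
import numpy as np

def greedy_sensor_selection(cov, k):
    """Greedily pick k sensor indices maximizing joint entropy.
    For a Gaussian model, joint entropy grows with log det of the
    covariance restricted to the chosen sensors (illustrative metric)."""
    chosen, remaining = [], list(range(cov.shape[0]))
    for _ in range(k):
        def gain(c):
            idx = chosen + [c]
            _, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
            return logdet
        best = max(remaining, key=gain)
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
cov = A @ A.T + 1e-3 * np.eye(6)   # synthetic sensor-response covariance
print(greedy_sensor_selection(cov, 3))
```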
Error-Related Electrocortical Responses in 10-Year-Old Children and Young Adults
ERIC Educational Resources Information Center
Santesso, Diane L.; Segalowitz, Sidney J.; Schmidt, Louis A.
2006-01-01
Recent anatomical and electrophysiological evidence suggests that the anterior cingulate cortex (ACC) is relatively late to mature. This brain region appears to be critical for monitoring, evaluating, and adjusting ongoing behaviors. This monitoring elicits characteristic ERP components including the error-related negativity (ERN), error…
Minimizing Accidents and Risks in High Adventure Outdoor Pursuits.
ERIC Educational Resources Information Center
Meier, Joel
The fundamental dilemma in adventure programming is eliminating unreasonable risks to participants without also reducing levels of excitement, challenge, and stress. Most accidents are caused by a combination of unsafe conditions, unsafe acts, and errors in judgment. The best and only way to minimize critical human error in adventure programs is…
NASA Astrophysics Data System (ADS)
Nunez, F.; Romero, A.; Clua, J.; Mas, J.; Tomas, A.; Catalan, A.; Castellsaguer, J.
2005-08-01
MARES (Muscle Atrophy Research and Exercise System) is a computerized ergometer for neuromuscular research, to be flown and installed onboard the International Space Station in 2007. The validity of the data acquired depends on controlling and reducing all significant error sources. One of them is the misalignment of the joint rotation axis with respect to the motor axis. The error induced on the measurements is proportional to the misalignment between the two axes; the restraint system's performance is therefore critical [1]. The MARES HRS (Human Restraint System) assures alignment within an acceptable range while performing the exercise (elbow movement: 13.94 mm ± 5.45; knee movement: 22.36 mm ± 6.06) and reproducibility of human positioning (elbow movement: 2.82 mm ± 1.56; knee movement: 7.45 mm ± 4.8). These results allow limiting the measurement errors induced by misalignment.
Comparison of medication safety effectiveness among nine critical access hospitals.
Cochran, Gary L; Haynatzki, Gleb
2013-12-15
The rates of medication errors across three different medication dispensing and administration systems frequently used in critical access hospitals (CAHs) were analyzed. Nine CAHs agreed to participate in this prospective study and were assigned to one of three groups based on similarities in their medication-use processes: (1) less than 10 hours per week of onsite pharmacy support and no bedside barcode system, (2) onsite pharmacy support for 40 hours per week and no bedside barcode system, and (3) onsite pharmacy support for 40 or more hours per week with a bedside barcode system. Errors were characterized by severity, phase of origination, type, and cause. Characteristics of the medication being administered and a number of best practices were collected for each medication pass. Logistic regression was used to identify significant predictors of errors. A total of 3103 medication passes were observed. More medication errors originated in hospitals that had onsite pharmacy support for less than 10 hours per week and no bedside barcode system than in other types of hospitals. A bedside barcode system had the greatest impact on lowering the odds of an error reaching the patient. Wrong dose and omission were common error types. Human factors and communication were the two most frequently identified causes of error for all three systems. Medication error rates were lower in CAHs with 40 or more hours per week of onsite pharmacy support with or without a bedside barcode system compared with hospitals with less than 10 hours per week of pharmacy support and no bedside barcode system.
Cochran, Gary L; Barrett, Ryan S; Horn, Susan D
2016-08-01
The role of pharmacist transcription, onsite pharmacist dispensing, use of automated dispensing cabinets (ADCs), nurse-nurse double checks, or barcode-assisted medication administration (BCMA) in reducing medication error rates in critical access hospitals (CAHs) was evaluated. Investigators used the practice-based evidence methodology to identify predictors of medication errors in 12 Nebraska CAHs. Detailed information about each medication administered was recorded through direct observation. Errors were identified by comparing the observed medication administered with the physician's order. Chi-square analysis and Fisher's exact test were used to measure differences between groups of medication-dispensing procedures. Nurses observed 6497 medications being administered to 1374 patients. The overall error rate was 1.2%. The transcription error rates for orders transcribed by an onsite pharmacist were slightly lower than for orders transcribed by a telepharmacy service (0.10% and 0.33%, respectively). Fewer dispensing errors occurred when medications were dispensed by an onsite pharmacist versus any other method of medication acquisition (0.10% versus 0.44%, p = 0.0085). The rates of dispensing errors for medications that were retrieved from a single-cell ADC (0.19%), a multicell ADC (0.45%), or a drug closet or general supply (0.77%) did not differ significantly. BCMA was associated with a higher proportion of dispensing and administration errors intercepted before reaching the patient (66.7%) compared with either manual double checks (10%) or no BCMA or double check (30.4%) of the medication before administration (p = 0.0167). Onsite pharmacist dispensing and BCMA were associated with fewer medication errors and are important components of a medication safety strategy in CAHs. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Clinical Reasoning in Athletic Training Education: Modeling Expert Thinking
ERIC Educational Resources Information Center
Geisler, Paul R.; Lazenby, Todd W.
2009-01-01
Objective: To address the need for a more definitive approach to critical thinking during athletic training educational experiences by introducing the clinical reasoning model for critical thinking. Background: Educators are aware of the need to teach students how to think critically. The multiple domains of athletic training are comprehensive and…
New Developments in Error Detection and Correction Strategies for Critical Applications
NASA Technical Reports Server (NTRS)
Berg, Melanie; Label, Ken
2017-01-01
The presentation will cover a variety of mitigation strategies that were developed for critical applications. Emphasis is placed on the strengths and weaknesses of each mitigation technique as it pertains to different field-programmable gate array (FPGA) device types.
[Failure modes and effects analysis in the prescription, validation and dispensing process].
Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T
2012-01-01
To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages of the process from prescription to dispensing, identifying the most critical errors and establishing the potential failure modes that could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated and failure modes scoring ≥ 8 were chosen; failure modes with a Severity Index of 4 were selected regardless of their Hazard Score. Corrective measures and an implementation plan were proposed. A flow diagram describing the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure and strategy to achieve it. Failure modes chosen: prescription on the nurse's form; progress or treatment order (paper); prescription to the incorrect patient; transcription error by nursing staff and pharmacist; error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we were able to identify critical aspects, the stages in which errors may occur, and their causes. It allowed us to analyse the effects on the safety of the process and to establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.
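The selection rule described above is easy to make concrete. A minimal sketch, with invented severity and probability values on the usual 1-4 scales: compute Hazard Score = severity x probability, keep modes scoring at least 8, and keep severity-4 modes regardless of score.

```python
failure_modes = [
    # (description, severity 1-4, probability 1-4), illustrative values
    ("prescription on nurse's form",      3, 3),
    ("prescription to incorrect patient", 4, 1),
    ("transcription error",               2, 3),
    ("trolley preparation error",         2, 2),
]

selected = [(d, s, p) for d, s, p in failure_modes
            if s * p >= 8 or s == 4]   # HS >= 8, or severity 4 regardless
for d, s, p in selected:
    print(f"{d}: hazard score {s * p}")
```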
Error Recovery in the Time-Triggered Paradigm with FTT-CAN.
Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís
2018-01-11
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
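The dimensioning idea, sizing the retransmission server against a Poisson bound on fault arrivals, can be sketched as follows (a simplified illustration, not the paper's full analysis, which also verifies schedulability under the induced direct and indirect interference; all rates and targets are invented).

```python
from math import exp, factorial

def poisson_tail(k, lam):
    """P(N > k) for N ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

def size_server(fault_rate, window, target_miss_prob):
    """Smallest number of retransmission slots per window such that the
    probability of more faults than slots stays below the target."""
    lam = fault_rate * window
    k = 0
    while poisson_tail(k, lam) > target_miss_prob:
        k += 1
    return k

# e.g. 1e-3 faults/ms, 100 ms window, 1e-9 per-window miss probability
print(size_server(1e-3, 100.0, 1e-9))
```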
Using nurses and office staff to report prescribing errors in primary care.
Kennedy, Amanda G; Littenberg, Benjamin; Senders, John W
2008-08-01
To implement a prescribing-error reporting system in primary care offices and analyze the reports. Descriptive analysis of a voluntary prescribing-error reporting system. Seven primary care offices in Vermont, USA. One hundred and three prescribers, managers, nurses and office staff. Nurses and office staff were asked to report all communications with community pharmacists regarding prescription problems. All reports were classified by severity category, setting, error mode, prescription domain and error-producing conditions. All practices submitted reports, although reporting decreased by 3.6 reports per month (95% CI, -2.7 to -4.4, P<0.001, by linear regression analysis). Two hundred and sixteen reports were submitted. Nearly 90% (142/165) of errors were severity Category B (errors that did not reach the patient) according to the National Coordinating Council for Medication Error Reporting and Prevention Index for Categorizing Medication Errors. Nineteen errors reached the patient without causing harm (Category C), and 4 errors caused temporary harm requiring intervention (Category E). Errors involving strength were found in 30% of reports, including 23 prescriptions written for strengths not commercially available. Antidepressants, narcotics and antihypertensives were the most frequently reported drug classes. Participants completed an exit survey with a response rate of 84.5% (87/103). Nearly 90% (77/87) of respondents were willing to continue reporting after the study ended; however, none of the participants currently submit reports. Nurses and office staff are a valuable resource for reporting prescribing errors. However, without ongoing reminders, the reporting system is not sustainable.
Hendry, Kathryn; Ownsworth, Tamara; Beadle, Elizabeth; Chevignard, Mathilde P.; Fleming, Jennifer; Griffin, Janelle; Shum, David H. K.
2016-01-01
People with severe traumatic brain injury (TBI) often make errors on everyday tasks that compromise their safety and independence. Such errors potentially arise from the breakdown or failure of multiple cognitive processes. This study aimed to investigate cognitive deficits underlying error behavior on a home-based version of the Cooking Task (HBCT) following TBI. Participants included 45 adults (9 females, 36 males) with severe TBI aged 18–64 years (M = 37.91, SD = 13.43). Participants were administered the HBCT in their home kitchens, with audiovisual recordings taken to enable scoring of total errors and error subtypes (Omissions, Additions, Estimations, Substitutions, Commentary/Questions, Dangerous Behavior, Goal Achievement). Participants also completed a battery of neuropsychological tests, including the Trail Making Test, Hopkins Verbal Learning Test-Revised, Digit Span, Zoo Map test, Modified Stroop Test, and Hayling Sentence Completion Test. After controlling for cooking experience, greater Omissions and Estimation errors, lack of goal achievement, and longer completion time were significantly associated with poorer attention, memory, and executive functioning. These findings indicate that errors on naturalistic tasks arise from deficits in multiple cognitive domains. Assessment of error behavior in a real life setting provides insight into individuals' functional abilities which can guide rehabilitation planning and lifestyle support. PMID:27790099
Comprehensive analysis of a medication dosing error related to CPOE.
Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L
2005-01-01
This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.
Theory of mind in schizophrenia: error types and associations with symptoms.
Fretland, Ragnhild A; Andersson, Stein; Sundet, Kjetil; Andreassen, Ole A; Melle, Ingrid; Vaskinn, Anja
2015-03-01
Social cognition is an important determinant of functioning in schizophrenia. However, how social cognition relates to the clinical symptoms of schizophrenia is still unclear. The aim of this study was to explore the relationship between a social cognition domain, Theory of Mind (ToM), and the clinical symptoms of schizophrenia. Specifically, we investigated the associations between three ToM error types, 1) "overmentalizing", 2) "reduced ToM" and 3) "no ToM", and positive, negative and disorganized symptoms. Fifty-two participants with a diagnosis of schizophrenia or schizoaffective disorder were assessed with the Movie for the Assessment of Social Cognition (MASC), a video-based ToM measure. An empirically validated five-factor model of the Positive and Negative Syndrome Scale (PANSS) was used to assess clinical symptoms. There was a significant, small-moderate association between overmentalizing and positive symptoms (rho=.28, p=.04). Disorganized symptoms correlated at a trend level with "reduced ToM" (rho=.27, p=.05). There were no other significant correlations between ToM impairments and symptom levels. Positive/disorganized symptoms did not contribute significantly to explaining total ToM performance, whereas IQ did (B=.37, p=.01). Within the undermentalizing domain, participants made more "reduced ToM" errors than "no ToM" errors. Overmentalizing was associated with positive symptoms. The undermentalizing error types were unrelated to symptoms, but "reduced ToM" was somewhat associated with disorganization. The higher number of "reduced ToM" responses suggests that schizophrenia is characterized by accuracy problems rather than a fundamental lack of a mental-state concept. The findings call for the use of more sensitive measures when investigating ToM in schizophrenia, to avoid the "right/wrong ToM" dichotomy. Copyright © 2015 Elsevier B.V. All rights reserved.
Electric Field Induced Interfacial Instabilities
NASA Technical Reports Server (NTRS)
Kusner, Robert E.; Min, Kyung Yang; Wu, Xiao-Lun; Onuki, Akira
1996-01-01
The study of the interface in a charge-free, nonpolar, critical and near-critical binary fluid in the presence of an externally applied electric field is presented. At sufficiently large fields, the interface between the two phases of the binary fluid should become unstable and exhibit an undulation with a predefined wavelength on the order of the capillary length. As the critical point is approached, this wavelength is reduced, potentially approaching length-scales such as the correlation length or critical nucleation radius. At this point the critical properties of the system may be affected. In zero gravity, the interface is unstable at all long wavelengths in the presence of a field applied across it. It is conjectured that this will cause the binary fluid to break up into domains small enough to be outside the instability condition. The resulting pattern formation, and the effects on the critical properties as the domains approach the correlation length are of acute interest. With direct observation, laser light scattering, and interferometry, the phenomena can be probed to gain further understanding of interfacial instabilities and the pattern formation which results, and dimensional crossover in critical systems as the critical fluctuations in a particular direction are suppressed by external forces.
An Improved Neutron Transport Algorithm for HZETRN2006
NASA Astrophysics Data System (ADS)
Slaba, Tony
NASA's new space exploration initiative includes plans for long-term human presence in space, thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points would render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of these efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.
NASA Astrophysics Data System (ADS)
Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.
2017-10-01
We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.
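The spatial-frequency error-budget mechanics reduce to integrating residual-phase PSDs over the relevant band. The toy below (an assumption-laden sketch, not the authors' model) evaluates just the classical deformable-mirror fitting-error term for Kolmogorov turbulence, with an assumed Fried parameter and actuator pitch.

```python
import numpy as np

r0 = 0.15             # Fried parameter (m), assumed value
pitch = 0.5           # deformable-mirror actuator pitch (m), assumed
kc = 1.0 / (2 * pitch)    # AO correction cutoff (cycles/m)

k = np.linspace(kc, 100.0, 200_000)          # uncorrected band
psd = 0.023 * r0**(-5/3) * k**(-11/3)        # Kolmogorov phase PSD (rad^2 m^2)
dk = k[1] - k[0]
var = np.sum(2 * np.pi * k * psd) * dk       # integrate over the 2-D plane
rms_nm = np.sqrt(var) * 500.0 / (2 * np.pi)  # radians -> nm at 500 nm
print(f"fitting error: {rms_nm:.0f} nm rms") # analytic: ~0.27 (pitch/r0)^(5/3) rad^2
```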
Self-referenced locking of optical coherence by single-detector electronic-frequency tagging
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Spring, Justin; Ward, Benjamin; Ghebremichael, F.; Culpepper, Mark A.; Sanchez, Anthony D.; Baker, J. T.; Pilkington, D.; Berdine, Richard
2006-02-01
We report a novel coherent beam combining technique. This is the first actively phase-locked optical fiber array that eliminates the need for a separate reference beam. In addition, only a single photodetector is required. The far-field central spot of the array is imaged onto the photodetector to produce the phase control loop signals. Each leg of the fiber array is phase modulated with a separate RF frequency, thus tagging the optical phase shift of each leg with a separate RF frequency. The optical phase errors for the individual array legs are separated in the electronic domain. In contrast with previous active phase locking techniques, in our system the reference beam is spatially overlapped with all the RF-modulated fiber leg beams onto a single detector. The phase shift between the optical wave in the reference leg and in each RF-modulated leg is measured separately in the electronic domain, and the phase error signal is fed back to the LiNbO3 phase modulator for that leg to minimize its phase error relative to the reference leg. The advantages of this technique are 1) the elimination of the reference beam and beam combination optics and 2) the electronic separation of the phase error signals without any degradation of the phase locking accuracy. We will present the first theoretical model for self-referenced LOCSET and describe experimental results for a 3 x 3 array.
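The frequency-tagging principle can be reproduced in a small numerical toy (purely illustrative; the sample rate, tag frequencies, dither depth and loop gain are invented, and the real system operates on photocurrents in hardware): dither each simulated leg at its own RF tag, demodulate the single detector's intensity at each tag to obtain that leg's error signal, and feed it back until all legs lock to the reference.

```python
import numpy as np

fs, T = 1.0e6, 5.0e-3
t = np.arange(0.0, T, 1.0 / fs)              # 5 ms record at 1 MS/s
tags = np.array([10e3, 17e3, 23e3])          # per-leg RF tag frequencies (Hz)
phases = np.array([0.8, -1.1, 0.4])          # initial leg phase errors (rad)
beta, gain = 0.1, 2.0                        # dither depth (rad), loop gain
carriers = np.sin(2 * np.pi * np.outer(tags, t))

for _ in range(30):
    # single-detector intensity: reference beam plus three tagged legs
    field = 1.0 + np.exp(1j * (phases[:, None] + beta * carriers)).sum(axis=0)
    intensity = np.abs(field) ** 2
    # lock-in demodulation at each tag isolates that leg's phase-error
    # signal, which is fed back to that leg's phase modulator
    err = 2.0 * np.mean(intensity * carriers, axis=1)
    phases += gain * err
print(np.round(phases, 3))                   # all legs driven near 0 (locked)
```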
Structural basis for regulation of GPR56/ADGRG1 by its alternatively spliced extracellular domains
Salzman, Gabriel S.; Ackerman, Sarah D.; Ding, Chen; Koide, Akiko; Leon, Katherine; Luo, Rong; Stoveken, Hannah M.; Fernandez, Celia G.; Tall, Gregory G.; Piao, Xianhua; Monk, Kelly R.; Koide, Shohei; Araç, Demet
2016-01-01
Summary Adhesion G-protein-coupled receptors (aGPCRs) play critical roles in diverse neurobiological processes including brain development, synaptogenesis, and myelination. aGPCRs have large alternatively spliced extracellular regions (ECRs) that likely mediate intercellular signaling; however, the precise roles of ECRs remain unclear. The aGPCR GPR56/ADGRG1 regulates both oligodendrocyte and cortical development. Accordingly, human GPR56 mutations cause myelination defects and brain malformations. Here, we determined the crystal structure of the GPR56 ECR, the first structure of any complete aGPCR ECR, in complex with an inverse-agonist monobody, revealing a GPCR-Autoproteolysis-Inducing domain and a previously unidentified domain that we term Pentraxin/Laminin/neurexin/sex-hormone-binding-globulin-Like (PLL). Strikingly, PLL domain deletion caused increased signaling and characterizes a GPR56 splice variant. Finally, we show that an evolutionarily conserved residue in the PLL domain is critical for oligodendrocyte development in vivo. Thus, our results suggest that the GPR56 ECR has unique and multifaceted regulatory functions, providing novel insights into aGPCR roles in neurobiology. PMID:27657451
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Yang; Ramanathan, Arvind; Glover, Karen
BECN1 is essential for autophagy, a critical eukaryotic cellular homeostasis pathway. Here we delineate a highly conserved BECN1 domain located between previously characterized BH3 and coiled-coil domains and elucidate its structure and role in autophagy. The 2.0 Å sulfur-single-wavelength anomalous dispersion X-ray crystal structure of this domain demonstrates that its N-terminal half is unstructured while its C-terminal half is helical; hence, we name it the flexible helical domain (FHD). Circular dichroism spectroscopy, double electron-electron resonance electron paramagnetic resonance, and small-angle X-ray scattering (SAXS) analyses confirm that the FHD is partially disordered, even in the context of adjacent BECN1 domains. Molecular dynamic simulations fitted to SAXS data indicate that the FHD transiently samples more helical conformations. FHD helicity increases in 2,2,2-trifluoroethanol, suggesting it may become more helical upon binding. Lastly, cellular studies show that conserved FHD residues are required for starvation-induced autophagy. Thus, the FHD likely undergoes a binding-associated disorder-to-helix transition, and conserved residues critical for this interaction are essential for starvation-induced autophagy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Yang; Ramanathan, Arvind; Glover, Karen
BECN1 is essential for autophagy, a critical eukaryotic cellular homeostasis pathway. Here in this study, we delineate a highly conserved BECN1 domain located between previously characterized BH3 and coiled-coil domains and elucidate its structure and role in autophagy. The 2.0 Å sulfur-single-wavelength anomalous dispersion X-ray crystal structure of this domain demonstrates that its N-terminal half is unstructured while its C-terminal half is helical; hence, we name it the flexible helical domain (FHD). Circular dichroism spectroscopy, double electron–electron resonance–electron paramagnetic resonance, and small-angle X-ray scattering (SAXS) analyses confirm that the FHD is partially disordered, even in the context of adjacent BECN1 domains. Molecular dynamic simulations fitted to SAXS data indicate that the FHD transiently samples more helical conformations. FHD helicity increases in 2,2,2-trifluoroethanol, suggesting it may become more helical upon binding. Finally, cellular studies show that conserved FHD residues are required for starvation-induced autophagy. Thus, the FHD likely undergoes a binding-associated disorder-to-helix transition, and conserved residues critical for this interaction are essential for starvation-induced autophagy.
Autophagic Regulation of p62 is Critical for Cancer Therapy.
Islam, Md Ariful; Sooro, Mopa Alina; Zhang, Pinghu
2018-05-08
Sequestosome1 (p62/SQSTM1) is a multidomain protein that interacts with the autophagy machinery as a key adaptor of target cargo. It interacts with phagophores through the LC3-interacting (LIR) domain and with ubiquitinated protein aggregates through the ubiquitin-associated (UBA) domain. It sequesters the target cargo into inclusion bodies by its PB1 domain. This protein is further the central hub that interacts with several key signaling proteins. Emerging evidence implicates p62 in the induction of multiple cellular oncogenic transformations. Indeed, p62 upregulation and/or reduced degradation have been implicated in tumor formation, cancer promotion as well as in resistance to therapy. It has been established that the process of autophagy regulates the levels of p62. Autophagy-dependent apoptotic activity of p62 has recently been reported. It is evident that p62 plays a critical role in both autophagy and apoptosis. Therefore, in this review we discuss the role of p62 in autophagy, apoptosis and cancer through its different domains, and outline the importance of modulating cellular levels of p62 in cancer therapeutics.
NASA Astrophysics Data System (ADS)
Srinivasa Rao, K.; Ranga Nayakulu, S. V.; Chaitanya Varma, M.; Choudary, G. S. V. R. K.; Rao, K. H.
2018-04-01
The present investigation describes the development of cobalt ferrite nanoparticles with sizes below 10 nm by a sol-gel method using polyvinyl alcohol as the chelating agent. X-ray results show that all the samples annealed above 700 °C have the spinel structure. Information about phase evolution with reaction temperature was obtained by subjecting the as-prepared powder to DSC/TGA study. A high saturation magnetization of 84.63 emu/g has been observed for a particle size of 8.1 nm, a rarely reported result to date. The dM/dH versus H curves suggest that the transition from the single-domain state to the multi-domain state occurs with increasing annealing temperature and that the critical size for the single-domain nature of CoFe2O4 is around 6.5 nm. The estimated critical diameter for a single-domain particle (6.7 nm) is in good agreement with that (6.5 nm) obtained from transmission electron micrographs.
Structure of the dimerization domain of DiGeorge Critical Region 8
Senturia, Rachel; Faller, Michael; Yin, Sheng; Loo, Joseph A; Cascio, Duilio; Sawaya, Michael R; Hwang, Daniel; Clubb, Robert T; Guo, Feng
2010-01-01
Maturation of microRNAs (miRNAs, ∼22nt) from long primary transcripts [primary miRNAs (pri-miRNAs)] is regulated during development and is altered in diseases such as cancer. The first processing step is a cleavage mediated by the Microprocessor complex containing the Drosha nuclease and the RNA-binding protein DiGeorge critical region 8 (DGCR8). We previously reported that dimeric DGCR8 binds heme and that the heme-bound DGCR8 is more active than the heme-free form. Here, we identified a conserved dimerization domain in DGCR8. Our crystal structure of this domain (residues 298–352) at 1.7 Å resolution demonstrates a previously unknown use of a WW motif as a platform for extensive dimerization interactions. The dimerization domain of DGCR8 is embedded in an independently folded heme-binding domain and directly contributes to association with heme. Heme-binding-deficient DGCR8 mutants have reduced pri-miRNA processing activity in vitro. Our study provides structural and biochemical bases for understanding how dimerization and heme binding of DGCR8 may contribute to regulation of miRNA biogenesis. PMID:20506313
Terkola, R; Czejka, M; Bérubé, J
2017-08-01
Medication errors are a significant cause of morbidity and mortality, especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation that may occur when verification relies on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact, as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale retrospective analysis was carried out of data related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during the weighing stages of preparation of chemotherapy solutions, which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
Kuang, Guanglin; Liang, Lijun; Brown, Christian; Wang, Qi; Bulone, Vincent; Tu, Yaoquan
2016-02-21
The critical role of chitin synthases in oomycete hyphal tip growth has been established. A microtubule interacting and trafficking (MIT) domain was discovered in the chitin synthases of the oomycete model organism, Saprolegnia monoica. MIT domains have been identified in diverse proteins and may play a role in intracellular trafficking. The structure of the Saprolegnia monoica chitin synthase 1 (SmChs1) MIT domain has been recently determined by our group. However, although our in vitro assay identified increased strength in interactions between the MIT domain and phosphatidic acid (PA) relative to other phospholipids including phosphatidylcholine (PC), the mechanism used by the MIT domain remains unknown. In this work, the adsorption behavior of the SmChs1 MIT domain on POPA and POPC membranes was systematically investigated by molecular dynamics simulations. Our results indicate that the MIT domain can adsorb onto the tested membranes in varying orientations. Interestingly, due to the specific interactions between MIT residues and lipid molecules, the binding affinity to the POPA membrane is much higher than that to the POPC membrane. A binding hotspot, which is critical for the adsorption of the MIT domain onto the POPA membrane, was also identified. The lower binding affinity to the POPC membrane can be attributed to the self-saturated membrane surface, which is unfavorable for hydrogen-bond and electrostatic interactions. The present study provides insight into the adsorption profile of SmChs1 and additionally has the potential to improve our understanding of other proteins containing MIT domains.
Lai, Alex L; Moorthy, Anna Eswara; Li, Yinling; Tamm, Lukas K
2012-04-20
The human immunodeficiency virus (HIV) gp41 fusion domain plays a critical role in membrane fusion during viral entry. A thorough understanding of the relationship between the structure and the activity of the fusion domain in different lipid environments helps to formulate mechanistic models on how it might function in mediating membrane fusion. The secondary structure of the fusion domain in small liposomes composed of different lipid mixtures was investigated by circular dichroism spectroscopy. The fusion domain formed an α-helix in membranes containing less than 30 mol% cholesterol and formed β-sheet secondary structure in membranes containing ≥30 mol% cholesterol. EPR spectra of spin-labeled fusion domains also indicated different conformations in membranes with and without cholesterol. Power saturation EPR data were further used to determine the orientation and depth of α-helical fusion domains in lipid bilayers. Fusion and membrane perturbation activities of the gp41 fusion domain were measured by lipid mixing and contents leakage. The fusion domain fused membranes in both its helical form and its β-sheet form. High cholesterol, which induced β-sheets, promoted fusion; however, acidic lipids, which promoted relatively deep membrane insertion as an α-helix, also induced fusion. The results indicate that the structure of the HIV gp41 fusion domain is plastic and depends critically on the lipid environment. Provided that their membrane insertion is deep, α-helical and β-sheet conformations contribute to membrane fusion. Copyright © 2012 Elsevier Ltd. All rights reserved.
Human systems integration in remotely piloted aircraft operations.
Tvaryanas, Anthony P
2006-12-01
The role of humans in remotely piloted aircraft (RPAs) is qualitatively different from manned aviation, lessening the applicability of aerospace medicine human factors knowledge derived from traditional cockpits. Aerospace medicine practitioners should expect to be challenged in addressing RPA crewmember performance. Human systems integration (HSI) provides a model for explaining human performance as a function of the domains of: human factors engineering; personnel; training; manpower; environment, safety, and occupational health (ESOH); habitability; and survivability. RPA crewmember performance is being particularly impacted by issues involving the domains of human factors engineering, personnel, training, manpower, ESOH, and habitability. Specific HSI challenges include: 1) changes in large RPA operator selection and training; 2) human factors engineering deficiencies in current RPA ground control station design and their impact on human error, including considerations pertaining to multi-aircraft control; and 3) the combined impact of manpower shortfalls, shiftwork-related fatigue, and degraded crewmember effectiveness. Limited experience and available research make it difficult to qualitatively or quantitatively predict the collective impact of these issues on RPA crewmember performance. Attending to HSI will be critical for the success of current and future RPA crewmembers. Aerospace medicine practitioners working with RPA crewmembers should gain first-hand knowledge of their task environment while the larger aerospace medicine community needs to address the limited information available on RPA-related aerospace medicine human factors. In the meantime, aeromedical decisions will need to be made based on what is known about other aerospace occupations, realizing this knowledge may have only partial applicability.
Issues with data and analyses: Errors, underlying themes, and potential solutions
Allison, David B.
2018-01-01
Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079
Technology utilization to prevent medication errors.
Forni, Allison; Chu, Hanh T; Fanikos, John
2010-01-01
Medication errors have been increasingly recognized as a major cause of iatrogenic illness, and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, the need for high-risk medications with a narrow therapeutic index, and the frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process through improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and achievement of a safe medication use process.
Blood specimen labelling errors: Implications for nephrology nursing practice.
Duteau, Jennifer
2014-01-01
Patient safety is the foundation of high-quality health care, as recognized both nationally and worldwide. Patient blood specimen identification is critical in ensuring the delivery of safe and appropriate care. The practice of nephrology nursing involves frequent patient blood specimen withdrawals to treat and monitor kidney disease. A critical review of the literature reveals that incorrect patient identification is one of the major causes of blood specimen labelling errors. Misidentified samples create a serious risk to patient safety, leading to repeated specimen withdrawals, delayed diagnosis, misdiagnosis, incorrect treatment, transfusion reactions, increased length of stay, and other negative patient outcomes. Barcode technology has been identified as a preferred method for positive patient identification and has been shown to decrease blood specimen labelling errors by as much as 83% (Askeland et al., 2008). The use of a root cause analysis followed by an action plan is one approach to decreasing the occurrence of blood specimen labelling errors. This article presents a review of the evidence-based literature surrounding blood specimen labelling errors, followed by the author's recommendations for completing a root cause analysis and action plan. A failure modes and effects analysis (FMEA) is presented as one method to determine root cause, followed by the Ottawa Model of Research Use (OMRU) as a framework for implementing strategies to reduce blood specimen labelling errors.
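As an illustration of the FMEA step mentioned above, standard FMEA scoring assigns each failure mode a risk priority number, RPN = severity × occurrence × detection, with each factor rated on a 1-10 scale. The sketch below ranks labelling failure modes this way; the failure modes and ratings are invented for illustration, not taken from the article.

```python
# Hedged sketch of the FMEA risk-scoring step: rank failure modes by
# RPN = severity * occurrence * detection (each rated 1-10).
# The failure modes and ratings below are illustrative only.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("Label applied away from bedside", 9, 6, 5),
    ("Wristband barcode unreadable",    8, 4, 3),
    ("Two patients' tubes swapped",    10, 2, 7),
]

# Sort so the highest-risk failure modes come first.
for name, severity, occurrence, detection in sorted(
        failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    rpn = severity * occurrence * detection
    print(f"RPN {rpn:4d}  {name}")
```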
HangOut: generating clean PSI-BLAST profiles for domains with long insertions.
Kim, Bong-Hyun; Cong, Qian; Grishin, Nick V
2010-06-15
Profile-based similarity search is an essential step in structure-function studies of proteins. However, inclusion of non-homologous sequence segments into a profile causes its corruption and results in false positives. Profile corruption is common in multidomain proteins, and single domains with long insertions are a significant source of errors. We developed a procedure (HangOut) that, for a single domain with specified insertion position, cleans erroneously extended PSI-BLAST alignments to generate better profiles. HangOut is implemented in Python 2.3 and runs on all Unix-compatible platforms. The source code is available under the GNU GPL license at http://prodata.swmed.edu/HangOut/. Supplementary data are available at Bioinformatics online.
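The core idea can be sketched as follows; this is an illustrative reconstruction, not HangOut's actual source code. One simple way to keep a long insertion out of a profile is to mask the inserted segment with 'X', which PSI-BLAST treats as an undefined residue, so the profile is built only from the domain's own residues. The sequence and boundaries below are hypothetical.

```python
# Illustrative sketch (not HangOut's actual algorithm): mask a known
# insertion so a profile search sees only the domain's own residues.
def mask_insertion(seq: str, ins_start: int, ins_end: int) -> str:
    """Return seq with the insertion (0-based, end-exclusive) masked as X."""
    if not (0 <= ins_start <= ins_end <= len(seq)):
        raise ValueError("insertion boundaries out of range")
    return seq[:ins_start] + "X" * (ins_end - ins_start) + seq[ins_end:]

# Hypothetical domain sequence with a 10-residue insertion at 8..18.
domain = "MKTAYIAKQRNDLGAEWSQISFVKSHENSQ"
print(mask_insertion(domain, 8, 18))
```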
Nucleation of holin domains and holes optimizes lysis timing of E. coli by phage λ
NASA Astrophysics Data System (ADS)
Ryan, Gillian; Rutenberg, Andrew
2007-03-01
Holin proteins regulate the precise scheduling of Escherichia coli lysis during infection by bacteriophage λ. Inserted into the host bacterium's inner membrane during infection, holins aggregate to form rafts and then holes within those rafts. We present a two-stage nucleation model of holin action, with the nucleation of condensed holin domains followed by the nucleation of holes within these domains. Late nucleation of holin rafts leads to a weak dependence of lysis timing on host cell size, though both nucleation events contribute equally to timing errors. Our simulations recover the accurate scheduling observed experimentally, and also suggest that phage λ lysis of E. coli is optimized.
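As a toy illustration of the two-stage picture (not the authors' simulation code), the sketch below treats each nucleation event as an exponential waiting time and sums the two, showing how both stages contribute to the spread in lysis timing. The rates are invented for illustration.

```python
# Toy sketch of two-stage nucleation timing: lysis time is the waiting
# time for raft nucleation plus the waiting time for hole nucleation,
# each modeled as an exponential event. Rates are illustrative only.
import random

RAFT_RATE = 0.5  # raft nucleation events per minute (hypothetical)
HOLE_RATE = 0.5  # hole nucleation events per minute (hypothetical)

def lysis_time(rng: random.Random) -> float:
    return rng.expovariate(RAFT_RATE) + rng.expovariate(HOLE_RATE)

rng = random.Random(0)
samples = [lysis_time(rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((t - mean) ** 2 for t in samples) / len(samples)
print(f"mean lysis time {mean:.2f} min, std {var ** 0.5:.2f} min")
# With equal rates, the two stages contribute equally to the timing
# variance, mirroring the abstract's point about timing errors.
```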
Prediction Errors but Not Sharpened Signals Simulate Multivoxel fMRI Patterns during Speech Perception
Davis, Matthew H.
2016-01-01
Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains. PMID:27846209
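To make the contrast between the two coding schemes concrete, the following sketch (an illustrative caricature, not the authors' computational model) computes a multiplicatively sharpened representation and a subtractive prediction error for the same hypothetical feature vector.

```python
# Schematic contrast of the two coding schemes, with made-up numbers:
# sharpening boosts expected features; predictive coding transmits only
# what the prediction fails to explain.
import numpy as np

sensory = np.array([0.2, 0.9, 0.1])     # hypothetical feature evidence
prediction = np.array([0.1, 0.8, 0.1])  # prior expectation of features

# Sharpened Signals: expected features are multiplicatively enhanced,
# then renormalized.
sharpened = sensory * prediction
sharpened /= sharpened.sum()

# Prediction Error: expected features are subtracted out, so only
# unexpected input is passed on for further processing.
prediction_error = sensory - prediction

print("sharpened:", np.round(sharpened, 3))
print("prediction error:", np.round(prediction_error, 3))
# Informative predictions shrink the prediction-error signal while
# strengthening the sharpened representation of expected features.
```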