Sample records for understanding structural errors

  1. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision.

  2. The Zero Product Principle Error.

    ERIC Educational Resources Information Center

    Padula, Janice

    1996-01-01

    Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)
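
    The abstract does not reproduce Padula's classroom example, but the zero product principle and its typical misapplication can be sketched as follows (an illustrative reconstruction, not necessarily the article's own example):

    ```latex
    % The zero product principle works only because 0 has no nonzero factorizations:
    (x-2)(x-3) = 0 \;\Rightarrow\; x-2 = 0 \ \text{or}\ x-3 = 0 \;\Rightarrow\; x \in \{2,\,3\}.
    % A common structural error is to carry the same move over to a nonzero product:
    (x-2)(x-3) = 6 \;\not\Rightarrow\; x-2 = 6 \ \text{or}\ x-3 = 6.
    % The equation must first be rewritten so that a product equals zero:
    (x-2)(x-3) = 6 \;\Rightarrow\; x^2 - 5x = 0 \;\Rightarrow\; x(x-5) = 0 \;\Rightarrow\; x \in \{0,\,5\}.
    ```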

  3. An Intuitive Graphical Approach to Understanding the Split-Plot Experiment

    ERIC Educational Resources Information Center

    Robinson, Timothy J.; Brenneman, William A.; Myers, William R.

    2009-01-01

    While split-plot designs have received considerable attention in the literature over the past decade, there seems to be a general lack of intuitive understanding of the error structure of these designs and the resulting statistical analysis. Typically, students learn the proper error terms for testing factors of a split-plot design via "expected…
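
    For reference, a standard textbook form of the split-plot model makes the two error terms explicit (given here as background, not quoted from the article):

    ```latex
    y_{ijk} = \mu + \alpha_i + \gamma_{ik} + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk},
    \qquad \gamma_{ik} \sim N(0,\,\sigma^2_{\gamma}), \quad \varepsilon_{ijk} \sim N(0,\,\sigma^2_{\varepsilon}),
    ```

    where \alpha_i is the whole-plot treatment, \beta_j the split-plot treatment, \gamma_{ik} the whole-plot error against which \alpha_i must be tested, and \varepsilon_{ijk} the split-plot error against which \beta_j and the interaction are tested.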

  4. Linking models and data on vegetation structure

    NASA Astrophysics Data System (ADS)

    Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.

    2010-06-01

    For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach utilizing a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse resolution averaging lead to model initialization error, which in turn propagates into subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.
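
    A toy calculation illustrates the compensation argument: unbiased plot-level initialization errors largely cancel in the aggregate under linear dynamics, but bias the aggregate under nonlinear dynamics. The growth functions and error magnitude below are invented for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    true_biomass = np.full(100_000, 100.0)                                 # identical plots, arbitrary units
    observed = true_biomass + rng.normal(0.0, 20.0, true_biomass.size)     # unbiased sampling/initialization error

    linear = lambda b: 1.05 * b          # linear dynamics
    nonlinear = lambda b: 0.001 * b**2   # convex (nonlinear) dynamics

    for name, growth in [("linear", linear), ("nonlinear", nonlinear)]:
        bias = growth(observed).mean() - growth(true_biomass).mean()
        print(f"{name:9s} aggregate prediction bias: {bias:+.3f}")
    # linear bias is ~0; nonlinear bias is ~+0.4 because E[b^2] > (E[b])^2,
    # i.e. initialization error does not compensate under nonlinear dynamics.
    ```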

  5. Investigating the influence of LiDAR ground surface errors on the utility of derived forest inventories

    Treesearch

    Wade T. Tinkham; Alistair M. S. Smith; Chad Hoffman; Andrew T. Hudak; Michael J. Falkowski; Mark E. Swanson; Paul E. Gessler

    2012-01-01

    Light detection and ranging, or LiDAR, effectively produces products spatially characterizing both terrain and vegetation structure; however, development and use of those products have outpaced our understanding of the errors within them. LiDAR's ability to capture three-dimensional structure has led to interest in conducting or augmenting forest inventories with...

  6. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    PubMed

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities.
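
    As a rough illustration of the quantitative estimates a fault tree supports, the sketch below combines independent basic events through OR and AND gates; the event names and probabilities are invented for illustration and are not taken from the study's cases:

    ```python
    # Minimal fault-tree sketch: the top event is "diagnostic error".
    def p_or(*ps):
        """P(at least one input event occurs), assuming independence."""
        q = 1.0
        for p in ps:
            q *= 1.0 - p
        return 1.0 - q

    def p_and(*ps):
        """P(all input events occur), assuming independence."""
        prod = 1.0
        for p in ps:
            prod *= p
        return prod

    # Hypothetical basic events along one causative pathway.
    history_not_elicited = 0.05
    finding_misinterpreted = 0.03
    abnormal_test_not_followed_up = 0.02

    # Top event: information gathering fails OR (a finding is misinterpreted AND not caught on follow-up).
    p_top = p_or(history_not_elicited, p_and(finding_misinterpreted, abnormal_test_not_followed_up))
    print(f"P(diagnostic error along this pathway) = {p_top:.4f}")
    ```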

  7. Understanding Periodicity as a Process with Gestalt Structure.

    ERIC Educational Resources Information Center

    Shama, Gilli

    1998-01-01

    Presents a two-phase investigation of how Israeli students understand the concept of periodicity. Discusses related research with teachers and students (N=895) employing both qualitative and quantitative research methodologies. Concludes that students understand periodicity as a process. Students' errors and preferences are discussed with…

  8. Structured FORTRAN Preprocessor

    NASA Technical Reports Server (NTRS)

    Flynn, J. A.; Lawson, C. L.; Van Snyder, W.; Tsitsivas, H. N.

    1985-01-01

    SFTRAN3 supports structured programming in a FORTRAN environment. The language is intended particularly to support two aspects of structured programming -- nestable single-entry control structures, and modularization and top-down organization of code. Code designed and written using these SFTRAN3 facilities has fewer initial errors, is easier to understand, and is less expensive to maintain and modify.

  9. Assessing uncertainty in SRTM elevations for global flood modelling

    NASA Astrophysics Data System (ADS)

    Hawker, L. P.; Rougier, J.; Neal, J. C.; Bates, P. D.

    2017-12-01

    The SRTM DEM is widely used as the topography input to flood models in data-sparse locations. Understanding spatial error in the SRTM product is crucial in constraining uncertainty about elevations and assessing the impact of these upon flood prediction. Assessment of SRTM error was carried out by Rodriguez et al (2006), but this did not explicitly quantify the spatial structure of vertical errors in the DEM, nor did it distinguish between errors over different types of landscape. As a result, there is a lack of information about the spatial structure of vertical errors of the SRTM in the landscape that matters most to flood models - the floodplain. Therefore, this study attempts this task by comparing SRTM, an error-corrected SRTM product (the MERIT DEM of Yamazaki et al., 2017) and near-truth LIDAR elevations for 3 deltaic floodplains (Mississippi, Po, Wax Lake) and a large lowland region (the Fens, UK). Using the error covariance function, calculated by comparing SRTM elevations to the near-truth LIDAR, perturbations of the 90m SRTM DEM were generated, producing a catalogue of plausible DEMs. This allows modellers to simulate a suite of plausible DEMs at any aggregated block size above native SRTM resolution. Finally, the generated DEMs were input into a hydrodynamic model of the Mekong Delta, built using the LISFLOOD-FP hydrodynamic model, to assess how DEM error affects the hydrodynamics and inundation extent across the domain. The end product of this is an inundation map with the probability of each pixel being flooded based on the catalogue of DEMs. In a world of increasing computer power, but a lack of detailed datasets, this powerful approach can be used throughout natural hazard modelling to understand how errors in the SRTM DEM can impact the hazard assessment.
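
    The perturbation step can be sketched as follows, assuming (purely for illustration) an exponential error covariance on a small grid; the covariance functions actually estimated from the LIDAR comparisons are not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 20             # toy 20 x 20 cell DEM
    sigma = 3.0        # vertical error standard deviation (m), illustrative
    corr_len = 5.0     # correlation length in cells, illustrative

    x, y = np.meshgrid(np.arange(n), np.arange(n))
    coords = np.column_stack([x.ravel(), y.ravel()])
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma**2 * np.exp(-dist / corr_len)              # assumed exponential covariance model
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))     # jitter for numerical stability

    base_dem = rng.uniform(0.0, 10.0, size=(n, n))         # stand-in for an SRTM tile
    catalogue = [base_dem + (L @ rng.standard_normal(n * n)).reshape(n, n)
                 for _ in range(50)]                       # catalogue of plausible DEMs
    print(len(catalogue), "perturbed DEMs generated")
    ```

    Each perturbed DEM can then be run through the hydrodynamic model, and the per-pixel flooding frequency across the catalogue gives the probabilistic inundation map described above.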

  10. Framework for Understanding Structural Errors (FUSE): A modular framework to diagnose differences between hydrological models

    USGS Publications Warehouse

    Clark, Martyn P.; Slater, Andrew G.; Rupp, David E.; Woods, Ross A.; Vrugt, Jasper A.; Gupta, Hoshin V.; Wagener, Thorsten; Hay, Lauren E.

    2008-01-01

    The problems of identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure remain outstanding research challenges for the discipline of hydrology. Progress on these problems requires understanding of the nature of differences between models. This paper presents a methodology to diagnose differences in hydrological model structures: the Framework for Understanding Structural Errors (FUSE). FUSE was used to construct 79 unique model structures by combining components of 4 existing hydrological models. These new models were used to simulate streamflow in two of the basins used in the Model Parameter Estimation Experiment (MOPEX): the Guadalupe River (Texas) and the French Broad River (North Carolina). Results show that the new models produced simulations of streamflow that were at least as good as the simulations produced by the models that participated in the MOPEX experiment. Our initial application of the FUSE method for the Guadalupe River exposed relationships between model structure and model performance, suggesting that the choice of model structure is just as important as the choice of model parameters. However, further work is needed to evaluate model simulations using multiple criteria to diagnose the relative importance of model structural differences in various climate regimes and to assess the amount of independent information in each of the models. This work will be crucial to both identifying the most appropriate model structure for a given problem and quantifying the uncertainty in model structure. To facilitate research on these problems, the FORTRAN‐90 source code for FUSE is available upon request from the lead author.
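
    The combinatorial idea behind FUSE can be sketched as follows; the component names and options are illustrative placeholders, not the actual FUSE decision set that yielded the 79 structures described above:

    ```python
    from itertools import product

    # Each model component has several alternative formulations; a model structure
    # is one choice per component.
    components = {
        "upper_soil_layer": ["single_state", "tension_storage", "cascading_buckets"],
        "lower_soil_layer": ["fixed_capacity", "unlimited_linear", "unlimited_nonlinear"],
        "percolation":      ["field_capacity_threshold", "saturation_dependent"],
        "surface_runoff":   ["saturated_area", "infiltration_excess"],
    }

    structures = [dict(zip(components, choice)) for choice in product(*components.values())]
    print(len(structures), "candidate model structures")   # 3 * 3 * 2 * 2 = 36 in this toy set
    print(structures[0])
    ```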

  11. Clinical Dental Faculty Members' Perceptions of Diagnostic Errors and How to Avoid Them.

    PubMed

    Nikdel, Cathy; Nikdel, Kian; Ibarra-Noriega, Ana; Kalenderian, Elsbeth; Walji, Muhammad F

    2018-04-01

    Diagnostic errors are increasingly recognized as a source of preventable harm in medicine, yet little is known about their occurrence in dentistry. The aim of this study was to gain a deeper understanding of clinical dental faculty members' perceptions of diagnostic errors, types of errors that may occur, and possible contributing factors. The authors conducted semi-structured interviews with ten domain experts at one U.S. dental school in May-August 2016 about their perceptions of diagnostic errors and their causes. The interviews were analyzed using an inductive process to identify themes and key findings. The results showed that the participants varied in their definitions of diagnostic errors. While all identified missed diagnosis and wrong diagnosis, only four participants perceived that a delay in diagnosis was a diagnostic error. Some participants perceived that an error occurs only when the choice of treatment leads to harm. Contributing factors associated with diagnostic errors included the knowledge and skills of the dentist, not taking adequate time, lack of communication among colleagues, and cognitive biases such as premature closure based on previous experience. Strategies suggested by the participants to prevent these errors were taking adequate time when investigating a case, forming study groups, increasing communication, and putting more emphasis on differential diagnosis. These interviews revealed differing perceptions of dental diagnostic errors among clinical dental faculty members. To address the variations, the authors recommend adopting shared language developed by the medical profession to increase understanding.

  12. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  13. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
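
    A minimal software fault-injection sketch in the spirit described here, with an invented target computation and detection check:

    ```python
    import random

    def inject_bit_flip(value: int, bit: int) -> int:
        """Return value with one bit inverted (a single-event-upset style fault)."""
        return value ^ (1 << bit)

    def checksummed_sum(data):
        """Compute a total plus a crude checksum acting as a built-in detection mechanism."""
        total = sum(data)
        return total, total % 251

    data = [10, 20, 30, 40]
    total, checksum = checksummed_sum(data)

    # Inject a fault into the stored total and observe whether detection catches it.
    faulty_total = inject_bit_flip(total, random.randrange(8))
    detected = (faulty_total % 251) != checksum
    print(f"fault {'detected' if detected else 'escaped detection'}: {total} -> {faulty_total}")
    ```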

  14. What Happened, and Why: Toward an Understanding of Human Error Based on Automated Analyses of Incident Reports. Volume 1

    NASA Technical Reports Server (NTRS)

    Maille, Nicolas P.; Statler, Irving C.; Ferryman, Thomas A.; Rosenthal, Loren; Shafto, Michael G.

    2006-01-01

    The objective of the Aviation System Monitoring and Modeling (ASMM) project of NASA's Aviation Safety and Security Program was to develop technologies that will enable proactive management of safety risk, which entails identifying the precursor events and conditions that foreshadow most accidents. This presents a particular challenge in the aviation system where people are key components and human error is frequently cited as a major contributing factor or cause of incidents and accidents. In the aviation "world", information about what happened can be extracted from quantitative data sources, but the experiential account of the incident reporter is the best available source of information about why an incident happened. This report describes a conceptual model and an approach to automated analyses of textual data sources for the subjective perspective of the reporter of the incident to aid in understanding why an incident occurred. It explores a first-generation process for routinely searching large databases of textual reports of aviation incidents or accidents, and reliably analyzing them for causal factors of human behavior (the why of an incident). We have defined a generic structure of information that is postulated to be a sound basis for defining similarities between aviation incidents. Based on this structure, we have introduced the simplifying structure, which we call the Scenario, as a pragmatic guide for identifying similarities of what happened based on the objective parameters that define the Context and the Outcome of a Scenario. We believe that it will be possible to design an automated analysis process guided by the structure of the Scenario that will aid aviation-safety experts in understanding the systemic issues that are conducive to human error.
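
    One way to picture the Scenario is as a record of objective Context and Outcome parameters plus the reporter's narrative; the field names below are illustrative assumptions, not the report's actual parameter set:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Context:
        flight_phase: str      # e.g. "approach"
        weather: str           # e.g. "IMC"
        airspace: str          # e.g. "Class B terminal area"

    @dataclass(frozen=True)
    class Outcome:
        event_type: str        # e.g. "altitude deviation"
        severity: str          # e.g. "no damage or injury"

    @dataclass
    class Scenario:
        context: Context
        outcome: Outcome
        narrative: str = ""    # the reporter's account, mined separately for the "why"

    def similar_what_happened(a: Scenario, b: Scenario) -> bool:
        """Crude similarity on the objective parameters only."""
        return a.context == b.context and a.outcome.event_type == b.outcome.event_type
    ```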

  15. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    PubMed

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. 
Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.

  16. Understanding the nature of errors in nursing: using a model to analyse critical incident reports of errors which had resulted in an adverse or potentially adverse event.

    PubMed

    Meurier, C E

    2000-07-01

    Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.

  17. Using snowball sampling method with nurses to understand medication administration errors.

    PubMed

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical surgical wards of teaching hospitals, during day shifts, committed by nurses working fewer than two years. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and nurses responsible for errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69) and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.

  18. Predicted Errors In Children's Early Sentence Comprehension

    PubMed Central

    Gertner, Yael; Fisher, Cynthia

    2012-01-01

    Children use syntax to interpret sentences and learn verbs; this is syntactic bootstrapping. The structure-mapping account of early syntactic bootstrapping proposes that a partial representation of sentence structure, the set of nouns occurring with the verb, guides initial interpretation and provides an abstract format for new learning. This account predicts early successes, but also telltale errors: Toddlers should be unable to tell transitive sentences from other sentences containing two nouns. In testing this prediction, we capitalized on evidence that 21-month-olds use what they have learned about noun order in English sentences to understand new transitive verbs. In two experiments, 21-month-olds applied this noun-order knowledge to two-noun intransitive sentences, mistakenly assigning different interpretations to “The boy and the girl are gorping!” and “The girl and the boy are gorping!”. This suggests that toddlers exploit partial representations of sentence structure to guide sentence interpretation; these sparse representations are useful, but error-prone. PMID:22525312

  19. Understanding the large-scale structure from the cosmic microwave background: shear calibration with CMB lensing; gas physics from the kinematic Sunyaev-Zel'dovich effect

    NASA Astrophysics Data System (ADS)

    Schaan, Emmanuel

    2017-01-01

    I will present two promising ways in which the cosmic microwave background (CMB) sheds light on critical uncertain physics and systematics of the large-scale structure. Shear calibration with CMB lensing: Realizing the full potential of upcoming weak lensing surveys requires an exquisite understanding of the errors in galaxy shape estimation. In particular, such errors lead to a multiplicative bias in the shear, degenerate with the matter density parameter and the amplitude of fluctuations. Its redshift-evolution can hide the true evolution of the growth of structure, which probes dark energy and possible modifications to general relativity. I will show that CMB lensing from a stage 4 experiment (CMB S4) can self-calibrate the shear for an LSST-like optical lensing survey. This holds in the presence of photo-z errors and intrinsic alignment. Evidence for the kinematic Sunyaev-Zel'dovich (kSZ) effect; cluster energetics: Through the kSZ effect, the baryon momentum field is imprinted on the CMB. I will report significant evidence for the kSZ effect from ACTPol and peculiar velocities reconstructed from BOSS. I will present the prospects for constraining cluster gas profiles and energetics from the kSZ effect with SPT-3G, AdvACT and CMB S4. This will provide constraints on galaxy formation and feedback models.

  20. Error framing effects on performance: cognitive, motivational, and affective pathways.

    PubMed

    Steele-Johnson, Debra; Kalinoski, Zachary T

    2014-01-01

    Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.

  21. Ultrahigh-resolution mapping of peatland microform using ground-based structure from motion with multiview stereo

    NASA Astrophysics Data System (ADS)

    Mercer, Jason J.; Westbrook, Cherie J.

    2016-11-01

    Microform is important in understanding wetland functions and processes. But collecting imagery of and mapping the physical structure of peatlands is often expensive and requires specialized equipment. We assessed the utility of coupling computer vision-based structure from motion with multiview stereo photogrammetry (SfM-MVS) and ground-based photos to map peatland topography. The SfM-MVS technique was tested on an alpine peatland in Banff National Park, Canada, and guidance was provided on minimizing errors. We found that coupling SfM-MVS with ground-based photos taken with a point and shoot camera is a viable and competitive technique for generating ultrahigh-resolution elevations (i.e., <0.01 m, mean absolute error of 0.083 m). In evaluating 100+ viable SfM-MVS data collection and processing scenarios, vegetation was found to considerably influence accuracy. Vegetation class, when accounted for, reduced absolute error by as much as 50%. The logistic flexibility of ground-based SfM-MVS paired with its high resolution, low error, and low cost makes it a research area worth developing as well as a useful addition to the wetland scientists' toolkit.

  22. The genomic structure: proof of the role of non-coding DNA.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2006-01-01

    We prove that the introns play the role of a decoy in absorbing mutations in the same way that hollow, uninhabited structures are used by the military to protect important installations. Our approach is based on a probability of error analysis, where errors are mutations which occur in the exon sequences. We derive the optimal exon length distribution, which minimizes the probability of error in the genome. Furthermore, to understand how Nature can generate the optimal distribution, we propose a diffusive random walk model for exon generation throughout evolution. This model results in an alpha-stable exon length distribution, which is asymptotically equivalent to the optimal distribution. Experimental results show that both distributions accurately fit the real data. Given that introns also drive biological evolution by increasing the rate of unequal crossover between genes, we conclude that the role of introns is to maintain a genius balance between stability and adaptability in eukaryotic genomes.

  23. Exploring Situational Awareness in Diagnostic Errors in Primary Care

    PubMed Central

    Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.

    2013-01-01

    Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757

  24. Small scale structure on cosmic strings

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1989-01-01

    The current understanding of cosmic string evolution is discussed, and the focus is placed on the question of small scale structure on strings, where most of the disagreements lie. A physical picture designed to put the role of the small scale structure into more intuitive terms is presented. In this picture it can be seen how the small scale structure can feed back in a major way on the overall scaling solution. It is also argued that it is easy for small scale numerical errors to feed back in just such a way. The intuitive discussion presented here may form the basis for an analytic treatment of the small scale structure, which, it is argued, would in any case be extremely valuable in filling the gaps in the present understanding of cosmic string evolution.

  25. Development of X-ray laser media. Measurement of gain and development of cavity resonators for wavelengths near 130 angstroms, volume 3

    NASA Astrophysics Data System (ADS)

    Forsyth, J. M.

    1983-02-01

    In this document the authors summarize their investigation of the reflecting properties of X-ray multilayers. The breadth of this investigation indicates the utility of the difference equation formalism in the analysis of such structures. The formalism is particularly useful in analyzing multilayers whose structure is not a simple periodic bilayer. The complexity in structure can be either intentional, as in multilayers made by in-situ reflectance monitoring, or it can be a consequence of a degradation mechanism, such as random thickness errors or interlayer diffusion. Both the analysis of thickness errors and the analysis of interlayer diffusion are conceptually simple, effectively one-dimensional problems that are straightforward to pose. In the authors' analysis of in-situ reflectance monitoring, they provide a quantitative understanding of an experimentally successful process that has not previously been treated theoretically. As X-ray multilayers come into wider use, there will undoubtedly be an increasing need for a more precise understanding of their reflecting properties. Thus, it is expected that in the future more detailed modeling will be undertaken of less easily specified structures than those above. The authors believe that their formalism will continue to prove useful in the modeling of these more complex structures. One such structure that may be of interest is that of a multilayer degraded by interfacial roughness.

  26. Improving estimation of flight altitude in wildlife telemetry studies

    USGS Publications Warehouse

    Poessel, Sharon; Duerr, Adam E.; Hall, Jonathan C.; Braham, Melissa A.; Katzner, Todd

    2018-01-01

    Altitude measurements from wildlife tracking devices, combined with elevation data, are commonly used to estimate the flight altitude of volant animals. However, these data often include measurement error. Understanding this error may improve estimation of flight altitude and benefit applied ecology. There are a number of different approaches that have been used to address this measurement error. These include filtering based on GPS data, filtering based on behaviour of the study species, and use of state-space models to correct measurement error. The effectiveness of these approaches is highly variable. Recent studies have based inference of flight altitude on misunderstandings about avian natural history and technical or analytical tools. In this Commentary, we discuss these misunderstandings and suggest alternative strategies both to resolve some of these issues and to improve estimation of flight altitude. These strategies also can be applied to other measures derived from telemetry data. Synthesis and applications. Our Commentary is intended to clarify and improve upon some of the assumptions made when estimating flight altitude and, more broadly, when using GPS telemetry data. We also suggest best practices for identifying flight behaviour, addressing GPS error, and using flight altitudes to estimate collision risk with anthropogenic structures. Addressing the issues we describe would help improve estimates of flight altitude and advance understanding of the treatment of error in wildlife telemetry studies.
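
    The underlying calculation is simple: height above ground is the tag's altitude minus the ground elevation at the fix, with suspect fixes removed. The column names and error threshold below are illustrative assumptions:

    ```python
    import numpy as np

    def flight_altitude(gps_altitude_m, ground_elevation_m, vertical_error_m, max_error_m=20.0):
        """Altitude above ground, masking fixes whose reported vertical error exceeds a threshold."""
        agl = np.asarray(gps_altitude_m, dtype=float) - np.asarray(ground_elevation_m, dtype=float)
        ok = np.asarray(vertical_error_m, dtype=float) <= max_error_m
        return np.where(ok, agl, np.nan)

    print(flight_altitude([520.0, 515.0, 640.0], [500.0, 505.0, 500.0], [5.0, 35.0, 8.0]))
    # -> [ 20.  nan 140.]
    ```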

  27. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is presented within a Bayesian joint probability framework with alternative error models to investigate the seasonal dependency of the prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and accounts for no seasonal variations. In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connection amongst model parameters from similar months is not considered within the seasonal variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies some distributional restrictions on model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of a hierarchical error model. Three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability when compared to the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the result of the hierarchical error model shows that most of the model parameters are not seasonally variant, except for the error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
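
    The three parameterizations of, say, the error bias can be written generically (the study's exact formulation is not given in the abstract); with \mu_t denoting the bias for calendar month t:

    ```latex
    \text{seasonally invariant:} \quad \mu_t = \mu, \qquad t = 1,\dots,12;
    \text{seasonally variant:}   \quad \mu_1,\dots,\mu_{12} \ \text{estimated independently};
    \text{hierarchical:}         \quad \mu_t \sim N(\bar{\mu},\,\tau^2), \qquad t = 1,\dots,12.
    ```

    The hierarchical form shrinks the twelve monthly parameters toward a common mean, which is what guards against the over-fitting and over-parameterization noted above.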

  28. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions.

    PubMed

    Potter, Gail E; Smieszek, Timo; Sailer, Kerstin

    2015-09-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.

  29. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  30. First-principles energetics of water clusters and ice: A many-body analysis

    NASA Astrophysics Data System (ADS)

    Gillan, M. J.; Alfè, D.; Bartók, A. P.; Csányi, G.

    2013-12-01

    Standard forms of density-functional theory (DFT) have good predictive power for many materials, but are not yet fully satisfactory for cluster, solid, and liquid forms of water. Recent work has stressed the importance of DFT errors in describing dispersion, but we note that errors in other parts of the energy may also contribute. We obtain information about the nature of DFT errors by using a many-body separation of the total energy into its 1-body, 2-body, and beyond-2-body components to analyze the deficiencies of the popular PBE and BLYP approximations for the energetics of water clusters and ice structures. The errors of these approximations are computed by using accurate benchmark energies from the coupled-cluster technique of molecular quantum chemistry and from quantum Monte Carlo calculations. The systems studied are isomers of the water hexamer cluster, the crystal structures Ih, II, XV, and VIII of ice, and two clusters extracted from ice VIII. For the binding energies of these systems, we use the machine-learning technique of Gaussian Approximation Potentials to correct successively for 1-body and 2-body errors of the DFT approximations. We find that even after correction for these errors, substantial beyond-2-body errors remain. The characteristics of the 2-body and beyond-2-body errors of PBE are completely different from those of BLYP, but the errors of both approximations disfavor the close approach of non-hydrogen-bonded monomers. We note the possible relevance of our findings to the understanding of liquid water.
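
    The decomposition referred to is the standard many-body expansion of the total energy of a cluster of monomers, written here in generic notation rather than quoted from the paper:

    ```latex
    E_{\mathrm{tot}} = \sum_i E^{(1)}_i \;+\; \sum_{i<j} \Delta E^{(2)}_{ij} \;+\; \Delta E^{(>2)},
    ```

    where E^{(1)}_i is the energy of monomer i, \Delta E^{(2)}_{ij} the pair interaction beyond the monomer energies, and \Delta E^{(>2)} everything not captured by the 1- and 2-body terms; the DFT error in each term is then assessed against the coupled-cluster and quantum Monte Carlo benchmarks.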

  31. Which strategy for a protein crystallization project?

    NASA Technical Reports Server (NTRS)

    Kundrot, C. E.

    2004-01-01

    The three-dimensional, atomic-resolution protein structures produced by X-ray crystallography over the past 50+ years have led to tremendous chemical understanding of fundamental biochemical processes. The pace of discovery in protein crystallography has increased greatly with advances in molecular biology, crystallization techniques, cryocrystallography, area detectors, synchrotrons and computing. While the methods used to produce single, well-ordered crystals have also evolved over the years in response to increased understanding and advancing technology, crystallization strategies continue to be rooted in trial-and-error approaches. This review summarizes the current approaches in protein crystallization and surveys the first results to emerge from the structural genomics efforts.

  32. Which Strategy for a Protein Crystallization Project?

    NASA Technical Reports Server (NTRS)

    Kundrot, Craig E.

    2003-01-01

    The three-dimensional, atomic-resolution protein structures produced by X-ray crystallography over the past 50+ years have led to tremendous chemical understanding of fundamental biochemical processes. The pace of discovery in protein crystallography has increased greatly with advances in molecular biology, crystallization techniques, cryo-crystallography, area detectors, synchrotrons and computing. While the methods used to produce single, well-ordered crystals have also evolved over the years in response to increased understanding and advancing technology, crystallization strategies continue to be rooted in trial-and-error approaches. This review summarizes the current approaches in protein crystallization and surveys the first results to emerge from the structural genomics efforts.

  33. Implementing technology to improve medication safety in healthcare facilities: a literature review.

    PubMed

    Hidle, Unn

    Medication errors remain one of the most common causes of patient injuries in the United States, with detrimental outcomes including adverse reactions and even death. By developing a better understanding of why and how medication errors occur, preventative measures may be implemented including technological advances. In this literature review, potential methods of reducing medication errors were explored. Furthermore, technology tools available for medication orders and administration are described, including advantages and disadvantages of each system. It was found that technology can be an excellent aid in improving safety of medication administration. However, computer technology cannot replace human intellect and intuition. Nurses should be involved when implementing any new computerized system in order to obtain the most appropriate and user-friendly structure.

  34. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
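
    The kind of data structure described, an error-class by intervention matrix, can be sketched as follows; the error classes (loosely after Rasmussen's skill-, rule-, and knowledge-based levels) and the intervention names are illustrative placeholders, not the AvSP's actual lists:

    ```python
    # matrix[error_class][intervention] = expected effect:
    #   "+" mitigates, "0" neutral, "-" may induce new error opportunities
    error_classes = ["skill-based slip", "rule-based mistake", "knowledge-based mistake"]
    interventions = ["improved display symbology", "electronic checklist", "predictive alerting"]

    matrix = {e: {i: "0" for i in interventions} for e in error_classes}
    matrix["skill-based slip"]["electronic checklist"] = "+"
    matrix["knowledge-based mistake"]["predictive alerting"] = "+"
    matrix["rule-based mistake"]["improved display symbology"] = "-"

    for error_class, row in matrix.items():
        print(f"{error_class:24s}", row)
    ```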

  35. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. However, to the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  36. Geographic approaches to biodiversity conservation: implications of scale and error to landscape planning

    Treesearch

    Curtis H. Flather; Kenneth R. Wilson; Susan A. Shriner

    2009-01-01

    Conservation science is concerned with understanding why distribution and abundance patterns of species vary in time and space. Although these patterns have strong signatures tied to the availability of energy and nutrients, variation in climate, physiographic heterogeneity, and differences in the structural complexity of natural vegetation, it is becoming more...

  17. Computationally mapping sequence space to understand evolutionary protein engineering.

    PubMed

    Armstrong, Kathryn A; Tidor, Bruce

    2008-01-01

    Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures.

  18. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  19. Land surveys show regional variability of historical fire regimes and dry forest structure of the western United States.

    PubMed

    Baker, William L; Williams, Mark A

    2018-03-01

    An understanding of how historical fire and structure in dry forests (ponderosa pine, dry mixed conifer) varied across the western United States remains incomplete. Yet, fire strongly affects ecosystem services, and forest restoration programs are underway. We used General Land Office survey reconstructions from the late 1800s across 11 landscapes covering ~1.9 million ha in four states to analyze spatial variation in fire regimes and forest structure. We first synthesized the state of validation of our methods using 20 modern validations, 53 historical cross-validations, and corroborating evidence. These show our method creates accurate reconstructions with low errors. One independent modern test reported high error, but did not replicate our method and made many calculation errors. Using reconstructed parameters of historical fire regimes and forest structure from our validated methods, forests were found to be non-uniform across the 11 landscapes, but grouped together in three geographical areas. Each had a mixture of fire severities, but was dominated by low-severity fire and low median tree density in Arizona, mixed-severity fire and intermediate to high median tree density in Oregon-California, and high-severity fire and intermediate median tree density in Colorado. Programs to restore fire and forest structure could benefit from regional frameworks, rather than a one-size-fits-all approach. © 2018 by the Ecological Society of America.

  20. Error, blame, and the law in health care--an antipodean perspective.

    PubMed

    Runciman, William B; Merry, Alan F; Tito, Fiona

    2003-06-17

    Patients are frequently harmed by problems arising from the health care process itself. Addressing these problems requires understanding the role of errors, violations, and system failures in their genesis. Problem-solving is inhibited by a tendency to blame those involved, often inappropriately. This has been aggravated by the need to attribute blame before compensation can be obtained through tort and the human failing of attributing blame simply because there has been a serious outcome. Blaming and punishing for errors that are made by well-intentioned people working in the health care system drives the problem of iatrogenic harm underground and alienates people who are best placed to prevent such problems from recurring. On the other hand, failure to assign blame when it is due is also undesirable and erodes trust in the medical profession. Understanding the distinction between blameworthy behavior and inevitable human errors and appreciating the systemic factors that underlie most failures in complex systems are essential for the response to a harmed patient to be informed, fair, and effective in improving safety. It is important to meet society's needs to blame and exact retribution when appropriate. However, this should not be a prerequisite for compensation, which should be appropriately structured, fair, timely, and, ideally, properly funded as an intrinsic part of health care and social security systems.

  1. Statistical Analysis of Big Data on Pharmacogenomics

    PubMed Central

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrices for understanding correlation structure, inverse covariance matrices for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes and proteins and genetic markers for complex diseases, and high-dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big Data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
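
    As an illustration of the kind of estimator reviewed here (not the authors' specific methods), the sketch below fits a sparse inverse covariance (precision) matrix with the graphical lasso, one standard way to recover network structure alongside a regularized covariance estimate; the data, dimensions, and regularization strength are hypothetical.

```python
# Illustrative sketch (not the reviewed estimators themselves): estimate a
# regularized covariance matrix and a sparse precision matrix for network
# modeling from simulated "expression" data.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50            # hypothetical dataset dimensions
X = rng.standard_normal((n_samples, n_genes))
X[:, 1] += 0.8 * X[:, 0]                # induce one correlated pair of "genes"

model = GraphicalLasso(alpha=0.1, max_iter=200)   # alpha controls network sparsity
model.fit(X)

cov = model.covariance_                 # regularized covariance estimate
prec = model.precision_                 # sparse precision matrix (graph structure)
edges = np.argwhere(np.abs(np.triu(prec, k=1)) > 1e-6)
print(f"{len(edges)} network edges retained out of {n_genes * (n_genes - 1) // 2} possible")
```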

  2. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  3. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  4. Local neutral networks help maintain inaccurately replicating ribozymes.

    PubMed

    Szilágyi, András; Kun, Ádám; Szathmáry, Eörs

    2014-01-01

    The error threshold of replication limits the selectively maintainable genome size against recurrent deleterious mutations for most fitness landscapes. In the context of RNA replication a distinction between the genotypic and the phenotypic error threshold has been made; where the latter concerns the maintenance of secondary structure rather than sequence. RNA secondary structure is treated as a proxy for function. The phenotypic error threshold allows higher per digit mutation rates than its genotypic counterpart, and is known to increase with the frequency of neutral mutations in sequence space. Here we show that the degree of neutrality, i.e. the frequency of nearest-neighbour (one-step) neutral mutants is a remarkably accurate proxy for the overall frequency of such mutants in an experimentally verifiable formula for the phenotypic error threshold; this we achieve by the full numerical solution for the concentration of all sequences in mutation-selection balance up to length 16. We reinforce our previous result that currently known ribozymes could be selectively maintained by the accuracy known from the best available polymerase ribozymes. Furthermore, we show that in silico stabilizing selection can increase the mutational robustness of ribozymes due to the fact that they were produced by artificial directional selection in the first place. Our finding offers a better understanding of the error threshold and provides further insight into the plausibility of an ancient RNA world.
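
    For orientation only, the sketch below computes the classic genotypic (Eigen) error threshold from sequence length and the superiority of the master phenotype; it is not the paper's formula for the phenotypic threshold, which additionally depends on the degree of neutrality, and the parameter values are hypothetical.

```python
# Minimal sketch of the classic genotypic (Eigen) error threshold, for
# orientation only; the paper's phenotypic threshold (which rises with the
# frequency of neutral one-step mutants) is not reproduced here.
def genotypic_error_threshold(L, sigma):
    """Maximum per-digit mutation rate p at which a master sequence of length L
    with superiority sigma is still maintained: (1 - p)**L * sigma > 1."""
    return 1.0 - sigma ** (-1.0 / L)

L = 100          # sequence length in digits/nucleotides (hypothetical)
sigma = 10.0     # superiority of the master phenotype (hypothetical)
p_max = genotypic_error_threshold(L, sigma)
print(f"genotypic threshold: p_max = {p_max:.4f} per digit")
# A higher frequency of neutral (one-step) mutants effectively raises this
# threshold, which is the phenotypic effect quantified in the paper.
```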

  5. Learning, memory, and the role of neural network architecture.

    PubMed

    Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M

    2011-06-01

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  6. A test of a linear model of glaucomatous structure-function loss reveals sources of variability in retinal nerve fiber and visual field measurements.

    PubMed

    Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H

    2009-09-01

    Retinal nerve fiber layer (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and the degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.

  7. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
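
    The quantities such a statistical model is built from can be illustrated with a short sketch: per-mode relative eigenvalue (frequency) error and the Modal Assurance Criterion (MAC) between predicted and measured mode shapes. This is a generic modal comparison, not the paper's generic statistical model, and the numbers are invented.

```python
# Illustrative sketch: quantify modeling error per mode as the relative
# frequency (eigenvalue) error and the Modal Assurance Criterion (MAC)
# between predicted and measured mode shapes. All values are hypothetical.
import numpy as np

f_pred = np.array([12.1, 25.4, 33.0, 47.8])    # predicted modal frequencies [Hz]
f_meas = np.array([11.8, 26.0, 34.1, 46.5])    # measured modal frequencies [Hz]
phi_pred = np.random.default_rng(1).standard_normal((10, 4))   # predicted mode shapes
phi_meas = phi_pred + 0.1 * np.random.default_rng(2).standard_normal((10, 4))

freq_error = (f_pred - f_meas) / f_meas        # relative eigenvalue error

def mac(a, b):
    """Modal Assurance Criterion between two real mode-shape vectors (1 = identical)."""
    return (a @ b) ** 2 / ((a @ a) * (b @ b))

for i in range(4):
    m = mac(phi_pred[:, i], phi_meas[:, i])
    print(f"mode {i + 1}: frequency error {freq_error[i]:+.1%}, MAC {m:.3f}")
```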

  8. Understanding EFL Students' Errors in Writing

    ERIC Educational Resources Information Center

    Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti

    2015-01-01

    Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting the learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors occurred in the writing of EFL students. It…

  9. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    PubMed Central

    2013-01-01

    Objectives Health information technology (HIT) research findings suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, models and frameworks used to understand these new types of errors, monitoring of such errors, and methods that can be used to prevent these errors. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  10. Safeguarding the process of drug administration with an emphasis on electronic support tools

    PubMed Central

    Seidling, Hanna M; Lampert, Anette; Lohmann, Kristina; Schiele, Julia T; Send, Alexander J F; Witticke, Diana; Haefeli, Walter E

    2013-01-01

    Aims The aim of this work is to understand the process of drug administration and identify points in the workflow that resulted in interventions by clinical information systems in order to improve patient safety. Methods To identify a generic way to structure the drug administration process we performed peer-group discussions and supplemented these discussions with a literature search for studies reporting errors in drug administration and strategies for their prevention. Results We concluded that the drug administration process might consist of up to 11 sub-steps, which can be grouped into the four sub-processes of preparation, personalization, application and follow-up. Errors in drug handling and administration are diverse and frequent and in many cases not caused by the patient him/herself, but by family members or nurses. Accordingly, different prevention strategies have been set in place with relatively few approaches involving e-health technology. Conclusions A generic structuring of the administration process and particular error-prone sub-steps may facilitate the allocation of prevention strategies and help to identify research gaps. PMID:24007450

  11. Restoring method for missing data of spatial structural stress monitoring based on correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zeyu; Luo, Yaozhi

    2017-07-01

    Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing segments in the monitoring data record will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes at the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected from the three months of the season in which the data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once more than six correlated points are used. The stress baseline value of each construction step should be calculated before interpolating missing data from the construction stage, and the average error is within 10%. The interpolation error for continuous missing data is slightly larger than for discrete missing data. The data missing rate for this method should not exceed 30%. Finally, a measuring point's missing monitoring data are restored to verify the validity of the method.
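
    A minimal sketch of the correlation-based interpolation idea follows: regress the gappy point's stress on correlated points over a period when all are observed, then predict the missing values. The sensors, gap, and coefficients are synthetic; this is not the Hangzhou Olympic Center data or the exact fitting procedure used in the study.

```python
# Minimal sketch of correlation-based gap filling: regress the gappy sensor's
# stress on correlated sensors where all are observed, then predict the gap.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(3)
n = 2000
ref1 = 50 + 10 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 1, n)          # correlated point 1
ref2 = 30 + 8 * np.sin(np.linspace(0, 20, n) + 0.3) + rng.normal(0, 1, n)     # correlated point 2
target = 0.7 * ref1 + 0.4 * ref2 + 5 + rng.normal(0, 1, n)                    # gappy point

missing = np.zeros(n, dtype=bool)
missing[1200:1500] = True                      # a continuous gap (well under 30% of the record)

# Fit on the observed part only (daytime/nighttime could be fitted separately).
A = np.column_stack([ref1, ref2, np.ones(n)])
coef, *_ = lstsq(A[~missing], target[~missing], rcond=None)

filled = target.copy()
filled[missing] = A[missing] @ coef            # restore the missing stress data
err = np.mean(np.abs(filled[missing] - target[missing]) / np.abs(target[missing]))
print(f"mean relative interpolation error over the gap: {err:.1%}")
```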

  12. Understanding Problem-Solving Errors by Students with Learning Disabilities in Standards-Based and Traditional Curricula

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley

    2016-01-01

    Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…

  13. Missed opportunities for diagnosis: lessons learned from diagnostic errors in primary care.

    PubMed

    Goyder, Clare R; Jones, Caroline H D; Heneghan, Carl J; Thompson, Matthew J

    2015-12-01

    Because of the difficulties inherent in diagnosis in primary care, it is inevitable that diagnostic errors will occur. However, despite the important consequences associated with diagnostic errors and their estimated high prevalence, teaching and research on diagnostic error is a neglected area. To ascertain the key learning points from GPs' experiences of diagnostic errors and approaches to clinical decision making associated with these. Secondary analysis of 36 qualitative interviews with GPs in Oxfordshire, UK. Two datasets of semi-structured interviews were combined. Questions focused on GPs' experiences of diagnosis and diagnostic errors (or near misses) in routine primary care and out of hours. Interviews were audiorecorded, transcribed verbatim, and analysed thematically. Learning points include GPs' reliance on 'pattern recognition' and the failure of this strategy to identify atypical presentations; the importance of considering all potentially serious conditions using a 'restricted rule out' approach; and identifying and acting on a sense of unease. Strategies to help manage uncertainty in primary care were also discussed. Learning from previous examples of diagnostic errors is essential if these events are to be reduced in the future and this should be incorporated into GP training. At a practice level, learning points from experiences of diagnostic errors should be discussed more frequently; and more should be done to integrate these lessons nationally to understand and characterise diagnostic errors. © British Journal of General Practice 2015.

  14. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  15. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  16. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
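
    The sketch below shows a compact Saltelli-style Monte Carlo estimator of first-order and total Sobol indices applied to a toy snow-mass model whose inputs are forcing-error parameters (precipitation bias, temperature bias, random radiation error). It illustrates the method only; the Utah Energy Balance model, the study's error scenarios, and its sample sizes are not reproduced.

```python
# Compact Saltelli-style Monte Carlo estimator of first-order (S) and total (ST)
# Sobol sensitivity indices for a toy snow-mass model whose inputs are
# forcing-error parameters. Illustration only, not the Utah Energy Balance setup.
import numpy as np

rng = np.random.default_rng(42)

def toy_model(x):
    """x columns: precip bias [%], temp bias [K], radiation random-error std [W/m2]."""
    p_bias, t_bias, r_std = x[:, 0], x[:, 1], x[:, 2]
    return 500 * (1 + p_bias / 100) - 40 * t_bias - 0.5 * r_std   # peak SWE proxy [mm]

k, N = 3, 20000
lo, hi = [-20, -2, 0], [20, 2, 30]
A = rng.uniform(lo, hi, size=(N, k))          # sample matrix A
B = rng.uniform(lo, hi, size=(N, k))          # independent sample matrix B
YA, YB = toy_model(A), toy_model(B)
varY = np.var(np.concatenate([YA, YB]))

for i, name in enumerate(["precip bias", "temp bias", "radiation error"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # A with column i taken from B
    YABi = toy_model(ABi)
    S = np.mean(YB * (YABi - YA)) / varY      # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((YA - YABi) ** 2) / varY   # total index (Jansen estimator)
    print(f"{name:16s}  S = {S:5.2f}   ST = {ST:5.2f}")
```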

  17. How genetic errors in GPCRs affect their function: Possible therapeutic strategies

    PubMed Central

    Stoy, Henriette; Gurevich, Vsevolod V.

    2015-01-01

    Activating and inactivating mutations in numerous human G protein-coupled receptors (GPCRs) are associated with a wide range of disease phenotypes. Here we use several class A GPCRs with a particularly large set of identified disease-associated mutations, many of which were biochemically characterized, along with known GPCR structures and current models of GPCR activation, to understand the molecular mechanisms yielding pathological phenotypes. Based on this mechanistic understanding we also propose different therapeutic approaches, both conventional, using small molecule ligands, and novel, involving gene therapy. PMID:26229975

  18. Holistic approach for overlay and edge placement error to meet the 5nm technology node requirements

    NASA Astrophysics Data System (ADS)

    Mulkens, Jan; Slachter, Bram; Kubis, Michael; Tel, Wim; Hinnen, Paul; Maslow, Mark; Dillen, Harm; Ma, Eric; Chou, Kevin; Liu, Xuedong; Ren, Weiming; Hu, Xuerang; Wang, Fei; Liu, Kevin

    2018-03-01

    In this paper, we discuss the metrology methods and error budget that describe the edge placement error (EPE). EPE quantifies the pattern fidelity of a device structure made in a multi-patterning scheme. Here the pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources from the different process steps. EPE is computed by combining optical and e-beam metrology data. We show that a high-NA optical scatterometer can be used to densely measure in-device CD and overlay errors. A large-field e-beam system enables massive CD metrology, which is used to characterize the local CD error. The local CD distribution needs to be characterized beyond 6 sigma, which requires a high-throughput e-beam system. We present in this paper the first images of a multi-beam e-beam inspection system. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As a use case, we evaluated a 5-nm logic patterning process based on Self-Aligned Quadruple Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography.
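
    As a hedged illustration of how overlay and CD contributions can be combined into a statistical EPE estimate (not the paper's specific budget), the sketch below assumes independent, zero-mean Gaussian contributions, with the overlay error entering fully and each feature's CD error entering at half weight; all values are hypothetical.

```python
# Simple statistical edge placement error (EPE) estimate combining overlay and
# half-CD contributions in quadrature, assuming independent Gaussian errors.
# This is one common simplification, not the paper's budget; numbers are made up.
import math

sigma_overlay = 2.0   # nm, overlay error (1 sigma) between the two patterning steps
sigma_cd_line = 1.5   # nm, CD error of the line feature (1 sigma, global + local)
sigma_cd_cut  = 1.8   # nm, CD error of the cut/block feature (1 sigma)

# An edge shifts by the full overlay error but only half of each feature's CD error.
sigma_epe = math.sqrt(sigma_overlay**2 + (sigma_cd_line / 2)**2 + (sigma_cd_cut / 2)**2)
print(f"EPE ~ {sigma_epe:.2f} nm (1 sigma); a 3-sigma budget would be {3 * sigma_epe:.1f} nm")
```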

  19. Electrostatic Structure and Double-Probe Performance in Tenuous Plasmas

    NASA Astrophysics Data System (ADS)

    Cully, C. M.; Ergun, R. E.

    2006-12-01

    Many in-situ plasma instruments are affected by the local electrostatic structure surrounding the spacecraft. In order to better understand this structure, we have developed a fully 3-dimensional self-consistent model that uses realistic spacecraft geometry, including thin (<1 mm) wires and long (>100m) booms, with open boundary conditions. One of the more surprising results is that in tenuous plasmas, the charge on the booms can dominate over the charge on the spacecraft body. For instruments such as electric field double probes and boom-mounted low-energy particle detectors, this challenges the existing paradigm: long booms do not allow the probes to escape the spacecraft potential. Instead, the potential structure simply expands as the boom is deployed. We then apply our model to the double-probe Electric Field and Waves (EFW) instruments on Cluster, and predict the magnitudes of the main error sources. The overall error budget is consistent with experiment, and the model yields some additional interesting insights. We show that the charge in the photoelectron cloud is relatively unimportant, and that the spacecraft potential is typically underestimated by about 20% by double-probe experiments.

  20. A Systems Modeling Approach for Risk Management of Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2012-01-01

    The main cause of commanding errors is often (but not always) procedural: a lack of maturity in the processes, incompleteness of requirements, or a lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes in standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to making corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.

  1. Wide-field LOFAR-LBA power-spectra analyses: Impact of calibration, polarization leakage and ionosphere

    NASA Astrophysics Data System (ADS)

    Gehlot, Bharat K.; Koopmans, Léon V. E.

    2018-05-01

    Contamination due to foregrounds, calibration errors and ionospheric effects poses major challenges in the detection of the cosmic 21 cm signal in various Epoch of Reionization (EoR) experiments. We present the results of a study of a field centered on 3C196 using LOFAR Low Band observations, in which we quantify various wide-field and calibration effects such as gain errors, polarized foregrounds, and ionospheric effects. We observe a 'pitchfork' structure in the power spectrum of the polarized intensity in delay-baseline space, which leaks into the modes beyond the instrumental horizon. We show that this structure arises due to strong instrumental polarization leakage (~30%) towards Cas A, which is far away from the primary field of view. We measure a small ionospheric diffractive scale towards Cas A, resembling pure Kolmogorov turbulence. Our work provides insights into the nature of the aforementioned effects and into mitigating them in future Cosmic Dawn observations.

  2. What do IPAQ questions mean to older adults? Lessons from cognitive interviews

    PubMed Central

    2010-01-01

    Background Most questionnaires used for physical activity (PA) surveillance have been developed for adults aged ≤65 years. Given the health benefits of PA for older adults and the aging of the population, it is important to include adults aged 65+ years in PA surveillance. However, few studies have examined how well older adults understand PA surveillance questionnaires. This study aimed to document older adults' understanding of questions from the International PA Questionnaire (IPAQ), which is used worldwide for PA surveillance. Methods Participants were 41 community-dwelling adults aged 65-89 years. They each completed IPAQ in a face-to-face semi-structured interview, using the "think-aloud" method, in which they expressed their thoughts out loud as they answered IPAQ questions. Interviews were transcribed and coded according to a three-stage model: understanding the intent of the question; performing the primary task (conducting the mental operations required to formulate a response); and response formatting (mapping the response into pre-specified response options). Results Most difficulties occurred during the understanding and performing the primary task stages. Errors included recalling PA in an "average" week, not in the previous 7 days; including PA lasting <10 minutes/session; reporting the same PA twice or thrice; and including the total time of an activity for which only a part of that time was at the intensity specified in the question. Participants were unclear what activities fitted within a question's scope and used a variety of strategies for determining the frequency and duration of their activities. Participants experienced more difficulties with the moderate-intensity PA and walking questions than with the vigorous-intensity PA questions. The sitting time question, particularly difficult for many participants, required the use of an answer strategy different from that used to answer questions about PA. Conclusions These findings indicate a need for caution in administering IPAQ to adults aged ≥65 years. Most errors resulted in over-reporting, although errors resulting in under-reporting were also noted. Given the nature of the errors made by participants, it is possible that similar errors occur when IPAQ is used in younger populations and that the errors identified could be minimized with small modifications to IPAQ. PMID:20459758

  3. Predicting the thermal/structural performance of the atmospheric trace molecules spectroscopy /ATMOS/ Fourier transform spectrometer

    NASA Technical Reports Server (NTRS)

    Miller, J. M.

    1980-01-01

    ATMOS is a Fourier transform spectrometer to measure atmospheric trace molecules over a spectral range of 2-16 microns. Assessment of the system performance of ATMOS includes evaluation of optical system errors induced by thermal and structural effects. To assess these errors, error budgets are assembled during system engineering tasks, and line-of-sight and wavefront deformation predictions (using operational thermal and vibration environments and computer models) are subsequently compared to the error budgets. This paper discusses the thermal/structural error budgets, the modelling and analysis methods used to predict thermally and structurally induced errors, and the comparisons showing that the predictions are within the error budgets.

  4. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

    Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" bitbucket.org/ecus/sedproxy numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions, and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time-slices. To address this, we derive closed-form expressions for power spectral density of the various error sources. The power spectra describe both the magnitude and autocorrelation structure of the error, allowing timescale dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time-series of the last millennia, Holocene, and the deglaciation. While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal re-constructability, conditional on the climate and sampling conditions.
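
    One way to see the analytical idea is the sketch below: given an assumed error power spectral density, the error variance remaining after averaging a record over a window of length T is the integral of that spectrum against the squared transfer function of the averaging kernel. The spectrum shape and numbers are hypothetical and do not reproduce the sedproxy error model.

```python
# Sketch of timescale-dependent error: integrate an assumed error power
# spectral density S(f) against the squared transfer function of a boxcar
# averaging window of length T. PSD shape and parameters are hypothetical.
import numpy as np

def boxcar_transfer(f, T):
    """Squared amplitude response of a boxcar average of length T (sinc^2)."""
    return np.sinc(f * T) ** 2           # np.sinc(x) = sin(pi*x) / (pi*x)

def error_variance(T, white=0.5, red_amp=0.2, f0=1 / 50):
    """Error variance after averaging, for a white + red error spectrum."""
    f = np.linspace(1e-5, 0.5, 200000)                # cycles/yr, up to annual Nyquist
    S = white + red_amp / (1 + (f / f0) ** 2)         # hypothetical error spectrum
    return np.trapz(S * boxcar_transfer(f, T), f)

for T in [1, 10, 100, 1000]:                          # averaging timescale in years
    print(f"T = {T:5d} yr   error variance = {error_variance(T):.4f}")
```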

  5. Diffusion Tensor Tractography Reveals Disrupted Structural Connectivity during Brain Aging

    NASA Astrophysics Data System (ADS)

    Lin, Lan; Tian, Miao; Wang, Qi; Wu, Shuicai

    2017-10-01

    Brain aging is one of the most crucial biological processes that entail many physical, biological, chemical, and psychological changes, and it is also a major risk factor for most common neurodegenerative diseases. To improve the quality of life for the elderly, it is important to understand how the brain changes during the normal aging process. We compared diffusion tensor imaging (DTI)-based brain networks in a cohort of 75 healthy old subjects by using graph theory metrics to describe the anatomical networks and connectivity patterns, and network-based statistic (NBS) analysis was used to identify pairs of regions with altered structural connectivity. The NBS analysis revealed a significant network comprising nine distinct fiber bundles linking 10 different brain regions that showed altered white matter structure in the young-old group compared with the middle-aged group (p < .05, family-wise error-corrected). Our results might guide future studies and help to gain a better understanding of brain aging.

  6. Being a Victim of Medical Error in Brazil: An (Un)Real Dilemma

    PubMed Central

    Mendonça, Vitor Silva; Custódio, Eda Marconi

    2016-01-01

    Medical error stems from inadequate professional conduct that is capable of producing harm to life or exacerbating the health of another, whether through act or omission. This situation has become increasingly common in Brazil and worldwide. In this study, the aim was to understand what being the victim of medical error is like and to investigate the circumstances imposed on this condition of victims in Brazil. A semi-structured interview was conducted with twelve people who had gone through situations of medical error in their lives, creating a space for narratives of their experiences and deep reflection on the phenomenon. The concept of medical error has a negative connotation, often being associated with the incompetence of a medical professional. Medical error in Brazil is demonstrated by low-quality professional performance and represents the current reality of the country because of the common lack of respect and consideration for patients. Victims often remark on their loss of identity, as their social functions have been interrupted and they do not expect to regain such. It was found, however, little assumption of error in the involved doctors’ discourses and attitudes, which felt a need to judge the medical conduct in an attempt to assert their rights. Medical error in Brazil presents a punitive character and is little discussed in medical and scientific circles. The stigma of medical error is closely connected to the value and cultural judgments of the country, making it difficult to accept, both by victims and professionals. PMID:27403461

  7. Cerebellar and prefrontal cortex contributions to adaptation, strategies, and reinforcement learning.

    PubMed

    Taylor, Jordan A; Ivry, Richard B

    2014-01-01

    Traditionally, motor learning has been studied as an implicit learning process, one in which movement errors are used to improve performance in a continuous, gradual manner. The cerebellum figures prominently in this literature given well-established ideas about the role of this system in error-based learning and the production of automatized skills. Recent developments have brought into focus the relevance of multiple learning mechanisms for sensorimotor learning. These include processes involving repetition, reinforcement learning, and strategy utilization. We examine these developments, considering their implications for understanding cerebellar function and how this structure interacts with other neural systems to support motor learning. Converging lines of evidence from behavioral, computational, and neuropsychological studies suggest a fundamental distinction between processes that use error information to improve action execution or action selection. While the cerebellum is clearly linked to the former, its role in the latter remains an open question. © 2014 Elsevier B.V. All rights reserved.

  8. Cerebellar and Prefrontal Cortex Contributions to Adaptation, Strategies, and Reinforcement Learning

    PubMed Central

    Taylor, Jordan A.; Ivry, Richard B.

    2014-01-01

    Traditionally, motor learning has been studied as an implicit learning process, one in which movement errors are used to improve performance in a continuous, gradual manner. The cerebellum figures prominently in this literature given well-established ideas about the role of this system in error-based learning and the production of automatized skills. Recent developments have brought into focus the relevance of multiple learning mechanisms for sensorimotor learning. These include processes involving repetition, reinforcement learning, and strategy utilization. We examine these developments, considering their implications for understanding cerebellar function and how this structure interacts with other neural systems to support motor learning. Converging lines of evidence from behavioral, computational, and neuropsychological studies suggest a fundamental distinction between processes that use error information to improve action execution or action selection. While the cerebellum is clearly linked to the former, its role in the latter remains an open question. PMID:24916295

  9. Cause-and-effect mapping of critical events.

    PubMed

    Graves, Krisanne; Simmons, Debora; Galley, Mark D

    2010-06-01

    Health care errors are routinely reported in the scientific and public press and have become a major concern for most Americans. In learning to identify and analyze errors, health care can develop some of the skills of a learning organization, including the concept of systems thinking. Modern experts in improving quality have been working in other high-risk industries since the 1920s, making structured organizational changes through various frameworks for quality methods including continuous quality improvement and total quality management. When using these tools, it is important to understand systems thinking and the concept of processes within organizations. Within these frameworks of improvement, several tools can be used in the analysis of errors. This article introduces a robust tool with a broad analytical view consistent with systems thinking, called CauseMapping (ThinkReliability, Houston, TX, USA), which can be used to systematically analyze the process and the problem at the same time. Copyright 2010 Elsevier Inc. All rights reserved.

  10. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  11. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high-performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in this real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.
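
    A toy sketch of the underlying idea (not the authors' data-driven error model or surrogate construction) is given below: a random-walk Metropolis sampler jointly infers a physical parameter and the coefficient of a simple polynomial discrepancy term, and the calibrated parameter is biased when the discrepancy term is switched off.

```python
# Toy sketch of Bayesian calibration with and without a structural-error
# (discrepancy) term, using random-walk Metropolis. The "simulator" is a
# stand-in linear model; the truth contains a quadratic term the simulator
# misses. Not the authors' method; all numbers are synthetic.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 40)
y_obs = 2.0 * x + 0.8 * x**2 + rng.normal(0, 0.05, x.size)   # truth has a quadratic term

def simulator(theta):                      # imperfect model: linear only
    return theta * x

def log_post(theta, b, use_discrepancy):
    delta = b * x**2 if use_discrepancy else 0.0              # discrepancy (error) model
    resid = y_obs - simulator(theta) - delta
    return -0.5 * np.sum((resid / 0.05) ** 2) - (theta**2 + b**2) / 20.0   # weak priors

def metropolis(use_discrepancy, n=30000, step=0.01):
    theta, b = 0.0, 0.0
    lp = log_post(theta, b, use_discrepancy)
    keep = []
    for _ in range(n):
        t_new, b_new = theta + rng.normal(0, step), b + rng.normal(0, step)
        lp_new = log_post(t_new, b_new, use_discrepancy)
        if np.log(rng.uniform()) < lp_new - lp:
            theta, b, lp = t_new, b_new, lp_new
        keep.append(theta)
    return np.array(keep[n // 2:])                             # discard burn-in

for flag in (False, True):
    post = metropolis(flag)
    print(f"discrepancy term {'on ' if flag else 'off'}: "
          f"theta = {post.mean():.2f} +/- {post.std():.2f}  (true physical value 2.0)")
```

    In this toy case the no-discrepancy calibration over-adjusts the parameter to absorb the missing quadratic term, which mirrors the compensation effect described in the abstract.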

  12. Foot Structure in Japanese Speech Errors: Normal vs. Pathological

    ERIC Educational Resources Information Center

    Miyakoda, Haruko

    2008-01-01

    Although many studies of speech errors have been presented in the literature, most have focused on errors occurring at either the segmental or feature level. Few, if any, studies have dealt with the prosodic structure of errors. This paper aims to fill this gap by taking up the issue of prosodic structure in Japanese speech errors, with a focus on…

  13. Operational Data Reduction Procedure for Determining Density and Vertical Structure of the Martian Upper Atmosphere from Mars Global Surveyor Accelerometer Measurements

    NASA Technical Reports Server (NTRS)

    Cancro, George J.; Tolson, Robert H.; Keating, Gerald M.

    1998-01-01

    The success of aerobraking by the Mars Global Surveyor (MGS) spacecraft was partly due to the analysis of MGS accelerometer data. Accelerometer data was used to determine the effect of the atmosphere on each orbit, to characterize the nature of the atmosphere, and to predict the atmosphere for future orbits. To interpret the accelerometer data, a data reduction procedure was developed to produce density estimations utilizing inputs from the spacecraft, the Navigation Team, and pre-mission aerothermodynamic studies. This data reduction procedure was based on the calculation of aerodynamic forces from the accelerometer data by considering acceleration due to gravity gradient, solar pressure, angular motion of the MGS, instrument bias, thruster activity, and a vibration component due to the motion of the damaged solar array. Methods were developed to calculate all of the acceleration components including a 4 degree of freedom dynamics model used to gain a greater understanding of the damaged solar array. The total error inherent to the data reduction procedure was calculated as a function of altitude and density considering contributions from ephemeris errors, errors in force coefficient, and instrument errors due to bias and digitization. Comparing the results from this procedure to the data of other MGS Teams has demonstrated that this procedure can quickly and accurately describe the density and vertical structure of the Martian upper atmosphere.
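
    The core density retrieval can be sketched in a few lines: once non-aerodynamic accelerations (gravity gradient, solar pressure, thruster activity, bias, solar-array vibration) have been removed, the remaining drag acceleration gives density through the drag equation. The spacecraft values below are hypothetical stand-ins, not the actual MGS numbers.

```python
# Sketch of accelerometer-based density retrieval: after non-aerodynamic
# accelerations are removed, rho = 2 * m * a_drag / (C_D * A * v_rel**2).
# All spacecraft values are hypothetical, not the actual MGS parameters.
m      = 760.0      # spacecraft mass [kg]
C_D    = 2.0        # drag coefficient from pre-mission aerothermodynamic studies
A      = 17.0       # reference area [m^2]
v_rel  = 4800.0     # speed relative to the atmosphere [m/s]
a_drag = 2.0e-3     # drag acceleration left after removing other accelerations [m/s^2]

rho = 2.0 * m * a_drag / (C_D * A * v_rel**2)
print(f"density = {rho:.3e} kg/m^3")
```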

  14. Broadening our understanding of clinical quality: from attribution error to situated cognition.

    PubMed

    Artino, A R; Durning, S J; Waechter, D M; Leary, K L; Gilliland, W R

    2012-02-01

    The tendency to overestimate the influence of personal characteristics on outcomes, and to underestimate the influence of situational factors, is known as the fundamental attribution error. We argue that medical-education researchers and policy makers may be guilty of this error in their quest to understand clinical quality. We suggest that to truly understand clinical quality, they must examine situational factors, which often have a strong influence on the quality of clinical encounters.

  15. An organizational approach to understanding patient safety and medical errors.

    PubMed

    Kaissi, Amer

    2006-01-01

    Progress in patient safety, or lack thereof, is a cause for great concern. In this article, we argue that the patient safety movement has failed to reach its goals of eradicating or, at least, significantly reducing errors because of an inappropriate focus on provider and patient-level factors with no real attention to the organizational factors that affect patient safety. We describe an organizational approach to patient safety using different organizational theory perspectives and make several propositions to push patient safety research and practice in a direction that is more likely to improve care processes and outcomes. From a Contingency Theory perspective, we suggest that health care organizations, in general, operate under a misfit between contingencies and structures. This misfit is mainly due to lack of flexibility, cost containment, and lack of regulations, thus explaining the high level of errors committed in these organizations. From an organizational culture perspective, we argue that health care organizations must change their assumptions, beliefs, values, and artifacts to change their culture from a culture of blame to a culture of safety and thus reduce medical errors. From an organizational learning perspective, we discuss how reporting, analyzing, and acting on error information can result in reduced errors in health care organizations.

  16. Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.

    PubMed

    Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z

    2012-07-01

    Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.
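
    For context only, the sketch below applies a generic linearly constrained minimum variance (LCMV) spatial filter to a simulated combined magnetometer-plus-gradiometer data matrix; the paper's actual contribution, estimating a multiplicative lead-field error by minimizing output power in the worst case, is not reproduced here.

```python
# Generic LCMV spatial filter over a combined (simulated) MEG sensor array.
# This shows only the spatial-filter framework the method operates in, not the
# paper's worst-case multiplicative-error estimation. All data are simulated.
import numpy as np

rng = np.random.default_rng(11)
n_sensors, n_samples = 306, 5000
L = rng.standard_normal((n_sensors, 1))               # lead field of one source (assumed known)
s = np.sin(2 * np.pi * 10 * np.arange(n_samples) / 1000.0)   # 10 Hz source activity
data = L @ s[None, :] + 0.5 * rng.standard_normal((n_sensors, n_samples))

C = np.cov(data)                                       # sensor covariance
C += 1e-3 * np.trace(C) / n_sensors * np.eye(n_sensors)      # diagonal regularization
Cinv = np.linalg.inv(C)
w = Cinv @ L @ np.linalg.inv(L.T @ Cinv @ L)           # LCMV weights: unit gain at the source
source_estimate = (w.T @ data).ravel()
print("correlation with true source:", np.corrcoef(source_estimate, s)[0, 1].round(3))
```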

  17. Boosted ARTMAP: modifications to fuzzy ARTMAP motivated by boosting theory.

    PubMed

    Verzi, Stephen J; Heileman, Gregory L; Georgiopoulos, Michael

    2006-05-01

    In this paper, several modifications to the Fuzzy ARTMAP neural network architecture are proposed for conducting classification in complex, possibly noisy, environments. The goal of these modifications is to improve upon the generalization performance of Fuzzy ART-based neural networks, such as Fuzzy ARTMAP, in these situations. One of the major difficulties of employing Fuzzy ARTMAP on such learning problems involves over-fitting of the training data. Structural risk minimization is a machine-learning framework that addresses the issue of over-fitting by providing a backbone for analysis as well as an impetus for the design of better learning algorithms. The theory of structural risk minimization reveals a trade-off between training error and classifier complexity in reducing generalization error, which will be exploited in the learning algorithms proposed in this paper. Boosted ART extends Fuzzy ART by allowing the spatial extent of each cluster formed to be adjusted independently. Boosted ARTMAP generalizes upon Fuzzy ARTMAP by allowing non-zero training error in an effort to reduce the hypothesis complexity and hence improve overall generalization performance. Although Boosted ARTMAP is strictly speaking not a boosting algorithm, the changes it encompasses were motivated by the goals that one strives to achieve when employing boosting. Boosted ARTMAP is an on-line learner, it does not require excessive parameter tuning to operate, and it reduces precisely to Fuzzy ARTMAP for particular parameter values. Another architecture described in this paper is Structural Boosted ARTMAP, which uses both Boosted ART and Boosted ARTMAP to perform structural risk minimization learning. Structural Boosted ARTMAP will allow comparison of the capabilities of off-line versus on-line learning as well as empirical risk minimization versus structural risk minimization using Fuzzy ARTMAP-based neural network architectures. Both empirical and theoretical results are presented to enhance the understanding of these architectures.

  18. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. To explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  19. Optimal estimation of large structure model errors. [in Space Shuttle controller design]

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
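
    A minimal numpy sketch of the least-squares idea summarized above, using invented dimensions and a hypothetical sensitivity matrix A that maps model-error parameters to measurement residuals (these names are assumptions for illustration, not the paper's notation). The projector-based split shows the decomposition of an arbitrary model error into an estimable component and an orthogonal, unobservable remainder.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical setup: few measurements, many model-error parameters (underdetermined).
      n_meas, n_err = 6, 10
      A = rng.normal(size=(n_meas, n_err))       # sensitivity of residuals to model errors
      e_true = rng.normal(size=n_err)            # "true" model error (unknown in practice)
      r = A @ e_true + 0.01 * rng.normal(size=n_meas)   # measured residuals

      # Least-squares estimate minimizing the quadratic functional ||r - A e||^2
      # (minimum-norm solution, since the problem is underdetermined).
      e_hat, *_ = np.linalg.lstsq(A, r, rcond=None)

      # Decompose the true error into two orthogonal components: the part observable
      # through A (projection onto the row space of A) and the unobservable remainder.
      P = np.linalg.pinv(A) @ A
      e_obs = P @ e_true
      e_unobs = e_true - e_obs
      print(np.dot(e_obs, e_unobs))              # ~0: the two components are orthogonal
      print(np.linalg.norm(e_hat - e_obs))       # the estimate approximates the observable part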

  20. Exploring behavioural determinants relating to health professional reporting of medication errors: a qualitative study using the Theoretical Domains Framework.

    PubMed

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-07-01

    Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included the beliefs of the consequences of reporting (lack of any feedback following reporting and impacting professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.

  1. Exploring the Effectiveness of a Measurement Error Tutorial in Helping Teachers Understand Score Report Results

    ERIC Educational Resources Information Center

    Zapata-Rivera, Diego; Zwick, Rebecca; Vezzu, Margaret

    2016-01-01

    The goal of this study was to explore the effectiveness of a short web-based tutorial in helping teachers to better understand the portrayal of measurement error in test score reports. The short video tutorial included both verbal and graphical representations of measurement error. Results showed a significant difference in comprehension scores…

  2. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it incorporates concepts derived from fractal and chaos theory and suggests a re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulations used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from a simulated steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation grounded in the understanding of patterns of human error can be gleaned, helping to reduce and prevent undesirable events.

  3. Medication reconciliation accuracy and patient understanding of intended medication changes on hospital discharge.

    PubMed

    Ziaeian, Boback; Araujo, Katy L B; Van Ness, Peter H; Horwitz, Leora I

    2012-11-01

    Adverse drug events after hospital discharge are common and often serious. These events may result from provider errors or patient misunderstanding. To determine the prevalence of medication reconciliation errors and patient misunderstanding of discharge medications. Prospective cohort study of patients over 64 years of age admitted with heart failure, acute coronary syndrome or pneumonia and discharged to home. We assessed medication reconciliation accuracy by comparing admission to discharge medication lists and reviewing charts to resolve discrepancies. Medication reconciliation changes that did not appear intentional were classified as suspected provider errors. We assessed patient understanding of intended medication changes through post-discharge interviews. Understanding was scored as full, partial or absent. We tested the association of relevance of the medication to the primary diagnosis with medication accuracy and with patient understanding, accounting for patient demographics, medical team and primary diagnosis. A total of 377 patients were enrolled in the study. A total of 565/2534 (22.3%) of admission medications were redosed or stopped at discharge. Of these, 137 (24.2%) were classified as suspected provider errors. Excluding suspected errors, patients had no understanding of 142/205 (69.3%) of redosed medications, 182/223 (81.6%) of stopped medications, and 493 (62.0%) of new medications. Altogether, 307 patients (81.4%) either experienced a provider error or had no understanding of at least one intended medication change. Providers were significantly more likely to make an error on a medication unrelated to the primary diagnosis than on a medication related to the primary diagnosis (odds ratio (OR) 4.56, 95% confidence interval (CI) 2.65, 7.85, p < 0.001). Patients were also significantly more likely to misunderstand medication changes unrelated to the primary diagnosis (OR 2.45, 95% CI 1.68, 3.55, p < 0.001). Medication reconciliation and patient understanding are inadequate in older patients post-discharge. Errors and misunderstandings are particularly common in medications unrelated to the primary diagnosis. Efforts to improve medication reconciliation and patient understanding should not be disease-specific, but should be focused on the whole patient.

  4. 4f fine-structure levels as the dominant error in the electronic structures of binary lanthanide oxides.

    PubMed

    Huang, Bolong

    2016-04-05

    The ground-state 4f fine-structure levels in the intrinsic optical transition gaps between the 2p and 5d orbitals of lanthanide sesquioxides (Ln2O3, Ln = La…Lu) were calculated by a two-way crossover search for the U parameters for DFT + U calculations. The original 4f-shell potential perturbation in the linear response method was reformulated within the constraint volume of the given solids. The band structures were also calculated. This method yields nearly constant optical transition gaps between Ln-5d and O-2p orbitals, with magnitudes of 5.3 to 5.5 eV. This result verifies that the error in the band structure calculations for Ln2O3 is dominated by the inaccuracies in the predicted 4f levels in the 2p-5d transition gaps, which strongly and non-linearly depend on the on-site Hubbard U. The relationship between the 4f occupancies and Hubbard U is non-monotonic and is entirely different from that for materials with 3d or 4d orbitals, such as transition metal oxides. This new linear response DFT + U method can provide a simpler understanding of the electronic structure of Ln2O3 and enables a quick examination of the electronic structures of lanthanide solids before hybrid functional or GW calculations. © 2015 Wiley Periodicals, Inc.

  5. Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students

    PubMed Central

    Malone, Amelia S.; Fuchs, Lynn S.

    2016-01-01

    The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of committing errors. Students (n = 227) completed a 9-item ordering test. A high proportion (81%) of problems were completed incorrectly. Most (65% of) errors were due to students misapplying whole number logic to fractions. Fraction-magnitude estimation skill, but not part-whole understanding, significantly predicted the probability of committing this type of error. Implications for practice are discussed. PMID:26966153

  6. Analyzing students’ errors on fractions in the number line

    NASA Astrophysics Data System (ADS)

    Widodo, S.; Ikhwanudin, T.

    2018-05-01

    The objective of this study was to identify the types of errors students make when dealing with fractions on the number line. The study used a qualitative, descriptive method and involved 31 sixth-grade students at a primary school in Purwakarta, Indonesia. The results show four types of student errors: unit confusion, tick-mark interpretation error, partitioning and un-partitioning error, and estimation error. We recommend that teachers strengthen students' understanding of the unit when studying fractions, help students understand tick-mark interpretation, remind students of the importance of partitioning and un-partitioning strategies, and teach effective estimation strategies.

  7. Structural features that predict real-value fluctuations of globular proteins.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2012-05-01

    It is crucial to consider dynamics for understanding the biological function of proteins. We used a large number of molecular dynamics (MD) trajectories of nonhomologous proteins as references and examined static structural features of proteins that are most relevant to fluctuations. We examined correlation of individual structural features with fluctuations and further investigated effective combinations of features for predicting the real value of residue fluctuations using support vector regression (SVR). It was found that some structural features have higher correlation than crystallographic B-factors with fluctuations observed in MD trajectories. Moreover, SVR that uses combinations of static structural features showed accurate prediction of fluctuations with an average Pearson's correlation coefficient of 0.669 and a root mean square error of 1.04 Å. This correlation coefficient is higher than the one observed in predictions by the Gaussian network model (GNM). An advantage of the developed method over the GNMs is that the former predicts the real value of fluctuation. The results help improve our understanding of relationships between protein structure and fluctuation. Furthermore, the developed method provides a convenient practical way to predict fluctuations of proteins using easily computed static structural features of proteins. Copyright © 2012 Wiley Periodicals, Inc.
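
    The regression step lends itself to a brief illustration. The sketch below trains an RBF support vector regressor on synthetic stand-ins for static structural features and reports the same two metrics quoted in the abstract (Pearson correlation and RMSE); the feature values, target values, and hyperparameters are invented for the example and are not those used in the study.

      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)

      # Synthetic stand-ins for per-residue static features (e.g. contact number,
      # solvent accessibility, B-factor); real features would come from the structures.
      n_residues = 500
      X = rng.normal(size=(n_residues, 3))
      # Synthetic "MD fluctuation" target loosely related to the features.
      y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=n_residues)

      train, test = slice(0, 400), slice(400, None)
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      model.fit(X[train], y[train])
      pred = model.predict(X[test])

      r, _ = pearsonr(pred, y[test])
      rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
      print(f"Pearson r = {r:.2f}, RMSE = {rmse:.2f} (synthetic data)")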

  8. Structural features that predict real-value fluctuations of globular proteins

    PubMed Central

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2012-01-01

    It is crucial to consider dynamics for understanding the biological function of proteins. We used a large number of molecular dynamics trajectories of non-homologous proteins as references and examined static structural features of proteins that are most relevant to fluctuations. We examined correlation of individual structural features with fluctuations and further investigated effective combinations of features for predicting the real-value of residue fluctuations using support vector regression. It was found that some structural features have higher correlation than crystallographic B-factors with fluctuations observed in molecular dynamics trajectories. Moreover, support vector regression that uses combinations of static structural features showed accurate prediction of fluctuations with an average Pearson’s correlation coefficient of 0.669 and a root mean square error of 1.04 Å. This correlation coefficient is higher than the one observed for the prediction by the Gaussian network model. An advantage of the developed method over the Gaussian network models is that the former predicts the real-value of fluctuation. The results help improve our understanding of relationships between protein structure and fluctuation. Furthermore, the developed method provides a convenient practical way to predict fluctuations of proteins using easily computed static structural features of proteins. PMID:22328193

  9. Crawling the Cosmic Web: An Exploration of Filamentary Structure

    NASA Astrophysics Data System (ADS)

    Bond, Nicholas A.; Strauss, M. A.; Cen, R.

    2006-12-01

    By analyzing the smoothed density field and its derivatives on a variety of scales, we can select strands from the cosmic web in a way that is consistent with our common-sense understanding of a "filament". We present results from a two- and three-dimensional filament finder, run on both CDM simulations and a section of the SDSS spectroscopic sample. In both data sets, we will analyze the length and width distribution of filamentary structure and discuss its relation to galaxy clusters. Sources of contamination and error, such as "fingers of god", will also be addressed.

  10. Protein structure estimation from NMR data by matrix completion.

    PubMed

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
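
    As a rough illustration of the second stage, the sketch below completes a synthetic low-rank, distance-like matrix from a subset of its entries using nuclear-norm regularised accelerated proximal gradient (singular-value soft-thresholding with Nesterov momentum). The regularisation weight, step size, and test matrix are arbitrary; the paper's exact algorithm and its first-stage triangle-inequality "guessing" step are not reproduced here.

      import numpy as np

      def complete_distance_matrix(D_obs, mask, tau=1.0, step=1.0, n_iter=500):
          """Nuclear-norm regularised matrix completion via accelerated proximal gradient.
          D_obs : matrix with observed entries (zeros where mask is 0)
          mask  : 1 where an entry was measured (e.g. an NMR-derived distance), else 0"""
          X = np.zeros_like(D_obs)
          Y = X.copy()
          t = 1.0
          for _ in range(n_iter):
              grad = mask * (Y - D_obs)               # gradient of the data-fit term
              U, s, Vt = np.linalg.svd(Y - step * grad, full_matrices=False)
              s = np.maximum(s - tau * step, 0.0)     # soft-threshold the singular values
              X_new = (U * s) @ Vt
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              Y = X_new + ((t - 1.0) / t_new) * (X_new - X)   # Nesterov momentum step
              X, t = X_new, t_new
          return X

      # Tiny demo with a synthetic rank-3 "distance-like" matrix, ~40% of entries observed.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(30, 3))
      D_true = A @ A.T
      mask = (rng.random(D_true.shape) < 0.4).astype(float)
      mask = np.maximum(mask, mask.T)                 # keep the observation mask symmetric
      D_hat = complete_distance_matrix(D_true * mask, mask)
      print("relative error:", np.linalg.norm(D_hat - D_true) / np.linalg.norm(D_true))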

  11. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  12. Medicine and aviation: a review of the comparison.

    PubMed

    Randell, R

    2003-01-01

    This paper aims to understand the nature of medical error in highly technological environments and argues that a comparison with aviation can blur its real understanding. It is a comparative study of the notion of error in health care and in aviation, based on the author's own ethnographic study in intensive care units and on findings from the research literature on errors in aviation. Failures in the use of medical technology are common. In attempts to understand the area of medical error, much attention has focused on how we can learn from aviation. This paper argues that such a comparison is not always useful, on the basis that (i) the type of work and technology is very different in the two domains; (ii) different issues are involved in training and procurement; and (iii) attitudes to error vary between the domains. Therefore, it is necessary to look closely at the subject of medical error and resolve those questions left unanswered by the lessons of aviation.

  13. Structured inspection of medications carried and stored by emergency medical services agencies identifies practices that may lead to medication errors.

    PubMed

    Kupas, Douglas F; Shayhorn, Meghan A; Green, Paul; Payton, Thomas F

    2012-01-01

    Medications are essential to emergency medical services (EMS) agencies when providing lifesaving care, but the EMS environment has challenges related to safe medication storage when compared with a hospital setting. We developed a structured process, based on common pharmacy practices, to review medications carried by EMS agencies to identify situations that may lead to medication error and to determine some best practices that may reduce potential errors and the risk of patient harm. To provide a descriptive account of EMS practices related to carrying and storing medications that have the potential for causing a medication administration error or patient harm. Using a structured process for inspection, an emergency medicine pharmacist and emergency physician(s) reviewed the medication carrying and storage practices of all nine advanced life support ambulance agencies within a five-county EMS region. Each medication carried and stored by the EMS agency was inspected for predetermined and spontaneously observed issues that could lead to medication error. These issues were documented and photographed. Two EMS medical directors reviewed each potential error for the risk of producing patient harm and assigned each to a category of high, moderate, or low risk. Because issues of temperature on EMS medications have been addressed elsewhere, this study concentrated on potential for EMS medication administration errors exclusive of storage temperatures. When reviewing medications carried by the nine EMS agencies, 38 medication safety issues were identified (range 1 to 8 per EMS agency). Of these, 16 were considered to be high risk, 14 moderate risk, and eight low risk for patient harm. Examples of potential issues included carrying expired medications, container-labeling issues, different medications stored in look-alike vials or prefilled syringes in the same compartment, and carrying crystalloid solutions next to solutions premixed with a medication. When reviewing medications stored at the EMS agency stations, eight safety issues were identified (range from 0 to 4 per station), including five moderate-risk and three low-risk issues. No agency had any high-risk medication issues related to storage of medication stock in the station. We observed potential medication safety issues related to how medications are carried and stored at all nine EMS agencies in a five-county region. Understanding these issues may assist EMS agencies in reducing the potential for a medication error and risk of patient harm. More research is needed to determine whether following these suggested best practices for carrying medications on EMS vehicles actually reduces errors in medication administration by EMS providers or decreases patient harm.

  14. Error modeling and sensitivity analysis of a parallel robot with SCARA (selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model embodies the geometric errors of all joints, including the joints of the parallelogram structures, and can therefore capture more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and sensitivity indices defined in a statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the pose errors of the moving platform. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also identified. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  15. Local concurrent error detection and correction in data structures using virtual backpointers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.C.J.; Chen, P.P.; Fuchs, W.K.

    1989-11-01

    A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
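
    To make the flavour of the approach concrete, here is a small Python sketch of local checking in a doubly linked list whose back pointers are virtual, i.e. recomputable from stored fields rather than stored directly. The XOR encoding used below (vb = prev XOR next) is an assumption chosen for illustration and is not necessarily the encoding defined in the paper.

      # Minimal sketch: each node stores its successor and a virtual backpointer
      # vb = prev_id XOR next_id, so the predecessor can be recomputed on demand.
      class Node:
          def __init__(self, ident):
              self.id = ident      # non-zero integer identifier
              self.next = 0        # id of successor (0 = none)
              self.vb = 0          # virtual backpointer: prev_id XOR next_id

      def build(ids):
          nodes = {i: Node(i) for i in ids}
          for prev_id, cur_id, next_id in zip([0] + ids[:-1], ids, ids[1:] + [0]):
              nodes[cur_id].next = next_id
              nodes[cur_id].vb = prev_id ^ next_id
          return nodes

      def check_window(nodes, start_id):
          """During a forward move, check a small window: does the successor's
          virtual backpointer point back to the current node?"""
          cur = nodes[start_id]
          if cur.next == 0:
              return True
          nxt = nodes[cur.next]
          return (nxt.vb ^ nxt.next) == cur.id   # recomputed predecessor must equal cur

      nodes = build([1, 2, 3, 4])
      print(check_window(nodes, 2))      # True on an intact list
      nodes[3].vb ^= 7                   # corrupt one field
      print(check_window(nodes, 2))      # error detected locally, in constant time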

  16. The influence of the structure and culture of medical group practices on prescription drug errors.

    PubMed

    Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer

    2005-08-01

    This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships of the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and how the interaction of practice structural and cultural attributes influence error rates would add important insights into our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.

  17. After the Medication Error: Recent Nursing Graduates' Reflections on Adequacy of Education.

    PubMed

    Treiber, Linda A; Jones, Jackie H

    2018-05-01

    The purpose of this study was to better understand the individual- and system-level factors surrounding the making of a medication error from the perspective of recent Bachelor of Science in Nursing graduates. The online mixed-methods survey included items on perceptions of the adequacy of preparatory nursing education, contributory variables, emotional responses, and treatment by the employer following the error. Of the 168 respondents, 55% had made a medication error. Errors resulted from inexperience, rushing, technology, staffing, and patient acuity. Twenty-four percent did not report their errors. Key themes for improving education included more practice in varied clinical areas, intensive pharmacological preparation, practical instruction in functioning within the health care environment, and coping after making medication errors. Errors generally caused emotional distress in the error maker. Overall, perceived treatment after the error reflected supportive environments, where nurses were generally treated with respect, fairness, and understanding. Opportunities for nursing education include second victim awareness and reinforcing professional practice standards. [J Nurs Educ. 2018;57(5):275-280.]. Copyright 2018, SLACK Incorporated.

  18. Effects of skilled nursing facility structure and process factors on medication errors during nursing home admission.

    PubMed

    Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M

    2014-01-01

    Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at a SNF. Data on medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were drawn from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and on harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition-period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and the transcription and documentation of orders.

  19. Use of Single-Cysteine Variants for Trapping Transient States in DNA Mismatch Repair.

    PubMed

    Friedhoff, Peter; Manelyte, Laura; Giron-Monzon, Luis; Winkler, Ines; Groothuizen, Flora S; Sixma, Titia K

    2017-01-01

    DNA mismatch repair (MMR) is necessary to prevent incorporation of polymerase errors into the newly synthesized DNA strand, as they would be mutagenic. In humans, errors in MMR cause a predisposition to cancer, called Lynch syndrome. The MMR process is performed by a set of ATPases that transmit, validate, and couple information to identify which DNA strand requires repair. To understand the individual steps in the repair process, it is useful to be able to study these large molecular machines structurally and functionally. However, the steps and states are highly transient; therefore, the methods to capture and enrich them are essential. Here, we describe how single-cysteine variants can be used for specific cross-linking and labeling approaches that allow trapping of relevant transient states. Analysis of these defined states in functional and structural studies is instrumental to elucidate the molecular mechanism of this important DNA MMR process. © 2017 Elsevier Inc. All rights reserved.

  20. Dealing with Beam Structure in PIXIE

    NASA Technical Reports Server (NTRS)

    Fixsen, D. J.; Kogut, Alan; Hill, Robert S.; Nagler, Peter C.; Seals, Lenward T., III; Howard, Joseph M.

    2016-01-01

    Measuring the B-mode polarization of the CMB radiation requires a detailed understanding of the projection of the detector onto the sky. We show how the combination of scan strategy and processing generates a cylindrical beam for the spectrum measurement. Both the instrumental design and the scan strategy reduce the cross coupling between the temperature variations and the B-modes. As with other polarization measurements some post processing may be required to eliminate residual errors.

  1. Being an honest broker of hydrology: Uncovering, communicating and addressing model error in a climate change streamflow dataset

    NASA Astrophysics Data System (ADS)

    Chegwidden, O.; Nijssen, B.; Pytlak, E.

    2017-12-01

    Any model simulation has errors, including errors in meteorological data, process understanding, model structure, and model parameters. These errors may express themselves as bias, timing lags, and differences in sensitivity between the model and the physical world. The evaluation and handling of these errors can greatly affect the legitimacy, validity and usefulness of the resulting scientific product. In this presentation we will discuss a case study of handling and communicating model errors during the development of a hydrologic climate change dataset for the Pacific Northwestern United States. The dataset was the result of a four-year collaboration between the University of Washington, Oregon State University, the Bonneville Power Administration, the United States Army Corps of Engineers and the Bureau of Reclamation. Along the way, the partnership facilitated the discovery of multiple systematic errors in the streamflow dataset. Through an iterative review process, some of those errors could be resolved. For the errors that remained, honest communication of the shortcomings promoted the dataset's legitimacy. Thoroughly explaining errors also improved ways in which the dataset would be used in follow-on impact studies. Finally, we will discuss the development of the "streamflow bias-correction" step often applied to climate change datasets that will be used in impact modeling contexts. We will describe the development of a series of bias-correction techniques through close collaboration among universities and stakeholders. Through that process, both universities and stakeholders learned about the others' expectations and workflows. This mutual learning process allowed for the development of methods that accommodated the stakeholders' specific engineering requirements. The iterative revision process also produced a functional and actionable dataset while preserving its scientific merit. We will describe how encountering earlier techniques' pitfalls allowed us to develop improved methods for scientists and practitioners alike.
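
    The abstract does not specify which bias-correction technique was developed; empirical quantile mapping is one widely used form of streamflow bias correction, sketched below with entirely synthetic flow series. The function name and parameter choices are illustrative assumptions, not the collaboration's actual method.

      import numpy as np

      def quantile_map(simulated_hist, observed_hist, simulated_future, n_quantiles=100):
          """Empirical quantile mapping: adjust simulated flows so that their historical
          distribution matches the observed one, then apply the same correction to the
          future series. A common approach, not necessarily the one developed here."""
          q = np.linspace(0.0, 1.0, n_quantiles)
          sim_q = np.quantile(simulated_hist, q)
          obs_q = np.quantile(observed_hist, q)
          # For each future value, find its quantile in the historical simulation
          # and replace it with the observed flow at that same quantile.
          ranks = np.interp(simulated_future, sim_q, q)
          return np.interp(ranks, q, obs_q)

      rng = np.random.default_rng(0)
      obs = rng.gamma(2.0, 500.0, size=3650)          # synthetic daily observed flows
      sim = 0.8 * rng.gamma(2.0, 500.0, size=3650)    # biased-low historical simulation
      fut = 0.8 * rng.gamma(2.2, 500.0, size=3650)    # biased future projection
      corrected = quantile_map(sim, obs, fut)
      print(np.mean(obs), np.mean(fut), np.mean(corrected))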

  2. The effect of directive tutor guidance on students' conceptual understanding of statistics in problem-based learning.

    PubMed

    Budé, Luc; van de Wiel, Margaretha W J; Imbos, Tjaart; Berger, Martijn P F

    2011-06-01

    Education is aimed at students reaching conceptual understanding of the subject matter, because this leads to better performance and application of knowledge. Conceptual understanding depends on coherent and error-free knowledge structures. The construction of such knowledge structures can only be accomplished through active learning and when new knowledge can be integrated into prior knowledge. The intervention in this study was directed at both the activation of students as well as the integration of knowledge. Undergraduate university students from an introductory statistics course, in an authentic problem-based learning (PBL) environment, were randomly assigned to conditions and measurement time points. In the PBL tutorial meetings, half of the tutors guided the discussions of the students in a traditional way. The other half guided the discussions more actively by asking directive and activating questions. To gauge conceptual understanding, the students answered open-ended questions asking them to explain and relate important statistical concepts. Results of the quantitative analysis show that providing directive tutor guidance improved understanding. Qualitative data of students' misconceptions seem to support this finding. Long-term retention of the subject matter seemed to be inadequate. ©2010 The British Psychological Society.

  3. First order error corrections in common introductory physics experiments

    NASA Astrophysics Data System (ADS)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is given to their sources. However, paying attention to the factors that give rise to errors helps students build better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank a Clarion University undergraduate student grant for financial support of this project.
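
    As one concrete example of the kind of first-order correction meant here, the short sketch below extracts g from a cart-on-incline experiment, first ignoring friction and then including a kinetic-friction term. The measured acceleration, incline angle, and friction coefficient are invented numbers for illustration only.

      import numpy as np

      # Ignoring friction: a = g*sin(theta), so g0 = a / sin(theta).
      # Including kinetic friction: a = g*(sin(theta) - mu*cos(theta)),
      # so g = a / (sin(theta) - mu*cos(theta)).
      a_measured = 1.51          # measured acceleration (m/s^2), illustrative
      theta = np.radians(10.0)   # incline angle
      mu = 0.02                  # assumed kinetic friction coefficient

      g_naive = a_measured / np.sin(theta)
      g_corrected = a_measured / (np.sin(theta) - mu * np.cos(theta))
      print(f"without friction: g = {g_naive:.2f} m/s^2, with friction: g = {g_corrected:.2f} m/s^2")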

  4. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double-Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double-Linked Lists.

  5. Whose statistical reasoning is facilitated by a causal structure intervention?

    PubMed

    McNair, Simon; Feeney, Aidan

    2015-02-01

    People often struggle when making Bayesian probabilistic estimates on the basis of competing sources of statistical evidence. Recently, Krynski and Tenenbaum (Journal of Experimental Psychology: General, 136, 430-450, 2007) proposed that a causal Bayesian framework accounts for people's errors in Bayesian reasoning and showed that, by clarifying the causal relations among the pieces of evidence, judgments on a classic statistical reasoning problem could be significantly improved. We aimed to understand whose statistical reasoning is facilitated by the causal structure intervention. In Experiment 1, although we observed causal facilitation effects overall, the effect was confined to participants high in numeracy. We did not find an overall facilitation effect in Experiment 2 but did replicate the earlier interaction between numerical ability and the presence or absence of causal content. This effect held when we controlled for general cognitive ability and thinking disposition. Our results suggest that clarifying causal structure facilitates Bayesian judgments, but only for participants with sufficient understanding of basic concepts in probability and statistics.
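
    For readers unfamiliar with the task, the snippet below works through a problem of the kind used in this literature (a low base rate combined with an imperfect test). The numbers are illustrative rather than the exact values from the published materials.

      # Worked Bayes computation of the kind participants are asked to do intuitively.
      p_disease = 0.01              # base rate in the population
      p_pos_given_disease = 0.80    # test sensitivity
      p_pos_given_healthy = 0.096   # false-positive rate

      p_pos = (p_pos_given_disease * p_disease
               + p_pos_given_healthy * (1 - p_disease))
      p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
      print(round(p_disease_given_pos, 3))   # ~0.078: far lower than most intuitive estimates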

  6. Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.

    2000-01-01

    To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.
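
    A toy sketch of the estimation step: fitting in situ hydraulic conductivities by nonlinear regression of simulated heads against observations. The one-dimensional two-layer column, Darcy-law forward model, and all values below are invented for illustration; the actual experiments used a two-dimensional heterogeneous tank and a full flow model.

      import numpy as np
      from scipy.optimize import least_squares

      # Toy 1-D, two-layer column with a fixed specific discharge q and heads
      # "measured" at the base of each layer.
      q = 2.0e-4                     # specific discharge (m/s), illustrative
      L = np.array([0.5, 0.5])       # layer thicknesses (m)
      K_true = np.array([1.0e-3, 2.5e-4])   # stand-in for lab-measured conductivities

      def heads(K):
          # Head at the top is 1.0 m; the head drop across each layer is q*L/K (Darcy's law).
          drops = q * L / K
          return 1.0 - np.cumsum(drops)

      h_obs = heads(K_true) + np.random.default_rng(0).normal(0, 1e-3, size=2)

      def residuals(logK):
          # Fit log-conductivities so the optimisation stays positive and well scaled.
          return heads(np.exp(logK)) - h_obs

      fit = least_squares(residuals, x0=np.log([5e-4, 5e-4]))
      print("optimised in situ K:", np.exp(fit.x), " reference K:", K_true)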

  7. The investigation on mirrors maladjustment for RLG

    NASA Astrophysics Data System (ADS)

    He, Xiao-qing; Gao, Ai-hua; Hu, Shang-bin; Lu, Zhi-guo

    2011-06-01

    In order to meet the high demands of the overall fabrication process, an error compensation method is usually used to correct the errors; such compensation is premised on a good understanding of the error sources and of how the errors behave. In this paper, based on the Collins integral and the Collins eikonal function and using MATLAB, we simulated and calculated the spatial distribution of the optical beam in the cavity of a ring laser gyro when the resonator is misaligned by fabrication errors. The simulation results show that, for small-gain lasers, the same amount of misalignment in different cavity structures has different effects on the spatial distribution of the beam, and structures using spherical mirrors are affected comparatively little; for the same misalignment in the same cavity shape, the signal light and the calibration light detected from mirrors M1 and M4, respectively, differ; for the same structure, the same amount of misalignment applied to different mirrors causes different beat-frequency differences; and, because of misalignment, the spot centers of the clockwise and counterclockwise waves shift, which will seriously affect the normal operation of the laser gyro if the imbalance reaches a certain degree. This work provides guidance for mirror adjustment during laser-gyro fabrication and is of reference value for improving the yield (survival rate) of laser gyros and their measurement accuracy.

  8. Use of a low-literacy written action plan to improve parent understanding of pediatric asthma management: A randomized controlled study.

    PubMed

    Yin, Hsiang Shonna; Gupta, Ruchi S; Mendelsohn, Alan L; Dreyer, Benard; van Schaick, Linda; Brown, Christina R; Encalada, Karen; Sanchez, Dayana C; Warren, Christopher M; Tomopoulos, Suzy

    2017-11-01

    The objective of the study was to determine whether parents who use a low-literacy, pictogram- and photograph-based written asthma action plan (WAAP) have a better understanding of child asthma management compared to parents using a standard plan. A randomized controlled study was carried out in 2 urban pediatric outpatient clinics. Inclusion criteria were English- and Spanish-speaking parents of 2- to 12-year-old asthmatic children. Parents were randomized to receive a low-literacy or standard asthma action plan (American Academy of Allergy, Asthma and Immunology) for a hypothetical patient on controller and rescue medications. A structured questionnaire was used to assess whether there was an error in knowledge of (1) medications to give every day and when sick, (2) need for spacer use, and (3) appropriate emergency response to give albuterol and seek medical help. Multiple logistic regression analyses were performed, adjusting for parent age, health literacy (Newest Vital Sign); child asthma severity, medications; and site. 217 parents were randomized (109 intervention and 108 control). Parents who received the low-literacy plan were (1) less likely to make an error in knowledge of medications to take every day and when sick compared to parents who received the standard plan (63.0 vs. 77.3%, p = 0.03; adjusted odds ratio [AOR] = 0.5 [95% confidence interval: 0.2-0.9]) and (2) less likely to make an error regarding spacer use (14.0 vs. 51.1%, p < 0.001; AOR = 0.1 [0.06-0.3]). No difference in error in appropriate emergency response was seen (43.1 vs. 48.1%, p = 0.5). Use of a low-literacy WAAP was associated with better parent understanding of asthma management. Further study is needed to assess whether the use of this action plan improves child asthma outcomes.

  9. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. A statistical significance of the modeling error signal to provide an error significance is calculated, and a persistence of the error significance is determined. A structural anomaly is indicated, if the persistence exceeds a persistence threshold value.
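
    A schematic sketch of the detection logic as described: form the modeling error signal, convert it to a significance measure, and require the significance to persist before declaring an anomaly. The threshold values, noise model, and function names are illustrative assumptions, not the patented implementation.

      import numpy as np

      def detect_anomaly(measured, expected, noise_std, z_crit=3.0, persistence_threshold=5):
          """Flag a structural anomaly only if the modeling error stays statistically
          significant for several consecutive samples."""
          error = measured - expected                  # modeling error signal
          z = np.abs(error) / noise_std                # simple significance measure
          persistence = 0
          for zi in z:
              persistence = persistence + 1 if zi > z_crit else 0
              if persistence >= persistence_threshold:
                  return True
          return False

      rng = np.random.default_rng(0)
      expected = np.zeros(200)                         # expected operation data for one location
      nominal = rng.normal(0.0, 1.0, 200)              # healthy measurements (noise only)
      faulty = nominal.copy()
      faulty[120:] += 5.0                              # sustained structural shift
      print(detect_anomaly(nominal, expected, 1.0))    # False: no persistent significance
      print(detect_anomaly(faulty, expected, 1.0))     # True: anomaly indicated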

  10. Diagnostic Errors in Ambulatory Care: Dimensions and Preventive Strategies

    ERIC Educational Resources Information Center

    Singh, Hardeep; Weingart, Saul N.

    2009-01-01

    Despite an increasing focus on patient safety in ambulatory care, progress in understanding and reducing diagnostic errors in this setting lag behind many other safety concerns such as medication errors. To explore the extent and nature of diagnostic errors in ambulatory care, we identified five dimensions of ambulatory care from which errors may…

  11. On the consistency of QCBED structure factor measurements for TiO 2 (Rutile)

    DOE PAGES

    Jiang, Bin; Zuo, Jian -Min; Friis, Jesper; ...

    2003-09-16

    The same Bragg reflection in TiO2 from twelve different CBED patterns (from different crystals, orientations and thicknesses) is analysed quantitatively in order to evaluate the consistency of the QCBED method for bond-charge mapping. The standard deviation in the resulting distribution of derived X-ray structure factors is found to be an order of magnitude smaller than that in conventional X-ray work, and the standard error (0.026% for F_X(110)) is slightly better than that obtained by the X-ray Pendellösung method applied to silicon. This is sufficient accuracy to distinguish between atomic, covalent and ionic models of bonding. We describe the importance of extracting experimental parameters from CCD camera characterization, and of surface oxidation and crystal shape. Thus, the current experiments show that the QCBED method is now a robust and powerful tool for low-order structure factor measurement, which does not suffer from the large extinction (multiple scattering) errors that occur in inorganic X-ray crystallography, and may be applied to nanocrystals. Our results will be used to understand the role of d electrons in the chemical bonding of TiO2.

  12. Mechanism of error-free DNA synthesis across N1-methyl-deoxyadenosine by human DNA polymerase-ι

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Rinku; Choudhury, Jayati Roy; Buku, Angeliki

    N1-methyl-deoxyadenosine (1-MeA) is formed by methylation of deoxyadenosine at the N1 atom. 1-MeA presents a block to replicative DNA polymerases due to its inability to participate in Watson-Crick (W-C) base pairing. Here we determine how human DNA polymerase-ι (Polι) promotes error-free replication across 1-MeA. Steady-state kinetic analyses indicate that Polι is ~100-fold more efficient in incorporating the correct nucleotide T versus the incorrect nucleotide C opposite 1-MeA. To understand the basis of this selectivity, we determined ternary structures of Polι bound to template 1-MeA and incoming dTTP or dCTP. In both structures, template 1-MeA rotates to the syn conformation but pairs differently with dTTP versus dCTP. Thus, whereas dTTP partakes in stable Hoogsteen base pairing with 1-MeA, dCTP fails to gain a "foothold" and is largely disordered. Together, our kinetic and structural studies show how Polι maintains discrimination between correct and incorrect incoming nucleotides opposite 1-MeA, preserving genome integrity.

  13. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    PubMed

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  14. Three-dimensional tertiary structure of yeast phenylalanine transfer RNA

    NASA Technical Reports Server (NTRS)

    Kim, S. H.; Sussman, J. L.; Suddath, F. L.; Quigley, G. J.; Mcpherson, A.; Wang, A. H. J.; Seeman, N. C.; Rich, A.

    1974-01-01

    Results of an analysis and interpretation of a 3-Å electron density map of yeast phenylalanine transfer RNA. Some earlier detailed assignments of nucleotide residues to electron density peaks are found to be in error, even though the overall tracing of the backbone conformation of yeast phenylalanine transfer RNA was generally correct. A new, more comprehensive interpretation is made which makes it possible to define the tertiary interactions in the molecule. The new interpretation makes it possible to visualize a number of tertiary interactions which not only explain the structural role of most of the bases which are constant in transfer RNAs, but also makes it possible to understand in a direct and simple fashion the chemical modification data on transfer RNA. In addition, this pattern of tertiary interactions provides a basis for understanding the general three-dimensional folding of all transfer RNA molecules.

  15. Towards a realistic simulation of boreal summer tropical rainfall climatology in state-of-the-art coupled models: role of the background snow-free land albedo

    NASA Astrophysics Data System (ADS)

    Terray, P.; Sooraj, K. P.; Masson, S.; Krishna, R. P. M.; Samson, G.; Prajeesh, A. G.

    2017-07-01

    State-of-the-art global coupled models used in seasonal prediction systems and climate projections still have important deficiencies in representing the boreal summer tropical rainfall climatology. These errors include prominently a severe dry bias over all the Northern Hemisphere monsoon regions, excessive rainfall over the ocean and an unrealistic double inter-tropical convergence zone (ITCZ) structure in the tropical Pacific. While these systematic errors can be partly reduced by increasing the horizontal atmospheric resolution of the models, they also illustrate our incomplete understanding of the key mechanisms controlling the position of the ITCZ during boreal summer. Using a large collection of coupled models and dedicated coupled experiments, we show that these tropical rainfall errors are partly associated with insufficient surface thermal forcing and incorrect representation of the surface albedo over the Northern Hemisphere continents. Improving the parameterization of the land albedo in two global coupled models leads to a large reduction of these systematic errors and further demonstrates that the Northern Hemisphere subtropical deserts play a seminal role in these improvements through a heat low mechanism.

  16. Disclosure of adverse events and errors in surgical care: challenges and strategies for improvement.

    PubMed

    Lipira, Lauren E; Gallagher, Thomas H

    2014-07-01

    The disclosure of adverse events to patients, including those caused by medical errors, is a critical part of patient-centered healthcare and a fundamental component of patient safety and quality improvement. Disclosure benefits patients, providers, and healthcare institutions. However, the act of disclosure can be difficult for physicians. Surgeons struggle with disclosure in unique ways compared with other specialties, and disclosure in the surgical setting has specific challenges. The frequency of surgical adverse events along with a dysfunctional tort system, the team structure of surgical staff, and obstacles created inadvertently by existing surgical patient safety initiatives may contribute to an environment not conducive to disclosure. Fortunately, there are multiple strategies to address these barriers. Participation in communication and resolution programs, integration of Just Culture principles, surgical team disclosure planning, refinement of informed consent and morbidity and mortality processes, surgery-specific professional standards, and understanding the complexities of disclosing other clinicians' errors all have the potential to help surgeons provide patients with complete, satisfactory disclosures. Improvement in the regularity and quality of disclosures after surgical adverse events and errors will be key as the field of patient safety continues to advance.

  17. A statistical study of radio-source structure effects on astrometric very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1989-01-01

    Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 × 10⁻¹⁵ s s⁻¹ (or approx. 0.002 mm s⁻¹) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
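
    The position uncertainties quoted above follow from the basic interferometric relation that a delay error Δτ maps into an angular error of roughly c·Δτ/B for a baseline of length B. A quick check of the numbers, assuming a representative intercontinental DSN baseline of about 8,000 km (an illustrative value, not stated in the record):

```python
# Rough check of the delay-to-angle mapping: angular error ~ c * delay_error / baseline.
c = 2.998e8                      # speed of light, m/s
baseline = 8.0e6                 # assumed intercontinental DSN baseline, m
for delay_error in (50e-12, 100e-12):      # 50-100 ps structure-induced scatter
    angle_rad = c * delay_error / baseline
    print(f"{delay_error * 1e12:.0f} ps -> {angle_rad * 1e9:.1f} nrad")
# Prints roughly 1.9 and 3.7 nrad, consistent with the 2 to 5 nrad quoted above.
```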

  18. Fundamental Studies of Crystal Growth of Microporous Materials

    NASA Technical Reports Server (NTRS)

    Singh, Ramsharan; Doolittle, John, Jr.; Payra, Pramatha; Dutta, Prabir K.; George, Michael A.; Ramachandran, Narayanan; Schoeman, Brian J.

    2003-01-01

    Microporous materials are framework structures with well-defined porosity, often of molecular dimensions. Zeolites contain aluminum and silicon atoms in their framework and are the most extensively studied amongst all microporous materials. Framework structures with P, Ga, Fe, Co, Zn, B, Ti and a host of other elements have also been made. Typical synthesis of microporous materials involves mixing the framework elements (or compounds thereof) in a basic solution, followed by aging in some cases and then heating at elevated temperatures. This process is termed hydrothermal synthesis, and involves complex chemical and physical changes. Because of a limited understanding of this process, most synthesis advancements happen by a trial-and-error approach. There is considerable interest in understanding the synthesis process at a molecular level with the expectation that eventually new framework structures will be built by design. The basic issues in the microporous materials crystallization process include: (a) Nature of the molecular units responsible for the crystal nuclei formation; (b) Nature of the nuclei and nucleation process; (c) Growth process of the nuclei into crystal; (d) Morphological control and size of the resulting crystal; (e) Surface structure of the resulting crystals; and (f) Transformation of frameworks into other frameworks or condensed structures.

  19. Cryo-EM structure of a late pre-40S ribosomal subunit from Saccharomyces cerevisiae

    PubMed Central

    Schmidt, Christian; Berninghausen, Otto; Becker, Thomas

    2017-01-01

    Mechanistic understanding of eukaryotic ribosome formation requires a detailed structural knowledge of the numerous assembly intermediates, generated along a complex pathway. Here, we present the structure of a late pre-40S particle at 3.6 Å resolution, revealing in molecular detail how assembly factors regulate the timely folding of pre-18S rRNA. The structure shows that, rather than sterically blocking 40S translational active sites, the associated assembly factors Tsr1, Enp1, Rio2 and Pno1 collectively preclude their final maturation, thereby preventing untimely tRNA and mRNA binding and error-prone translation. Moreover, the structure explains how Pno1 coordinates the 3′ end cleavage of the 18S rRNA by Nob1 and how the late factor's removal in the cytoplasm ensures the structural integrity of the maturing 40S subunit. PMID:29155690

  20. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-09

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, inpatient errors are more severe than outpatient errors.

  1. At the cross-roads: an on-road examination of driving errors at intersections.

    PubMed

    Young, Kristie L; Salmon, Paul M; Lenné, Michael G

    2013-09-01

    A significant proportion of road trauma occurs at intersections. Understanding the nature of driving errors at intersections therefore has the potential to lead to significant injury reductions. To further understand how the complexity of modern intersections shapes driver behaviour, errors made at intersections are compared to errors made mid-block, and the role of wider systems failures in intersection error causation is investigated in an on-road study. Twenty-five participants drove a pre-determined urban route incorporating 25 intersections. Two in-vehicle observers recorded the errors made while a range of other data was collected, including driver verbal protocols, video, driver eye glance behaviour and vehicle data (e.g., speed, braking and lane position). Participants also completed a post-trial cognitive task analysis interview. Participants were found to make 39 specific error types, with speeding violations the most common. Participants made significantly more errors at intersections compared to mid-block, with misjudgement, action and perceptual/observation errors more commonly observed at intersections. Traffic signal configuration was found to play a key role in intersection error causation, with drivers making more errors at partially signalised compared to fully signalised intersections. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
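
    As an illustration of the idea of local, O(1) consistency checking in a doubly linked structure, the following minimal Python sketch stores a check word in each node computed from the identities of its neighbours. This is a simplified stand-in for the paper's virtual backpointer encoding; the exact encoding, the B-tree variant and the auditor process are not reproduced here.

```python
class Node:
    """Doubly linked list node with a 'virtual backpointer' style check word.

    Simplified illustration (not the paper's exact encoding): the check word
    is the XOR of the identities of the neighbouring nodes, so a single
    corrupted pointer can be detected locally in O(1) time.
    """
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None
        self.check = 0                      # XOR of id(prev) and id(next)

    def update_check(self):
        self.check = id(self.prev) ^ id(self.next)

    def is_consistent(self):
        return self.check == (id(self.prev) ^ id(self.next))


def link(a, b):
    """Link node a -> b and refresh both check words."""
    a.next, b.prev = b, a
    a.update_check()
    b.update_check()


# Build a tiny list and verify local consistency.
head, mid, tail = Node(1), Node(2), Node(3)
link(head, mid)
link(mid, tail)
assert all(n.is_consistent() for n in (head, mid, tail))

# Simulate a corrupted forward pointer; the error is caught locally.
mid.next = head
assert not mid.is_consistent()
```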

  3. Sequence-structure mapping errors in the PDB: OB-fold domains

    PubMed Central

    Venclovas, Česlovas; Ginalski, Krzysztof; Kang, Chulhee

    2004-01-01

    The Protein Data Bank (PDB) is the single most important repository of structural data for proteins and other biologically relevant molecules. Therefore, it is critically important to keep the PDB data, as much as possible, error-free. In this study, we have analyzed PDB crystal structures possessing oligonucleotide/oligosaccharide binding (OB)-fold, one of the highly populated folds, for the presence of sequence-structure mapping errors. Using energy-based structure quality assessment coupled with sequence analyses, we have found that there are at least five OB-structures in the PDB that have regions where sequences have been incorrectly mapped onto the structure. We have demonstrated that the combination of these computation techniques is effective not only in detecting sequence-structure mapping errors, but also in providing guidance to correct them. Namely, we have used results of computational analysis to direct a revision of X-ray data for one of the PDB entries containing a fairly inconspicuous sequence-structure mapping error. The revised structure has been deposited with the PDB. We suggest use of computational energy assessment and sequence analysis techniques to facilitate structure determination when homologs having known structure are available to use as a reference. Such computational analysis may be useful in either guiding the sequence-structure assignment process or verifying the sequence mapping within poorly defined regions. PMID:15133161

  4. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.
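
    A minimal sketch of the central idea, that the spread of an advected ensemble provides a flow-dependent, per-pixel error estimate, is given below. The toy reflectivity field, the perturbed motion vectors and the whole-pixel shifts are illustrative assumptions, not the operational nowcasting or LETKF assimilation system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reflectivity field with a single "rain cell".
ny, nx = 100, 100
yy, xx = np.mgrid[0:ny, 0:nx]
refl = 40.0 * np.exp(-(((xx - 30) ** 2 + (yy - 50) ** 2) / 150.0))

# Ensemble of advection forecasts: each member shifts the field by a slightly
# perturbed motion vector (whole-pixel shifts for simplicity).
n_members = 50
mean_motion = np.array([0.0, 10.0])            # (dy, dx) pixels per step
members = []
for _ in range(n_members):
    dy, dx = np.rint(mean_motion + rng.normal(0.0, 2.0, size=2)).astype(int)
    members.append(np.roll(np.roll(refl, dy, axis=0), dx, axis=1))
ensemble = np.stack(members)

# Per-pixel spread: large at the cell edges (position uncertainty), small in
# rain-free and uniform regions.
spread = ensemble.std(axis=0)
print("max per-pixel spread (dBZ):", spread.max().round(1))
```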

  5. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    This report presents results of research aimed at understanding the causes of human error in such complex systems as aircraft, nuclear power plants, and chemical processing plants. The research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  6. Executive function and functional and structural brain differences in middle-age adults with autism spectrum disorder.

    PubMed

    Braden, B Blair; Smith, Christopher J; Thompson, Amiee; Glaspy, Tyler K; Wood, Emily; Vatsa, Divya; Abbott, Angela E; McGee, Samuel C; Baxter, Leslie C

    2017-12-01

    There is a rapidly growing group of aging adults with autism spectrum disorder (ASD) who may have unique needs, yet cognitive and brain function in older adults with ASD is understudied. We combined functional and structural neuroimaging and neuropsychological tests to examine differences between middle-aged men with ASD and matched neurotypical (NT) men. Participants (ASD, n = 16; NT, n = 17) aged 40-64 years were well-matched according to age, IQ (range: 83-131), and education (range: 9-20 years). Middle-age adults with ASD made more errors on an executive function task (Wisconsin Card Sorting Test) but performed similarly to NT adults on tests of delayed verbal memory (Rey Auditory Verbal Learning Test) and local visual search (Embedded Figures Task). Independent component analysis of a functional MRI working memory task (n-back) completed by most participants (ASD = 14, NT = 17) showed decreased engagement of a cortico-striatal-thalamic-cortical neural network in older adults with ASD. Structurally, older adults with ASD had reduced bilateral hippocampal volumes, as measured by FreeSurfer. Findings expand our understanding of ASD as a lifelong condition with persistent cognitive and functional and structural brain differences evident at middle-age. Autism Res 2017, 10: 1945-1959. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. We compared cognitive abilities and brain measures between 16 middle-age men with high-functioning autism spectrum disorder (ASD) and 17 typical middle-age men to better understand how aging affects an older group of adults with ASD. Men with ASD made more errors on a test involving flexible thinking, had less activity in a flexible thinking brain network, and had smaller volume of a brain structure related to memory than typical men. We will follow these older adults over time to determine if aging changes are greater for individuals with ASD. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  7. Genes and inheritance.

    PubMed

    Middelton, L A; Peters, K F

    2001-10-01

    The information gained from the Human Genome Project and related genetic research will undoubtedly create significant changes in healthcare practice. It is becoming increasingly clear that nurses in all areas of clinical practice will require a fundamental understanding of basic genetics. This article provides the oncology nurse with an overview of basic genetic concepts, including inheritance patterns of single gene conditions, pedigree construction, chromosome aberrations, and the multifactorial basis underlying the common diseases of adulthood. Normal gene structure and function are introduced and the biochemistry of genetic errors is described.

  8. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation

    PubMed Central

    Balachandran, Ramya; Labadie, Robert F.

    2015-01-01

    Purpose A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. Methods An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. Results The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. Conclusion The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure. PMID:26183149

  9. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation.

    PubMed

    Dillon, Neal P; Balachandran, Ramya; Labadie, Robert F

    2016-03-01

    A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of 45° and higher as well as longer cantilevered drill lengths. The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure.

  10. Error-Eliciting Problems: Fostering Understanding and Thinking

    ERIC Educational Resources Information Center

    Lim, Kien H.

    2014-01-01

    Student errors are springboards for analyzing, reasoning, and justifying. The mathematics education community recognizes the value of student errors, noting that "mistakes are seen not as dead ends but rather as potential avenues for learning." To induce specific errors and help students learn, choose tasks that might produce mistakes.…

  11. Scaffolding--How Can Contingency Lead to Successful Learning When Dealing with Errors?

    ERIC Educational Resources Information Center

    Wischgoll, Anke; Pauli, Christine; Reusser, Kurt

    2015-01-01

    Errors indicate learners' misunderstanding and can provide learning opportunities. Providing learning support which is contingent on learners' needs when errors occur is considered effective for developing learners' understanding. The current investigation examines how tutors and tutees interact productively with errors when working on a…

  12. Origins of coevolution between residues distant in protein 3D structures

    PubMed Central

    Ovchinnikov, Sergey; Kamisetty, Hetunandan; Baker, David

    2017-01-01

    Residue pairs that directly coevolve in protein families are generally close in protein 3D structures. Here we study the exceptions to this general trend—directly coevolving residue pairs that are distant in protein structures—to determine the origins of evolutionary pressure on spatially distant residues and to understand the sources of error in contact-based structure prediction. Over a set of 4,000 protein families, we find that 25% of directly coevolving residue pairs are separated by more than 5 Å in protein structures and 3% by more than 15 Å. The majority (91%) of directly coevolving residue pairs in the 5–15 Å range are found to be in contact in at least one homologous structure—these exceptions arise from structural variation in the family in the region containing the residues. Thirty-five percent of the exceptions greater than 15 Å are at homo-oligomeric interfaces, 19% arise from family structural variation, and 27% are in repeat proteins likely reflecting alignment errors. Of the remaining long-range exceptions (<1% of the total number of coupled pairs), many can be attributed to close interactions in an oligomeric state. Overall, the results suggest that directly coevolving residue pairs not in repeat proteins are spatially proximal in at least one biologically relevant protein conformation within the family; we find little evidence for direct coupling between residues at spatially separated allosteric and functional sites or for increased direct coupling between residue pairs on putative allosteric pathways connecting them. PMID:28784799

  13. Error reduction by combining strapdown inertial measurement units in a baseball stitch

    NASA Astrophysics Data System (ADS)

    Tracy, Leah

    A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.

  14. Clinical implementation and error sensitivity of a 3D quality assurance protocol for prostate and thoracic IMRT

    PubMed Central

    Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed

    2015-01-01

    This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; secondly, to test if the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with single ion chamber and 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaws errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: maximum allowed AADD is 6%; maximum 4% of any structure volume can be with ADD6 greater than 6%, and maximum 4% of any structure volume may fail 3D gamma test with test parameters 3%/3 mm DTA. Out of the three QA methods tested the single ion chamber performed the worst by detecting 4 out of 18 introduced errors, 2D QA detected 11 out of 18 errors, and 3D QA detected 14 out of 18 errors. PACS number: 87.56.Fc PMID:26699299
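
    The two simplest of the statistical parameters named above can be written down directly; the sketch below computes a per-structure average absolute dose difference and the fraction of the structure volume exceeding a 6% difference. It is a simplified illustration (normalisation relative to the planned dose is an assumption here), not the COMPASS implementation, and the 3D gamma test is omitted.

```python
import numpy as np

def structure_qa_metrics(measured, planned, mask, threshold=0.06):
    """Per-structure QA statistics in the spirit of the protocol above.

    measured, planned : 3D dose arrays (Gy)
    mask              : boolean array selecting the structure's voxels
    threshold         : absolute dose-difference level (6% here)

    Returns the average absolute dose difference (AADD, %) and the percent of
    the structure volume whose absolute dose difference exceeds the threshold.
    """
    meas = measured[mask]
    plan = planned[mask]
    valid = plan > 1e-6                    # avoid dividing by near-zero voxels
    rel_diff = np.abs(meas[valid] - plan[valid]) / plan[valid]
    aadd = 100.0 * rel_diff.mean()
    add6 = 100.0 * np.mean(rel_diff > threshold)
    return aadd, add6

# Toy example: a 2% systematic difference on a synthetic structure.
planned = np.full((20, 20, 20), 2.0)
measured = planned * 1.02
mask = np.zeros_like(planned, dtype=bool)
mask[5:15, 5:15, 5:15] = True
print(structure_qa_metrics(measured, planned, mask))   # ~ (2.0, 0.0)
```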

  15. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
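
    A minimal sketch of the point being made: under a multiplicative (heteroscedastic) error structure, ordinary least squares remains unbiased but is no longer minimum variance, and a weighted fit that uses the assumed variance model is preferred. The simulated stump-diameter relationship and the variance-proportional-to-mean-squared weighting below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

stump = np.linspace(10, 60, 200)                 # stump diameter (cm)
true_dbh = 0.9 * stump + 1.5                     # "true" tree diameter
# Multiplicative error: the error standard deviation grows with the mean.
dbh = true_dbh * (1.0 + 0.08 * rng.standard_normal(stump.size))

X = np.column_stack([np.ones_like(stump), stump])

# Ordinary least squares (ignores the heteroscedasticity).
beta_ols, *_ = np.linalg.lstsq(X, dbh, rcond=None)

# Weighted least squares with weights 1/mean^2, appropriate when the error
# standard deviation is proportional to the mean response.
w = 1.0 / (X @ beta_ols) ** 2
sqrt_w = np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(sqrt_w[:, None] * X, sqrt_w * dbh, rcond=None)

print("OLS:", beta_ols, " WLS:", beta_wls)
```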

  16. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  17. Unraveling the unknown areas of the human metabolome: the role of infrared ion spectroscopy.

    PubMed

    Martens, Jonathan; Berden, Giel; Bentlage, Herman; Coene, Karlien L M; Engelke, Udo F; Wishart, David; van Scherpenzeel, Monique; Kluijtmans, Leo A J; Wevers, Ron A; Oomens, Jos

    2018-05-01

    The identification of molecular biomarkers is critical for diagnosing and treating patients and for establishing a fundamental understanding of the pathophysiology and underlying biochemistry of inborn errors of metabolism. Currently, liquid chromatography/high-resolution mass spectrometry and nuclear magnetic resonance spectroscopy are the principal methods used for biomarker research and for structural elucidation of small molecules in patient body fluids. While both are powerful techniques, several limitations exist that often make the identification of unknown compounds challenging. Here, we describe how infrared ion spectroscopy has the potential to be a valuable orthogonal technique that provides highly specific molecular structure information while maintaining ultra-high sensitivity. We characterize and distinguish two well-known biomarkers of inborn errors of metabolism, glutaric acid for glutaric aciduria and ethylmalonic acid for short-chain acyl-CoA dehydrogenase deficiency, using infrared ion spectroscopy. In contrast to tandem mass spectra, in which ion fragments can hardly be predicted, we show that the prediction of an IR spectrum allows reference-free identification in the case that standard compounds are either commercially or synthetically unavailable. Finally, we illustrate how functional group information can be obtained from an IR spectrum for an unknown and how this is valuable information to, for example, narrow down a list of candidate structures resulting from a database query. Early diagnosis in inborn errors of metabolism is crucial for enabling treatment and depends on the identification of biomarkers specific for the disorder. Infrared ion spectroscopy has the potential to play a pivotal role in the identification of challenging biomarkers.

  18. Quantifying Biomass and Bare Earth Changes from the Hayman Fire Using Multi-temporal Lidar

    NASA Astrophysics Data System (ADS)

    Stoker, J. M.; Kaufmann, M. R.; Greenlee, S. K.

    2007-12-01

    Small-footprint multiple-return lidar data collected in the Cheesman Lake property prior to the 2002 Hayman fire in Colorado provided an excellent opportunity to evaluate lidar as a tool to predict and analyze fire effects on both soil erosion and overstory structure. Re-measuring this area and applying change detection techniques allowed for analyses at a high level of detail. Our primary objectives focused on the use of change detection techniques using multi-temporal lidar data to: (1) evaluate the effectiveness of change detection to identify and quantify areas of erosion or deposition caused by post-fire rain events and rehab activities; (2) identify and quantify areas of biomass loss or forest structure change due to the Hayman fire; and (3) examine effects of pre-fire fuels and vegetation structure derived from lidar data on patterns of burn severity. While we were successful in identifying areas where changes occurred, the original error bounds on the variation in actual elevations made it difficult, if not misleading, to quantify volumes of material changed on a per-pixel basis. In order to minimize these variations in the two datasets, we investigated several correction and co-registration methodologies. The lessons learned from this project highlight the need for a high level of flight planning and understanding of errors in a lidar dataset in order to correctly estimate and report quantities of vertical change. Directly measuring vertical change using only lidar without ancillary information can provide errors that could make quantifications confusing, especially in areas with steep slopes.
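
    The lesson about error bounds can be made concrete with a simple differencing rule: report vertical change only where it exceeds the combined uncertainty of the two acquisitions. The RMSE values and toy surfaces below are illustrative assumptions, not the Cheesman Lake data.

```python
import numpy as np

rng = np.random.default_rng(42)

rmse_pre, rmse_post = 0.15, 0.15          # vertical RMSE of each dataset (m)
# Minimum detectable change at roughly the 95% confidence level.
min_detectable = 1.96 * np.hypot(rmse_pre, rmse_post)

pre = 10.0 + rng.normal(0.0, rmse_pre, size=(100, 100))
post = pre + rng.normal(0.0, rmse_post, size=(100, 100))
post[40:60, 40:60] -= 1.0                  # a real 1 m loss (e.g., burned canopy)

diff = post - pre
significant = np.abs(diff) > min_detectable

print(f"minimum detectable change: {min_detectable:.2f} m")
print(f"area flagged as real change: {significant.mean():.1%} of pixels")
```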

  19. The importance of robust error control in data compression applications

    NASA Technical Reports Server (NTRS)

    Woolley, S. I.

    1993-01-01

    Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1, the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviors. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
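
    The contrast drawn above between prefix codes and adaptive schemes can be illustrated with a small experiment: with a static prefix (Huffman-style) code, a flipped bit corrupts a few symbols and decoding then falls back into step, whereas an adaptive coder's model state would also be corrupted and the damage would propagate. The code table below is an assumption for illustration only.

```python
# Static prefix code (Huffman-style) and a bit-level decoder.
code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
decode = {v: k for k, v in code.items()}

def encode(text):
    return ''.join(code[ch] for ch in text)

def decode_bits(bits):
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in decode:
            out.append(decode[buf])
            buf = ''
    return ''.join(out)

message = 'abacadabba' * 2
bits = encode(message)

# Flip one bit in the middle of the stream.
i = len(bits) // 2
corrupted = bits[:i] + ('1' if bits[i] == '0' else '0') + bits[i + 1:]

print('original :', decode_bits(bits))
print('corrupted:', decode_bits(corrupted))
# Typically only a short run of symbols around the flipped bit differs;
# the prefix property lets the decoder fall back into step afterwards.
```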

  20. Understanding diagnostic errors in medicine: a lesson from aviation

    PubMed Central

    Singh, H; Petersen, L A; Thomas, E J

    2006-01-01

    The impact of diagnostic errors on patient safety in medicine is increasingly being recognized. Despite the current progress in patient safety research, the understanding of such errors and how to prevent them is inadequate. Preliminary research suggests that diagnostic errors have both cognitive and systems origins. Situational awareness is a model that is primarily used in aviation human factors research that can encompass both the cognitive and the systems roots of such errors. This conceptual model offers a unique perspective in the study of diagnostic errors. The applicability of this model is illustrated by the analysis of a patient whose diagnosis of spinal cord compression was substantially delayed. We suggest how the application of this framework could lead to potential areas of intervention and outline some areas of future research. It is possible that the use of such a model in medicine could help reduce errors in diagnosis and lead to significant improvements in patient care. Further research is needed, including the measurement of situational awareness and correlation with health outcomes. PMID:16751463

  1. Structure and Processing in Tunisian Arabic: Speech Error Data

    ERIC Educational Resources Information Center

    Hamrouni, Nadia

    2010-01-01

    This dissertation presents experimental research on speech errors in Tunisian Arabic. The nonconcatenative morphology of Arabic shows interesting interactions of phrasal and lexical constraints with morphological structure during language production. The central empirical questions revolve around properties of "exchange errors". These…

  2. Errors and Understanding: The Effects of Error-Management Training on Creative Problem-Solving

    ERIC Educational Resources Information Center

    Robledo, Issac C.; Hester, Kimberly S.; Peterson, David R.; Barrett, Jamie D.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.

    2012-01-01

    People make errors in their creative problem-solving efforts. The intent of this article was to assess whether error-management training would improve performance on creative problem-solving tasks. Undergraduates were asked to solve an educational leadership problem known to call for creative thought where problem solutions were scored for…

  3. The Influence of Friction Stir Weld Tool Form and Welding Parameters on Weld Structure and Properties: Nugget Bulge in Self-Reacting Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Schneider, Judy; Nunes, Arthur C., Jr.; Brendel, Michael S.

    2010-01-01

    Although friction stir welding (FSW) was patented in 1991, process development has been based upon trial and error and the literature still exhibits little understanding of the mechanisms determining weld structure and properties. New concepts emerging from a better understanding of these mechanisms enhance the ability of FSW engineers to think about the FSW process in new ways, inevitably leading to advances in the technology. A kinematic approach in which the FSW flow process is decomposed into several simple flow components has been found to explain the basic structural features of FSW welds and to relate them to tool geometry and process parameters. Using this modelling approach, this study reports on a correlation between the features of the weld nugget, process parameters, weld tool geometry, and weld strength. This correlation presents a way to select process parameters for a given tool geometry so as to optimize weld strength. It also provides clues that may ultimately explain why the weld strength varies within the sample population.

  4. Computational prediction of kink properties of helices in membrane proteins

    NASA Astrophysics Data System (ADS)

    Mai, T.-L.; Chen, C.-M.

    2014-02-01

    We have combined molecular dynamics simulations and fold identification procedures to investigate the structure of 696 kinked and 120 unkinked transmembrane (TM) helices in the PDBTM database. The main aim of this study is to understand the formation of helical kinks by simulating their quasi-equilibrium heating processes, which might be relevant to the prediction of their structural features. The simulated structural features of these TM helices, including the position and the angle of helical kinks, were analyzed and compared with statistical data from PDBTM. From quasi-equilibrium heating processes of TM helices with four very different relaxation time constants, we found that these processes gave comparable predictions of the structural features of TM helices. Overall, 95 % of our best kink position predictions have an error of no more than two residues and 75 % of our best angle predictions have an error of less than 15°. Various structure assessments have been carried out to assess our predicted models of TM helices in PDBTM. Our results show that, in 696 predicted kinked helices, 70 % have an RMSD less than 2 Å, 71 % have a TM-score greater than 0.5, 69 % have a MaxSub score greater than 0.8, 60 % have a GDT-TS score greater than 85, and 58 % have a GDT-HA score greater than 70. For unkinked helices, our predicted models are also highly consistent with their crystal structures. These results provide strong support for our assumption that kink formation of TM helices in quasi-equilibrium heating processes is relevant to predicting the structure of TM helices.

  5. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  6. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes especially complex when more than five precision points of the coupler are specified, and it is therefore prone to large structural error. The methodology for arriving at a more precise solution is to use an optimization technique. The work presented herein considers methods of optimizing the structural error in a closed kinematic chain with a single degree of freedom, for generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop a five-precision-point synthesis of the mechanism. An extended formulation is proposed and results are obtained that verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative analysis of the structural error optimized through the least-squares method and through the extended Freudenstein-Chebyshev method is presented.
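
    For reference, the Freudenstein equation is linear in its three link-ratio parameters, so a five-precision-point function-generation problem reduces to an over-determined linear system whose least-squares residual is one measure of the structural error. The sketch below illustrates this for y = log(x); the input and output angle ranges are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Five precision points for generating y = log(x); the angle ranges below are
# illustrative choices, not values from the paper.
x = np.linspace(1.0, 2.0, 5)
y = np.log(x)
theta2 = np.radians(30.0 + 90.0 * (x - x[0]) / (x[-1] - x[0]))   # input (crank) angles
theta4 = np.radians(60.0 + 60.0 * (y - y[0]) / (y[-1] - y[0]))   # output (rocker) angles

# Freudenstein equation for a four-bar linkage:
#   K1*cos(theta4) - K2*cos(theta2) + K3 = cos(theta2 - theta4)
# with K1 = d/a, K2 = d/c, K3 = (a^2 - b^2 + c^2 + d^2) / (2*a*c).
# It is linear in K1, K2, K3, so five precision points give an over-determined
# system solved here in the least-squares sense.
A = np.column_stack([np.cos(theta4), -np.cos(theta2), np.ones_like(theta2)])
rhs = np.cos(theta2 - theta4)
K, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# The residual at the precision points is one measure of structural error.
structural_error = A @ K - rhs
print("K1, K2, K3 =", K)
print("structural error at precision points:", structural_error)
```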

  7. Power analysis to detect treatment effect in longitudinal studies with heterogeneous errors and incomplete data.

    PubMed

    Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián

    2016-08-01

    S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of statistically processed indexes. The main conclusion of the study is the accuracy of the proposed method to calculate sample size in the described situations with the stipulated power criteria.

  8. Alcohol effects on performance monitoring and adjustment: affect modulation and impairment of evaluative cognitive control.

    PubMed

    Bartholow, Bruce D; Henry, Erika A; Lust, Sarah A; Saults, J Scott; Wood, Phillip K

    2012-02-01

    Alcohol is known to impair self-regulatory control of behavior, though mechanisms for this effect remain unclear. Here, we tested the hypothesis that alcohol's reduction of negative affect (NA) is a key mechanism for such impairment. This hypothesis was tested by measuring the amplitude of the error-related negativity (ERN), a component of the event-related brain potential (ERP) posited to reflect the extent to which behavioral control failures are experienced as distressing, while participants completed a laboratory task requiring self-regulatory control. Alcohol reduced both the ERN and error positivity (Pe) components of the ERP following errors and impaired typical posterror behavioral adjustment. Structural equation modeling indicated that effects of alcohol on both the ERN and posterror adjustment were significantly mediated by reductions in NA. Effects of alcohol on Pe amplitude were unrelated to posterror adjustment, however. These findings indicate a role for affect modulation in understanding alcohol's effects on self-regulatory impairment and more generally support theories linking the ERN with a distress-related response to control failures. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  9. Mismeasurement and the resonance of strong confounders: correlated errors.

    PubMed

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.

  10. Prevention of errors and user alienation in healthcare IT integration programmes.

    PubMed

    Benson, Tim

    2007-01-01

    The design, development and implementation stages of integrated computer projects require close collaboration between users and developers, but this is particularly difficult where there are multiple specialties, organisations and system suppliers. Users become alienated if they are not consulted, but consultation is meaningless if they cannot understand the specifications showing exactly what is proposed. We need stringent specifications that users and developers can review and check before most of the work is done. Avoidable errors lead to delays and cost over-runs. The number of errors is a function of the likelihood of misunderstanding any part of the specification, the number of individuals involved and the number of choices or options. One way to reduce these problems is to provide a conceptual design specification, comprising detailed Unified Modelling Language (UML) class and activity diagrams, data definitions and terminology, in addition to conventional technology-specific specifications. A conceptual design specification needs to be straightforward to understand and use, transparent and unambiguous. People find structured diagrams, such as maps, charts and blueprints, easier to use than reports or tables. Other desirable properties include being technology-independent, comprehensive, stringent, coherent, consistent, composed from reusable elements and computer-readable (XML). When users and developers share the same agreed conceptual design specification, this can be one of the master documents of a formal contract between the stakeholders. No extra meaning should be added during the later stages of the project life cycle.

  11. Neurochemical enhancement of conscious error awareness.

    PubMed

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  12. Nurses' attitudes and perceived barriers to the reporting of medication administration errors.

    PubMed

    Yung, Hai-Peng; Yu, Shu; Chu, Chi; Hou, I-Ching; Tang, Fu-In

    2016-07-01

    (1) To explore attitudes and perceived barriers to reporting medication administration errors and (2) to understand the characteristics of error reports and nurses' feelings about them. Under-reporting of medication administration errors is a global concern related to the safety of patient care. Understanding nurses' attitudes and perceived barriers to error reporting is the initial step to increasing the reporting rate. A cross-sectional, descriptive survey with a self-administered questionnaire was completed by the nurses of a medical centre hospital in Taiwan. A total of 306 nurses participated in the study. Nurses' attitudes towards medication administration error reporting tended towards the positive. The major perceived barrier was fear of the consequences after reporting. The results demonstrated that 88.9% of medication administration errors were reported orally, whereas 19.0% were reported through the hospital internet system. Self-recrimination was the common feeling of nurses after the commission of a medication administration error. Even if hospital management encourages errors to be reported without recrimination, nurses' attitudes toward medication administration error reporting are not very positive and fear is the most prominent barrier contributing to underreporting. Nursing managers should establish anonymous reporting systems and counselling classes to create a secure atmosphere to reduce nurses' fear and provide incentives to encourage reporting. © 2016 John Wiley & Sons Ltd.

  13. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
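
    The effect of explicitly modelling structural error can be sketched very simply: when the residuals between observations and a deliberately misspecified simulator are given a correlated (kernel-based) error model rather than being treated as independent noise, part of the misfit is attributed to structural error instead of being forced into the parameter estimate. The toy simulator, squared-exponential kernel and hyper-parameters below are illustrative assumptions, not the study's groundwater model or its DREAM-ZS sampler.

```python
import numpy as np

rng = np.random.default_rng(3)

x = np.linspace(0.0, 10.0, 40)
truth = 2.0 * x + 0.5 * np.sin(1.5 * x) * x      # "true" system
obs = truth + rng.normal(0.0, 0.3, x.size)       # noisy observations

def simulator(k):
    return k * x                                  # deliberately misspecified (linear) model

def sq_exp_cov(x, sigma, ell, noise):
    d = x[:, None] - x[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2) + noise**2 * np.eye(x.size)

def log_like(k, with_error_model):
    r = obs - simulator(k)
    if with_error_model:
        C = sq_exp_cov(x, sigma=2.0, ell=2.0, noise=0.3)   # correlated structural error
    else:
        C = 0.3**2 * np.eye(x.size)                        # independent noise only
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet)

ks = np.linspace(1.0, 3.5, 251)
for flag, label in [(False, "iid errors"), (True, "correlated structural error")]:
    ll = np.array([log_like(k, flag) for k in ks])
    print(f"{label:28s} max-likelihood k = {ks[ll.argmax()]:.2f}")
# The correlated error model changes the estimate of k because part of the
# misfit is attributed to structural error rather than to the parameter.
```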

  14. Predicting and interpreting identification errors in military vehicle training using multidimensional scaling.

    PubMed

    Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R

    2014-01-01

    We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
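
    A minimal sketch of the analysis idea, under the assumption of a small made-up rating matrix: similarity ratings are converted to dissimilarities, embedded with multidimensional scaling, and items that end up close in the recovered space are predicted to be confusable.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical vehicle names and 0-10 similarity ratings (symmetric), made up
# for illustration; real data would come from the rating task.
vehicles = ['M1A2', 'T-72', 'Leopard 2', 'Challenger 2']
similarity = np.array([
    [10, 3, 8, 7],
    [ 3, 10, 4, 3],
    [ 8, 4, 10, 6],
    [ 7, 3, 6, 10],
], dtype=float)
dissimilarity = similarity.max() - similarity

# Embed the dissimilarities in a 2D "psychological space".
embedding = MDS(n_components=2, dissimilarity='precomputed',
                random_state=0).fit_transform(dissimilarity)

# Predict the most likely confusion for each vehicle: its nearest neighbour
# in the recovered space.
dist = np.linalg.norm(embedding[:, None, :] - embedding[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)
for i, name in enumerate(vehicles):
    print(f'{name:12s} most confusable with {vehicles[int(dist[i].argmin())]}')
```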

  15. To Err Is Human; To Structurally Prime from Errors Is Also Human

    ERIC Educational Resources Information Center

    Slevc, L. Robert; Ferreira, Victor S.

    2013-01-01

    Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…

  16. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  17. Logical Fallacies and the Abuse of Climate Science: Fire, Water, and Ice

    NASA Astrophysics Data System (ADS)

    Gleick, P. H.

    2012-12-01

    Good policy without good science and analysis is unlikely. Good policy with bad science is even more unlikely. Unfortunately, there is a long history of abuse or misuse of science in fields with ideological, religious, or economically controversial policy implications, such as planetary physics during the time of Galileo, the evolution debate, or climate change. Common to these controversies are what are known as "logical fallacies" -- patterns of reasoning that are always -- or at least commonly -- wrong due to a flaw in the structure of the argument that renders the argument invalid. All scientists should understand the nature of logical fallacies in order to (1) avoid making mistakes and reaching unsupported conclusions, (2) help them understand and refute the flaws in arguments made by others, and (3) aid in communicating science to the public. This talk will present a series of logical fallacies often made in the climate science debate, including "arguments from ignorance," "arguments from error," "arguments from misinterpretation," and "cherry picking." Specific examples will be presented in the areas of temperature analysis, water resources, and ice dynamics, with a focus on selective use or misuse of data. (Figure caption: "Argument from Error" - an amusing example of a logical fallacy.)

  18. An embedded longitudinal multi-faceted qualitative evaluation of a complex cluster randomized controlled trial aiming to reduce clinically important errors in medicines management in general practice.

    PubMed

    Cresswell, Kathrin M; Sadler, Stacey; Rodgers, Sarah; Avery, Anthony; Cantrill, Judith; Murray, Scott A; Sheikh, Aziz

    2012-06-08

    There is a need to shed light on the pathways through which complex interventions mediate their effects in order to enable critical reflection on their transferability. We sought to explore and understand key stakeholder accounts of the acceptability, likely impact and strategies for optimizing and rolling-out a successful pharmacist-led information technology-enabled (PINCER) intervention, which substantially reduced the risk of clinically important errors in medicines management in primary care. Data were collected at two geographical locations in central England through a combination of one-to-one longitudinal semi-structured telephone interviews (one at the beginning of the trial and another when the trial was well underway), relevant documents, and focus group discussions following delivery of the PINCER intervention. Participants included PINCER pharmacists, general practice staff, researchers involved in the running of the trial, and primary care trust staff. PINCER pharmacists were interviewed at three different time-points during the delivery of the PINCER intervention. Analysis was thematic with diffusion of innovation theory providing a theoretical framework. We conducted 52 semi-structured telephone interviews and six focus group discussions with 30 additional participants. In addition, documentary data were collected from six pharmacist diaries, along with notes from four meetings of the PINCER pharmacists and feedback meetings from 34 practices. Key findings that helped to explain the success of the PINCER intervention included the perceived importance of focusing on prescribing errors to all stakeholders, and the credibility and appropriateness of a pharmacist-led intervention to address these shortcomings. Central to this was the face-to-face contact and relationship building between pharmacists and a range of practice staff, and pharmacists' explicitly designated role as a change agent. However, important concerns were identified about the likely sustainability of this new model of delivering care, in the absence of an appropriate support network for pharmacists and career development pathways. This embedded qualitative inquiry has helped to understand the complex organizational and social environment in which the trial was undertaken and the PINCER intervention was delivered. The longitudinal element has given insight into the dynamic changes and developments over time. Medication errors and ways to address these are high on stakeholders' agendas. Our results further indicate that pharmacists were, because of their professional standing and skill-set, able to engage with the complex general practice environment and able to identify and manage many clinically important errors in medicines management. The transferability of the PINCER intervention approach, both in relation to other prescribing errors and to other practices, is likely to be high.

  19. An embedded longitudinal multi-faceted qualitative evaluation of a complex cluster randomized controlled trial aiming to reduce clinically important errors in medicines management in general practice

    PubMed Central

    2012-01-01

    Background There is a need to shed light on the pathways through which complex interventions mediate their effects in order to enable critical reflection on their transferability. We sought to explore and understand key stakeholder accounts of the acceptability, likely impact and strategies for optimizing and rolling-out a successful pharmacist-led information technology-enabled (PINCER) intervention, which substantially reduced the risk of clinically important errors in medicines management in primary care. Methods Data were collected at two geographical locations in central England through a combination of one-to-one longitudinal semi-structured telephone interviews (one at the beginning of the trial and another when the trial was well underway), relevant documents, and focus group discussions following delivery of the PINCER intervention. Participants included PINCER pharmacists, general practice staff, researchers involved in the running of the trial, and primary care trust staff. PINCER pharmacists were interviewed at three different time-points during the delivery of the PINCER intervention. Analysis was thematic with diffusion of innovation theory providing a theoretical framework. Results We conducted 52 semi-structured telephone interviews and six focus group discussions with 30 additional participants. In addition, documentary data were collected from six pharmacist diaries, along with notes from four meetings of the PINCER pharmacists and feedback meetings from 34 practices. Key findings that helped to explain the success of the PINCER intervention included the perceived importance of focusing on prescribing errors to all stakeholders, and the credibility and appropriateness of a pharmacist-led intervention to address these shortcomings. Central to this was the face-to-face contact and relationship building between pharmacists and a range of practice staff, and pharmacists’ explicitly designated role as a change agent. However, important concerns were identified about the likely sustainability of this new model of delivering care, in the absence of an appropriate support network for pharmacists and career development pathways. Conclusions This embedded qualitative inquiry has helped to understand the complex organizational and social environment in which the trial was undertaken and the PINCER intervention was delivered. The longitudinal element has given insight into the dynamic changes and developments over time. Medication errors and ways to address these are high on stakeholders’ agendas. Our results further indicate that pharmacists were, because of their professional standing and skill-set, able to engage with the complex general practice environment and able to identify and manage many clinically important errors in medicines management. The transferability of the PINCER intervention approach, both in relation to other prescribing errors and to other practices, is likely to be high. PMID:22682095

  20. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe

    2016-04-01

    Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly-used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France), acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.
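
    The Monte Carlo idea described above can be illustrated with a toy sketch: repeatedly draw a random subset of noisy ground control points, fit a surface, and summarise the error at independent check points. This is only a stand-in for the actual SfM/PhotoScan bundle-adjustment workflow; the plane-fit "terrain", noise levels and point counts below are hypothetical assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def terrain(x, y):
            # Toy ground surface standing in for the real topography.
            return 0.05 * x - 0.02 * y + 3.0

        xy_all = rng.uniform(0, 100, size=(40, 2))      # candidate GCP locations
        check_xy = rng.uniform(0, 100, size=(200, 2))   # independent check points
        check_z = terrain(*check_xy.T)

        def mean_checkpoint_rmse(n_gcp, gcp_noise=0.05, trials=500):
            """Monte Carlo: draw n_gcp noisy control points, fit a plane (stand-in
            for the bundle adjustment), and average the RMSE at the check points."""
            rmses = []
            for _ in range(trials):
                idx = rng.choice(len(xy_all), size=n_gcp, replace=False)
                xy = xy_all[idx]
                z = terrain(*xy.T) + rng.normal(0, gcp_noise, n_gcp)
                A = np.column_stack([xy, np.ones(n_gcp)])
                coef, *_ = np.linalg.lstsq(A, z, rcond=None)
                pred = np.column_stack([check_xy, np.ones(len(check_xy))]) @ coef
                rmses.append(np.sqrt(np.mean((pred - check_z) ** 2)))
            return np.mean(rmses)

        for n in (3, 5, 10, 20):
            print(n, "GCPs -> mean check-point RMSE:", round(mean_checkpoint_rmse(n), 4))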

  1. Network structure from rich but noisy data

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.

    2018-06-01

    Driven by growing interest across the sciences, a large number of empirical studies have been conducted in recent years of the structure of networks ranging from the Internet and the World Wide Web to biological networks and social networks. The data produced by these experiments are often rich and multimodal, yet at the same time they may contain substantial measurement error [1-7]. Accurate analysis and understanding of networked systems requires a way of estimating the true structure of networks from such rich but noisy data [8-15]. Here we describe a technique that allows us to make optimal estimates of network structure from complex data in arbitrary formats, including cases where there may be measurements of many different types, repeated observations, contradictory observations, annotations or metadata, or missing data. We give example applications to two different social networks, one derived from face-to-face interactions and one from self-reported friendships.

  2. Animal movement constraints improve resource selection inference in the presence of telemetry error

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.

    2016-01-01

    Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.

  3. Choose and choose again: appearance-reality errors, pragmatics and logical ability.

    PubMed

    Deák, Gedeon O; Enright, Brian

    2006-05-01

    In the Appearance/Reality (AR) task some 3- and 4-year-old children make perseverative errors: they choose the same word for the appearance and the function of a deceptive object. Are these errors specific to the AR task, or signs of a general question-answering problem? Preschoolers completed five tasks: AR; simple successive forced-choice question pairs (QP); flexible naming of objects (FN); working memory (WM) span; and indeterminacy detection (ID). AR errors correlated with QP errors. Insensitivity to indeterminacy predicted perseveration in both tasks. Neither WM span nor flexible naming predicted other measures. Age predicted sensitivity to indeterminacy. These findings suggest that AR tests measure a pragmatic understanding; specifically, different questions about a topic usually call for different answers. This understanding is related to the ability to detect indeterminacy of each question in a series. AR errors are unrelated to the ability to represent an object as belonging to multiple categories, to working memory span, or to inhibiting previously activated words.

  4. The GEnes in Myopia (GEM) study in understanding the aetiology of refractive errors.

    PubMed

    Baird, Paul N; Schäche, Maria; Dirani, Mohamed

    2010-11-01

    Refractive errors represent the leading cause of correctable vision impairment and blindness in the world with an estimated 2 billion people affected. Refractive error refers to a group of refractive conditions including hypermetropia, myopia, astigmatism and presbyopia but relatively little is known about their aetiology. In order to explore the potential role of genetic determinants in refractive error the "GEnes in Myopia (GEM) study" was established in 2004. The findings that have resulted from this study have not only provided greater insight into the role of genes and other factors involved in myopia but have also gone some way to uncovering the aetiology of other refractive errors. This review will describe some of the major findings of the GEM study and their relative contribution to the literature, illuminate where the deficiencies are in our understanding of the development of refractive errors and how we will advance this field in the future. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. CAMD studies of coal structure and coal liquefaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faulon, J.L.; Carlson, G.A.

    The macromolecular structure of coal is essential to understanding the mechanisms occurring during coal liquefaction. Many attempts to model coal structure can be found in the literature. For high volatile bituminous coal, the subject of interest here, the most commonly quoted models are those of Given, Wiser, Solomon, and Shinn. In past work, the authors have used computer-aided molecular design (CAMD) to develop three-dimensional representations of the above coal models. The three-dimensional structures were energy minimized using molecular mechanics and molecular dynamics. True density and micropore volume were evaluated for each model. With the exception of Given's model, the computed density values were found to be in agreement with the corresponding experimental results. The above coal models were constructed by a trial-and-error technique consisting of a manual fitting of the analytical data. It is obvious that for each model the amount of data is small compared to the actual complexity of coal, and for all of the models more than one structure can be built. Hence, the process by which one structure is chosen instead of another is not clear. In fact, all the authors agree that the structure they derived was only intended to represent an "average" coal model rather than a unique correct structure. The purpose of this program is to further develop CAMD techniques to increase the understanding of coal structure and its relationship to coal liquefaction.

  6. Advances in Homology Protein Structure Modeling

    PubMed Central

    Xiang, Zhexin

    2007-01-01

    Homology modeling plays a central role in determining protein structure in the structural genomics project. The importance of homology modeling has been steadily increasing because of the large gap that exists between the overwhelming number of available protein sequences and experimentally solved protein structures, and also, more importantly, because of the increasing reliability and accuracy of the method. In fact, a protein sequence with over 30% identity to a known structure can often be predicted with an accuracy equivalent to a low-resolution X-ray structure. The recent advances in homology modeling, especially in detecting distant homologues, aligning sequences with template structures, modeling of loops and side chains, as well as detecting errors in a model, have contributed to reliable prediction of protein structure, which was not possible even several years ago. The ongoing efforts in solving protein structures, which can be time-consuming and often difficult, will continue to spur the development of a host of new computational methods that can fill in the gap and further contribute to understanding the relationship between protein structure and function. PMID:16787261

  7. Causes and Prevention of Laparoscopic Bile Duct Injuries

    PubMed Central

    Way, Lawrence W.; Stewart, Lygia; Gantert, Walter; Liu, Kingsway; Lee, Crystine M.; Whang, Karen; Hunter, John G.

    2003-01-01

    Objective To apply human performance concepts in an attempt to understand the causes of and prevent laparoscopic bile duct injury. Summary Background Data Powerful conceptual advances have been made in understanding the nature and limits of human performance. Applying these findings in high-risk activities, such as commercial aviation, has allowed the work environment to be restructured to substantially reduce human error. Methods The authors analyzed 252 laparoscopic bile duct injuries according to the principles of the cognitive science of visual perception, judgment, and human error. The injury distribution was class I, 7%; class II, 22%; class III, 61%; and class IV, 10%. The data included operative radiographs, clinical records, and 22 videotapes of original operations. Results The primary cause of error in 97% of cases was a visual perceptual illusion. Faults in technical skill were present in only 3% of injuries. Knowledge and judgment errors were contributory but not primary. Sixty-four injuries (25%) were recognized at the index operation; the surgeon identified the problem early enough to limit the injury in only 15 (6%). In class III injuries the common duct, erroneously believed to be the cystic duct, was deliberately cut. This stemmed from an illusion of object form due to a specific uncommon configuration of the structures and the heuristic nature (unconscious assumptions) of human visual perception. The videotapes showed the persuasiveness of the illusion, and many operative reports described the operation as routine. Class II injuries resulted from a dissection too close to the common hepatic duct. Fundamentally an illusion, it was contributed to in some instances by working too deep in the triangle of Calot. Conclusions These data show that errors leading to laparoscopic bile duct injuries stem principally from misperception, not errors of skill, knowledge, or judgment. The misperception was so compelling that in most cases the surgeon did not recognize a problem. Even when irregularities were identified, corrective feedback did not occur, which is characteristic of human thinking under firmly held assumptions. These findings illustrate the complexity of human error in surgery while simultaneously providing insights. They demonstrate that automatically attributing technical complications to behavioral factors that rely on the assumption of control is likely to be wrong. Finally, this study shows that there are only a few points within laparoscopic cholecystectomy where the complication-causing errors occur, which suggests that focused training to heighten vigilance might be able to decrease the incidence of bile duct injury. PMID:12677139

  8. INVOLVEMENT OF MULTIPLE MOLECULAR PATHWAYS IN THE GENETICS OF OCULAR REFRACTION AND MYOPIA.

    PubMed

    Wojciechowski, Robert; Cheng, Ching-Yu

    2018-01-01

    The prevalence of myopia has increased dramatically worldwide within the last three decades. Recent studies have shown that refractive development is influenced by environmental, behavioral, and inherited factors. This review aims to analyze recent progress in the genetics of refractive error and myopia. A comprehensive literature search of PubMed and OMIM was conducted to identify relevant articles in the genetics of refractive error. Genome-wide association and sequencing studies have increased our understanding of the genetics involved in refractive error. These studies have identified interesting candidate genes. All genetic loci discovered to date indicate that refractive development is a heterogeneous process mediated by a number of overlapping biological processes. The exact mechanisms by which these biological networks regulate eye growth are poorly understood. Although several individual genes and/or molecular pathways have been investigated in animal models, a systematic network-based approach in modeling human refractive development is necessary to understand the complex interplay between genes and environment in refractive error. New biomedical technologies and better-designed studies will continue to refine our understanding of the genetics and molecular pathways of refractive error, and may lead to preventative and therapeutic measures to combat the myopia epidemic.

  9. Managerial process improvement: a lean approach to eliminating medication delivery.

    PubMed

    Hussain, Aftab; Stewart, LaShonda M; Rivers, Patrick A; Munchus, George

    2015-01-01

    Statistical evidence shows that medication errors are a major cause of injuries that concern all health care organizations. Despite all the efforts to improve the quality of care, the lack of understanding and inability of management to design a robust system that will strategically target those factors is a major cause of distress. The paper aims to discuss these issues. Achieving optimum organizational performance requires two key variables: work process factors and human performance factors. The approach is that healthcare administrators must take into account both variables in designing a strategy to reduce medication errors. However, strategies that will combat such phenomena require that managers and administrators understand the key factors that are causing medication delivery errors. The authors recommend that healthcare organizations implement the Toyota Production System (TPS) combined with human performance improvement (HPI) methodologies to eliminate medication delivery errors in hospitals. Despite all the efforts to improve the quality of care, there continues to be a lack of understanding and an inability of management to design a robust system that will strategically target those factors associated with medication errors. This paper proposes a solution to an ambiguous workflow process using the TPS combined with the HPI system.

  10. Understanding the structural drivers governing glass-water interactions in borosilicate based model bioactive glasses.

    PubMed

    Stone-Weiss, Nicholas; Pierce, Eric M; Youngman, Randall E; Gulbiten, Ozgur; Smith, Nicholas J; Du, Jincheng; Goel, Ashutosh

    2018-01-01

    The past decade has witnessed a significant upsurge in the development of borate and borosilicate based resorbable bioactive glasses owing to their faster degradation rate in comparison to their silicate counterparts. However, due to our lack of understanding about the fundamental science governing the aqueous corrosion of these glasses, most of the borate/borosilicate based bioactive glasses reported in the literature have been designed by "trial-and-error" approach. With an ever-increasing demand for their application in treating a broad spectrum of non-skeletal health problems, it is becoming increasingly difficult to design advanced glass formulations using the same conventional approach. Therefore, a paradigm shift from the "trial-and-error" approach to "materials-by-design" approach is required to develop new-generations of bioactive glasses with controlled release of functional ions tailored for specific patients and disease states, whereby material functions and properties can be predicted from first principles. Realizing this goal, however, requires a thorough understanding of the complex sequence of reactions that control the dissolution kinetics of bioactive glasses and the structural drivers that govern them. While there is a considerable amount of literature published on chemical dissolution behavior and apatite-forming ability of potentially bioactive glasses, the majority of this literature has been produced on silicate glass chemistries using different experimental and measurement protocols. It follows that inter-comparison of different datasets reveals inconsistencies between experimental groups. There are also some major experimental challenges or choices that need to be carefully navigated to unearth the mechanisms governing the chemical degradation behavior and kinetics of boron-containing bioactive glasses, and to accurately determine the composition-structure-property relationships. In order to address these challenges, a simplified borosilicate based model melt-quenched bioactive glass system has been studied to depict the impact of thermal history on its molecular structure and dissolution behavior in water. It has been shown that the methodology of quenching of the glass melt impacts the dissolution rate of the studied glasses by 1.5×-3× depending on the changes induced in their molecular structure due to variation in thermal history. Further, a recommendation has been made to study dissolution behavior of bioactive glasses using surface area of the sample - to - volume of solution (SA/V) approach instead of the currently followed mass of sample - to - volume of solution approach. The structural and chemical dissolution data obtained from bioactive glasses following the approach presented in this paper can be used to develop the structural descriptors and potential energy functions over a broad range of bioactive glass compositions. Realizing the goal of designing third generation bioactive glasses requires a thorough understanding of the complex sequence of reactions that control their rate of degradation (in physiological fluids) and the structural drivers that control them. In this article, we have highlighted some major experimental challenges and choices that need to be carefully navigated in order to unearth the mechanisms governing the chemical dissolution behavior of borosilicate based bioactive glasses. 
The proposed experimental approach allows us to gain a new level of conceptual understanding about the composition-structure-property relationships in these glass systems, which can be applied to attain a significant leap in designing borosilicate based bioactive glasses with controlled dissolution rates tailored for specific patient and disease states. Copyright © 2017 Acta Materialia Inc. All rights reserved.

  11. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets, and for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was ±5% with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.
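
    As a small illustration of the double-Gaussian response model mentioned above, the sketch below fits a sum of two Gaussians to a hypothetical one-dimensional detector response with scipy; the data, parameter values, and the subsequent mapping of fitted widths to a DOI estimate are assumptions for illustration, not the cartesianDETECT2 implementation.

        import numpy as np
        from scipy.optimize import curve_fit

        def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
            """Sum of two Gaussians, a simple model of a pixel's light-spread response."""
            return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
                    + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

        # Hypothetical detector response sampled across a column (arbitrary units).
        x = np.linspace(-5, 5, 201)
        true_params = (1.0, 0.0, 0.6, 0.3, 0.0, 2.5)   # narrow core plus broad tail
        rng = np.random.default_rng(0)
        y = double_gaussian(x, *true_params) + rng.normal(0, 0.01, x.size)

        # Fit the double-Gaussian; the fitted widths (or amplitude ratio) could then be
        # mapped to a depth-of-interaction estimate through a calibration curve.
        popt, _ = curve_fit(double_gaussian, x, y, p0=(1, 0, 1, 0.2, 0, 3))
        print("fitted widths:", popt[2], popt[5])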

  12. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection

    ERIC Educational Resources Information Center

    Buschang, Rebecca Ellen

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  13. A Hands-On Exercise Improves Understanding of the Standard Error of the Mean

    ERIC Educational Resources Information Center

    Ryan, Robert S.

    2006-01-01

    One of the most difficult concepts for statistics students is the standard error of the mean. To improve understanding of this concept, 1 group of students used a hands-on procedure to sample from small populations representing either a true or false null hypothesis. The distribution of 120 sample means (n = 3) from each population had standard…
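
    The concept this exercise targets can also be reproduced in a few lines of code: draw many small samples (n = 3, as in the abstract) from a hypothetical population and compare the spread of the sample means with sigma/sqrt(n). The population parameters and number of repetitions below are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(42)
        population = rng.normal(loc=100, scale=15, size=10_000)  # hypothetical population
        n, reps = 3, 120                                         # n = 3 as in the exercise

        # Draw 120 samples of size 3 and record each sample mean.
        sample_means = np.array([rng.choice(population, size=n).mean() for _ in range(reps)])

        # The empirical SD of the sample means should approach the theoretical SEM.
        print("SD of sample means:", sample_means.std(ddof=1))
        print("theoretical SEM   :", population.std(ddof=1) / np.sqrt(n))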

  14. Integrating prior information into microwave tomography part 2: Impact of errors in prior information on microwave tomography image quality.

    PubMed

    Kurrant, Douglas; Fear, Elise; Baran, Anastasia; LoVetri, Joe

    2017-12-01

    The authors have developed a method to combine a patient-specific map of tissue structure and average dielectric properties with microwave tomography. The patient-specific map is acquired with radar-based techniques and serves as prior information for microwave tomography. The impact that the degree of structural detail included in this prior information has on image quality was reported in a previous investigation. The aim of the present study is to extend this previous work by identifying and quantifying the impact that errors in the prior information have on image quality, including the reconstruction of internal structures and lesions embedded in fibroglandular tissue. This study also extends the work of others reported in literature by emulating a clinical setting with a set of experiments that incorporate heterogeneity into both the breast interior and glandular region, as well as prior information related to both fat and glandular structures. Patient-specific structural information is acquired using radar-based methods that form a regional map of the breast. Errors are introduced to create a discrepancy in the geometry and electrical properties between the regional map and the model used to generate the data. This permits the impact that errors in the prior information have on image quality to be evaluated. Image quality is quantitatively assessed by measuring the ability of the algorithm to reconstruct both internal structures and lesions embedded in fibroglandular tissue. The study is conducted using both 2D and 3D numerical breast models constructed from MRI scans. The reconstruction results demonstrate robustness of the method relative to errors in the dielectric properties of the background regional map, and to misalignment errors. These errors do not significantly influence the reconstruction accuracy of the underlying structures, or the ability of the algorithm to reconstruct malignant tissue. Although misalignment errors do not significantly impact the quality of the reconstructed fat and glandular structures for the 3D scenarios, the dielectric properties are reconstructed less accurately within the glandular structure for these cases relative to the 2D cases. However, general agreement between the 2D and 3D results was found. A key contribution of this paper is the detailed analysis of the impact of prior information errors on the reconstruction accuracy and ability to detect tumors. The results support the utility of acquiring patient-specific information with radar-based techniques and incorporating this information into MWT. The method is robust to errors in the dielectric properties of the background regional map, and to misalignment errors. Completion of this analysis is an important step toward developing the method into a practical diagnostic tool. © 2017 American Association of Physicists in Medicine.

  15. Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.

    PubMed

    Ding, Cherng G; Jane, Ten-Der

    2012-09-01

    In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capability of modeling change over time in latent constructs measured by multiple indicators; embedding the LGM into a larger latent variable model; incorporating measurement models for latent predictors; better assessing model fit; and flexibility in specifying error covariance structures. The strength of PROC CALIS comes with technical coding work that needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with the data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.
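
    The article itself documents SAS PROC CALIS syntax; as a language-neutral illustration of what a Level-1 error covariance structure contributes, the sketch below builds an AR(1) within-person error covariance and combines it with hypothetical growth-factor parameters to obtain the model-implied covariance of the repeated measures (Sigma = Lambda Psi Lambda' + Theta). All numerical values are assumptions.

        import numpy as np

        def ar1_level1_cov(n_waves: int, sigma2: float, rho: float) -> np.ndarray:
            """Level-1 (within-person) error covariance with an AR(1) structure:
            Cov(e_t, e_s) = sigma2 * rho**|t - s|."""
            lags = np.abs(np.subtract.outer(np.arange(n_waves), np.arange(n_waves)))
            return sigma2 * rho ** lags

        # Linear latent growth model with 4 waves: y_t = eta0 + t*eta1 + e_t.
        T = 4
        Lambda = np.column_stack([np.ones(T), np.arange(T)])   # intercept/slope loadings
        Psi = np.array([[1.0, 0.2],                            # hypothetical growth-factor
                        [0.2, 0.5]])                           # covariance matrix
        Theta = ar1_level1_cov(T, sigma2=0.8, rho=0.4)         # Level-1 error structure

        # Model-implied covariance of the repeated measures:
        Sigma = Lambda @ Psi @ Lambda.T + Theta
        print(np.round(Sigma, 3))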

  16. The Public Understanding of Error in Educational Assessment

    ERIC Educational Resources Information Center

    Gardner, John

    2013-01-01

    Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes; events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…

  17. Effective Compiler Error Message Enhancement for Novice Programming Students

    ERIC Educational Resources Information Center

    Becker, Brett A.; Glanville, Graham; Iwashima, Ricardo; McDonnell, Claire; Goslin, Kyle; Mooney, Catherine

    2016-01-01

    Programming is an essential skill that many computing students are expected to master. However, programming can be difficult to learn. Successfully interpreting compiler error messages (CEMs) is crucial for correcting errors and progressing toward success in programming. Yet these messages are often difficult to understand and pose a barrier to…

  18. Exploratory Factor Analysis of Reading, Spelling, and Math Errors

    ERIC Educational Resources Information Center

    O'Brien, Rebecca; Pan, Xingyu; Courville, Troy; Bray, Melissa A.; Breaux, Kristina; Avitia, Maria; Choi, Dowon

    2017-01-01

    Norm-referenced error analysis is useful for understanding individual differences in students' academic skill development and for identifying areas of skill strength and weakness. The purpose of the present study was to identify underlying connections between error categories across five language and math subtests of the Kaufman Test of…

  19. Structural power flow measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falter, K.J.; Keltie, R.F.

    Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.

  20. A review of failure models for unidirectional ceramic matrix composites under monotonic loads

    NASA Technical Reports Server (NTRS)

    Tripp, David E.; Hemann, John H.; Gyekenyesi, John P.

    1989-01-01

    Ceramic matrix composites offer significant potential for improving the performance of turbine engines. In order to achieve their potential, however, improvements in design methodology are needed. In the past most components using structural ceramic matrix composites were designed by trial and error since the emphasis of feasibility demonstration minimized the development of mathematical models. To understand the key parameters controlling response and the mechanics of failure, the development of structural failure models is required. A review of short term failure models with potential for ceramic matrix composite laminates under monotonic loads is presented. Phenomenological, semi-empirical, shear-lag, fracture mechanics, damage mechanics, and statistical models for the fast fracture analysis of continuous fiber unidirectional ceramic matrix composites under monotonic loads are surveyed.

  1. Final Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josef Michl

    2011-10-31

    In this project we have established guidelines for the design of organic chromophores suitable for producing high triplet yields via singlet fission. We have proven their utility by identifying a chromophore of a structural class that had never been examined for singlet fission before, 1,3-diphenylisobenzofuran, and demonstrating in two independent ways that a thin layer of this material produces a triplet yield of 200% within experimental error. We have also designed a second chromophore of a very different type, again of a structural class that had not been examined for singlet fission before, and found that in a thin layer it produces a 70% triplet yield. Finally, we have enhanced the theoretical understanding of the quantum mechanical nature of the singlet fission process.

  2. Challenges and Opportunities of Long-Term Continuous Stream Metabolism Measurements at the National Ecological Observatory Network

    NASA Astrophysics Data System (ADS)

    Goodman, K. J.; Lunch, C. K.; Baxter, C.; Hall, R.; Holtgrieve, G. W.; Roberts, B. J.; Marcarelli, A. M.; Tank, J. L.

    2013-12-01

    Recent advances in dissolved oxygen sensing and modeling have made continuous measurements of whole-stream metabolism relatively easy to make, allowing ecologists to quantify and evaluate stream ecosystem health at expanded temporal and spatial scales. Long-term monitoring of continuous stream metabolism will enable a better understanding of the integrated and complex effects of anthropogenic change (e.g., land-use, climate, atmospheric deposition, invasive species, etc.) on stream ecosystem function. In addition to their value in the particular streams measured, information derived from long-term data will improve the ability to extrapolate from shorter-term data. With the need to better understand drivers and responses of whole-stream metabolism come difficulties in interpreting the results. Long-term trends will encompass physical changes in stream morphology and flow regime (e.g., variable flow conditions and changes in channel structure) combined with changes in biota. Additionally long-term data sets will require an organized database structure, careful quantification of errors and uncertainties, as well as propagation of error as a result of the calculation of metabolism metrics. Parsing of continuous data and the choice of modeling approaches can also have a large influence on results and on error estimation. The two main modeling challenges include 1) obtaining unbiased, low-error daily estimates of gross primary production (GPP) and ecosystem respiration (ER), and 2) interpreting GPP and ER measurements over extended time periods. The National Ecological Observatory Network (NEON), in partnership with academic and government scientists, has begun to tackle several of these challenges as it prepares for the collection and calculation of 30 years of continuous whole-stream metabolism data. NEON is a national-scale research platform that will use consistent procedures and protocols to standardize measurements across the United States, providing long-term, high-quality, open-access data from a connected network to address large-scale change. NEON infrastructure will support 36 aquatic sites across 19 ecoclimatic domains. Sites include core sites, which remain for 30 years, and relocatable sites, which move to capture regional gradients. NEON will measure continuous whole-stream metabolism in conjunction with aquatic, terrestrial and airborne observations, allowing researchers to link stream ecosystem function with landscape and climatic drivers encompassing short to long time periods (i.e., decades).

  3. Medication errors with electronic prescribing (eP): Two views of the same picture

    PubMed Central

    2010-01-01

    Background Quantitative prospective methods are widely used to evaluate the impact of new technologies such as electronic prescribing (eP) on medication errors. However, they are labour-intensive and it is not always feasible to obtain pre-intervention data. Our objective was to compare the eP medication error picture obtained with retrospective quantitative and qualitative methods. Methods The study was carried out at one English district general hospital approximately two years after implementation of an integrated electronic prescribing, administration and records system. Quantitative: A structured retrospective analysis was carried out of clinical records and medication orders for 75 randomly selected patients admitted to three wards (medicine, surgery and paediatrics) six months after eP implementation. Qualitative: Eight doctors, 6 nurses, 8 pharmacy staff and 4 other staff at senior, middle and junior grades, and 19 adult patients on acute surgical and medical wards were interviewed. Staff interviews explored experiences of developing and working with the system; patient interviews focused on experiences of medicine prescribing and administration on the ward. Interview transcripts were searched systematically for accounts of medication incidents. A classification scheme was developed and applied to the errors identified in the records review. Results The two approaches produced similar pictures of the drug use process. Interviews identified types of error identified in the retrospective notes review plus two eP-specific errors which were not detected by record review. Interview data took less time to collect than record review, and provided rich data on the prescribing process, and reasons for delays or non-administration of medicines, including "once only" orders and "as required" medicines. Conclusions The qualitative approach provided more understanding of processes, and some insights into why medication errors can happen. The method is cost-effective and could be used to supplement information from anonymous error reporting schemes. PMID:20497532

  4. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
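
    For readers unfamiliar with how averaging weights are obtained from criteria such as AIC or KIC, a minimal sketch is given below; the criterion values are hypothetical, and the point is simply that a modest gap in the criterion already pushes nearly all weight onto one model, which is the behaviour the study attributes to ignoring error correlation.

        import numpy as np

        def ic_weights(ic_values):
            """Model-averaging weights from information-criterion values (AIC, BIC, KIC, ...):
            w_k is proportional to exp(-0.5 * (IC_k - min IC))."""
            ic = np.asarray(ic_values, dtype=float)
            delta = ic - ic.min()
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Hypothetical criterion values for three alternative conceptual models.
        # A 10-unit gap already drives almost all weight onto the first model.
        print(ic_weights([120.0, 130.0, 131.5]))   # approximately [0.990, 0.007, 0.003]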

  5. Fully Mechanically Controlled Automated Electron Microscopic Tomography

    DOE PAGES

    Liu, Jinxin; Li, Hongchang; Zhang, Lei; ...

    2016-07-11

    Knowledge of the three-dimensional (3D) structure of each individual particle of asymmetric and flexible proteins is essential to understanding those proteins' functions, but their structures are difficult to determine. Electron tomography (ET) provides a tool for imaging a single, unique biological object from a series of tilt angles, but it is challenging to image a single protein for 3D reconstruction because of the imperfect mechanical control of the specimen goniometer under both medium-to-high magnification (approximately 50,000-160,000×) and optimized beam coherence conditions. Here, we report a fully mechanical control method for automating ET data acquisition without using beam tilt/shift processes. This method avoids the accumulated beam tilt/shift that is otherwise used to compensate for mechanical control errors but degrades beam coherence. Our method minimizes the error of the target object center during the tilting process through a closed-loop proportional-integral (PI) control algorithm. Validations by both negative staining (NS) and cryo-electron microscopy (cryo-EM) suggest that this method tracks target proteins as well as other ET methods while maintaining optimized beam coherence conditions for imaging.
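
    A toy sketch of the closed-loop proportional-integral idea follows: a hypothetical one-dimensional "stage" drifts at each tilt step, and the PI loop corrects the measured centring error. The gains, drift and step count are illustrative assumptions, not the authors' goniometer control code.

        class ToyStage:
            """Hypothetical 1-D stage: each tilt step drifts the target off-centre."""
            def __init__(self, drift=0.8):
                self.position = 0.0
                self.drift = drift
            def measure(self):
                return self.position
            def move(self, correction):
                self.position += correction + self.drift  # apply correction, then drift again

        def pi_track(stage, setpoint=0.0, kp=0.6, ki=0.1, steps=50):
            """Closed-loop PI correction: measure the target-centre error at each tilt
            step and command a compensating stage move."""
            integral = 0.0
            for _ in range(steps):
                error = setpoint - stage.measure()
                integral += error
                stage.move(kp * error + ki * integral)

        stage = ToyStage()
        pi_track(stage)
        print("residual centring error:", abs(stage.position))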

  6. Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods

    NASA Astrophysics Data System (ADS)

    Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric

    2018-03-01

    Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure and temperature, were collected. Research on monitoring data analysis has been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize this relationship, this paper presents a super learning based approach combined with feature selection algorithms to predict water main structural behavior in different soil environments. Furthermore, an automatic variable selection method, i.e. the recursive feature elimination algorithm, was used to identify the critical predictors contributing to pipe deformations. To investigate the adaptability of super learning to different predictive models, the super learning based methods were applied to three different datasets. Predictive performance was evaluated by R-squared, root-mean-square error and mean absolute error. Based on this evaluation, the superiority of super learning was validated and demonstrated by accurately predicting three types of pipe deformation. In addition, a comprehensive understanding of the working environment of water mains becomes possible.
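
    A minimal sketch of this kind of pipeline, using scikit-learn with synthetic data in place of the field monitoring data, is shown below; a stacked ensemble stands in for the super learner, and recursive feature elimination selects the predictors. All data, estimators and settings are assumptions for illustration.

        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor, StackingRegressor
        from sklearn.feature_selection import RFE
        from sklearn.linear_model import Ridge
        from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-in for the monitoring data (strain, water content, pressure, ...).
        X, y = make_regression(n_samples=400, n_features=12, n_informative=5,
                               noise=5.0, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Recursive feature elimination keeps the most informative predictors, followed
        # by a stacked ("super learner"-style) ensemble with a linear meta-model.
        model = make_pipeline(
            RFE(estimator=Ridge(), n_features_to_select=5),
            StackingRegressor(
                estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                            ("ridge", Ridge())],
                final_estimator=Ridge(),
            ),
        )
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        print("R2  :", r2_score(y_test, pred))
        print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
        print("MAE :", mean_absolute_error(y_test, pred))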

  7. Accounting for Relatedness in Family Based Genetic Association Studies

    PubMed Central

    McArdle, P.F.; O’Connell, J.R.; Pollin, T.I.; Baumgarten, M.; Shuldiner, A.R.; Peyser, P.A.; Mitchell, B.D.

    2007-01-01

    Objective Assess the differences in point estimates, power and type 1 error rates when accounting for and ignoring family structure in genetic tests of association. Methods We compare by simulation the performance of analytic models using variance components to account for family structure and regression models that ignore relatedness for a range of possible family based study designs (i.e., sib pairs vs. large sibships vs. nuclear families vs. extended families). Results Our analyses indicate that effect size estimates and power are not significantly affected by ignoring family structure. Type 1 error rates increase when family structure is ignored, as density of family structures increases, and as trait heritability increases. For discrete traits with moderate levels of heritability and across many common sampling designs, type 1 error rates rise from a nominal 0.05 to 0.11. Conclusion Ignoring family structure may be useful in screening although it comes at a cost of an increased type 1 error rate, the magnitude of which depends on trait heritability and pedigree configuration. PMID:17570925

  8. Impact of cell size on inventory and mapping errors in a cellular geographic information system

    NASA Technical Reports Server (NTRS)

    Wehde, M. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.
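
    The modelable relationship between cell size and mapping error can be reproduced with a toy experiment: rasterise a simple circular map unit at several cell sizes using a cell-centre assignment rule and compare the rasterised area with the true area. The shape, extent and cell sizes below are illustrative assumptions, not the study's data.

        import numpy as np

        def raster_area_error(cell):
            """Rasterise a circular map unit (radius 10) on a grid of the given cell size,
            assigning each cell by its centre point, and return the relative area error."""
            half = 16.0
            centers = np.arange(-half + cell / 2, half, cell)
            gx, gy = np.meshgrid(centers, centers)
            inside = gx**2 + gy**2 <= 10.0**2
            raster_area = inside.sum() * cell**2
            true_area = np.pi * 10.0**2
            return abs(raster_area - true_area) / true_area

        for cell in (0.25, 0.5, 1.0, 2.0, 4.0):
            print(f"cell = {cell:>4}: relative area error = {raster_area_error(cell):.4f}")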

  9. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection. CRESST Report 822

    ERIC Educational Resources Information Center

    Buschang, Rebecca E.

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  10. A Case Study of Teacher Responses to a Doubling Error and Difficulty in Learning Equivalent Fractions

    ERIC Educational Resources Information Center

    Ding, Meixia; Li, Xiaobao; Capraro, Mary Margaret; Kulm, Gerald

    2012-01-01

    This study qualitatively explored teachers' responses to doubling errors (e.g., 3/4 x 2 = 6/8) that typically reflect students' difficulties in understanding the "rule" for finding equivalent fractions (e.g., 3/4 x 2/2 = 6/8). Although all teachers claimed to teach for understanding in interviews, their responses varied in terms of effectiveness…

  11. Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set

    USGS Publications Warehouse

    Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.

    1996-01-01

    This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides a strong instigation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
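
    For readers who want to see the estimation side concretely, the sketch below implements ordinary kriging of ln(K) with an exponential semivariogram on hypothetical data; it is a bare-bones illustration of one of the estimation methods compared in the study, not the study's own geostatistical workflow or parameters.

        import numpy as np

        def exp_semivariogram(h, sill=1.0, rang=10.0, nugget=0.0):
            """Exponential semivariogram model gamma(h)."""
            return nugget + sill * (1.0 - np.exp(-h / rang))

        def ordinary_krige(xy, z, xy0, **vario):
            """Ordinary kriging estimate at location xy0 from observations (xy, z)."""
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = exp_semivariogram(d, **vario)
            A[:n, n] = A[n, :n] = 1.0                    # Lagrange multiplier row/column
            b = np.append(exp_semivariogram(np.linalg.norm(xy - xy0, axis=1), **vario), 1.0)
            w = np.linalg.solve(A, b)[:n]                # kriging weights
            return w @ z

        # Hypothetical ln(K) observations scattered over a small domain.
        rng = np.random.default_rng(3)
        xy = rng.uniform(0, 50, size=(30, 2))
        lnK = -9.0 + rng.normal(0, 0.5, size=30)
        print("ln(K) estimate at (25, 25):", ordinary_krige(xy, lnK, np.array([25.0, 25.0])))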

  12. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  13. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.

    2004-01-01

    The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

  14. Effects of structural error on the estimates of parameters of dynamical systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.

  15. System for Configuring Modular Telemetry Transponders

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta A. (Inventor); Sims, William Herbert, III (Inventor)

    2014-01-01

    A system for configuring telemetry transponder cards uses a database of error checking protocol data structures, each containing data to implement at least one CCSDS protocol algorithm. Using a user interface, a user selects at least one telemetry specific error checking protocol from the database. A compiler configures an FPGA with the data from the data structures to implement the error checking protocol.
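
    As a loose software analogue of the selection step (not the patented FPGA flow), the sketch below keeps a registry of error-checking routines and returns the one a user picks; CRC-16-CCITT, shown here, is one error-detection code used with CCSDS frames, and the registry name is hypothetical.

        def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
            """Bitwise CRC-16-CCITT (polynomial 0x1021), an error-detection code used with CCSDS frames."""
            crc = init
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
            return crc

        # Hypothetical registry standing in for the database of protocol data structures.
        PROTOCOLS = {"CCSDS CRC-16": crc16_ccitt}

        def configure(selection: str):
            """Return the error-checking routine the user selected from the registry."""
            return PROTOCOLS[selection]

        check = configure("CCSDS CRC-16")
        print(hex(check(b"123456789")))   # 0x29b1, the standard CRC-16/CCITT-FALSE check value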

  16. Optimization of processing parameters of UAV integral structural components based on yield response

    NASA Astrophysics Data System (ADS)

    Chen, Yunsheng

    2018-05-01

    To improve the overall strength of an unmanned aerial vehicle (UAV), the machining parameters of UAV structural components must be optimized, because machining is affected by the initial residual stress in the workpiece and machining errors arise easily. An optimization model for the machining parameters of UAV integral structural components based on yield response is therefore proposed. The finite element method is used to simulate the machining of UAV integral structural components, a prediction model of workpiece surface machining error is established, and the influence of the tool path on the residual stress of the UAV integral structure is studied according to the stress state of the component. The yield response of the time-varying stiffness and the stress evolution mechanism of the UAV integral structure are analyzed. Simulation results show that the method optimizes the machining parameters of UAV integral structural components and improves the precision of UAV milling: machining error is reduced, and deformation prediction and error compensation of UAV integral structural parts are realized, improving the quality of machining.

  17. Targeting Neuroblastoma Cell Surface Proteins: Recommendations for Homology Modeling of hNET, ALK, and TrkB.

    PubMed

    Haddad, Yazan; Heger, Zbyněk; Adam, Vojtech

    2017-01-01

    Targeted therapy is a promising approach for treatment of neuroblastoma, as is evident from the large number of targeting agents employed in clinical practice today. In the absence of known crystal structures, researchers rely on homology modeling to construct template-based theoretical structures for drug design and testing. Here, we discuss three candidate cell surface proteins that are suitable for homology modeling: human norepinephrine transporter (hNET), anaplastic lymphoma kinase (ALK), and neurotrophic tyrosine kinase receptor 2 (NTRK2 or TrkB). When choosing templates, both sequence identity and structure quality are important for homology modeling and pose the first of many challenges in the modeling process. Homology modeling of hNET can be improved using template models of dopamine and serotonin transporters instead of the leucine transporter (LeuT). The extracellular domains of ALK and TrkB are yet to be exploited by homology modeling. There are several idiosyncrasies that require direct attention throughout the process of model construction, evaluation and refinement. Shifts/gaps in the alignment between the template and target, backbone outliers and side-chain rotamer outliers are among the main sources of physical errors in the structures. Low-conserved regions can be refined with loop modeling methods. Residue hydrophobicity, accessibility to bound metals or glycosylation can aid in model refinement. We recommend resolving these idiosyncrasies as part of "good modeling practice" to obtain the highest quality model. Decreasing physical errors in protein structures plays a major role in the development of targeting agents and understanding of chemical interactions at the molecular level.

  18. Comparison between a typical and a simplified model for blast load-induced structural response

    NASA Astrophysics Data System (ADS)

    Abd-Elhamed, A.; Mahmoud, S.

    2017-02-01

    Explosive blasts continue to cause severe damage and casualties in both civil and military environments, so there is a pressing need to understand the behavior of structural elements under such extremely short-duration dynamic loads. Due to the complexity of the typical blast pressure profile model, and in order to reduce modelling and computational effort, the simplified triangular model of the blast load profile is used to analyze structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical profile. The closed-form solution of the equation of motion, with the blast load as a forcing term modelled by either the typical or the simplified profile, has been derived. The two approaches are compared using results from a response analysis of a building structure under an applied blast load, and the error in the response simulated with the simplified model relative to the typical one is computed. In general, both the simplified and typical models can capture the dynamic blast-induced response of building structures. However, the simplified model, despite its simplicity and its use of only the positive phase, shows remarkably different response behavior compared to the typical one; its predictions of the dynamic system response are not satisfactory because of the larger errors obtained relative to the typical model.
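
    The difference between the two load profiles can be illustrated with a toy single-degree-of-freedom calculation; the mass, stiffness, peak force, and durations below are assumptions for demonstration, not values from the paper.

        import numpy as np

        m, k, zeta = 1000.0, 4.0e6, 0.02                    # mass (kg), stiffness (N/m), damping ratio
        c = 2.0 * zeta * np.sqrt(k * m)
        P0, td = 50e3, 0.02                                 # peak force (N), positive-phase duration (s)

        def p_triangle(t):
            """Simplified triangular pulse: positive phase only, no suction."""
            return P0 * (1.0 - t / td) if 0.0 <= t <= td else 0.0

        def p_friedlander(t, b=1.0):
            """Friedlander-type profile: exponential decay with a negative (suction) phase."""
            return P0 * (1.0 - t / td) * np.exp(-b * t / td) if t >= 0.0 else 0.0

        def peak_displacement(load, dt=1e-5, t_end=0.2):
            u, v, peak = 0.0, 0.0, 0.0
            for i in range(int(t_end / dt)):
                a = (load(i * dt) - c * v - k * u) / m      # Newton's second law
                v += a * dt                                 # semi-implicit Euler step
                u += v * dt
                peak = max(peak, abs(u))
            return peak

        print("peak |u|, triangular profile :", peak_displacement(p_triangle))
        print("peak |u|, Friedlander profile:", peak_displacement(p_friedlander))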

  19. Truths, errors, and lies around "reflex sympathetic dystrophy" and "complex regional pain syndrome".

    PubMed

    Ochoa, J L

    1999-10-01

    The shifting paradigm of reflex sympathetic dystrophy-sympathetically maintained pains-complex regional pain syndrome is characterized by vestigial truths and understandable errors, but also unjustifiable lies. It is true that patients with organically based neuropathic pain harbor unquestionable and physiologically demonstrable evidence of nerve fiber dysfunction leading to a predictable clinical profile with stereotyped temporal evolution. In turn, patients with psychogenic pseudoneuropathy, sustained by conversion-somatization-malingering, not only lack physiological evidence of structural nerve fiber disease but display a characteristically atypical, half-subjective, psychophysical sensory-motor profile. The objective vasomotor signs may have any variety of neurogenic, vasogenic, and psychogenic origins. Neurological differential diagnosis of "neuropathic pain" versus pseudoneuropathy is straightforward provided that stringent requirements of neurological semeiology are not bypassed. Embarrassing conceptual errors explain the assumption that there exists a clinically relevant "sympathetically maintained pain" status. Errors include historical misinterpretation of vasomotor signs in symptomatic body parts, and misconstruing symptomatic relief after "diagnostic" sympathetic blocks, due to lack of consideration of the placebo effect which explains the outcome. It is a lie that sympatholysis may specifically cure patients with unqualified "reflex sympathetic dystrophy." This was already stated by the father of sympathectomy, René Leriche, more than half a century ago. As extrapolated from observations in animals with gross experimental nerve injury, adducing hypothetical, untestable, secondary central neuron sensitization to explain psychophysical sensory-motor complaints displayed by patients with blatantly absent nerve fiber injury, is not an error, but a lie. While conceptual errors are not only forgivable, but natural to inexact medical science, lies, particularly when entrepreneurially inspired, are condemnable and call for peer intervention.

  20. Avoiding Substantive Errors in Individualized Education Program Development

    ERIC Educational Resources Information Center

    Yell, Mitchell L.; Katsiyannis, Antonis; Ennis, Robin Parks; Losinski, Mickey; Christle, Christine A.

    2016-01-01

    The purpose of this article is to discuss major substantive errors that school personnel may make when developing students' Individualized Education Programs (IEPs). School IEP team members need to understand the importance of the procedural and substantive requirements of the IEP, have an awareness of the five serious substantive errors that IEP…

  1. Comparing Graphical and Verbal Representations of Measurement Error in Test Score Reports

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Zapata-Rivera, Diego; Hegarty, Mary

    2014-01-01

    Research has shown that many educators do not understand the terminology or displays used in test score reports and that measurement error is a particularly challenging concept. We investigated graphical and verbal methods of representing measurement error associated with individual student scores. We created four alternative score reports, each…

  2. Error Covariance Penalized Regression: A novel multivariate model combining penalized regression with multivariate error structure.

    PubMed

    Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C

    2018-06-29

    A new multivariate regression model, named Error Covariance Penalized Regression (ECPR) is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid and near infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
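
    The following toy sketch conveys the general idea of weighting a penalized least-squares fit by a measurement-error covariance matrix; it is only in the spirit of ECPR, since the published estimator is defined differently, and all values below are simulated assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 40, 10
        X = rng.normal(size=(n, p))
        beta_true = np.zeros(p)
        beta_true[:3] = [1.0, -2.0, 0.5]

        # Correlated (non-iid) measurement errors, loosely mimicking replicate spectra.
        Sigma = 0.1 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0)
        y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Sigma)

        def ecm_weighted_ridge(X, y, Sigma, lam=1.0):
            """Ridge-type fit whitened by the inverse error covariance matrix (ECM)."""
            W = np.linalg.inv(Sigma)
            return np.linalg.solve(X.T @ W @ X + lam * np.eye(X.shape[1]), X.T @ W @ y)

        def plain_ridge(X, y, lam=1.0):
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

        print("ECM-weighted coefficient error:", np.linalg.norm(ecm_weighted_ridge(X, y, Sigma) - beta_true))
        print("plain ridge coefficient error :", np.linalg.norm(plain_ridge(X, y) - beta_true))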

  3. Integrating automated structured analysis and design with Ada programming support environments

    NASA Technical Reports Server (NTRS)

    Hecht, Alan; Simmons, Andy

    1986-01-01

    Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification and derives a plan to decompose the system subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance. It can also promote the creation of reusable modules. Studies have shown that most software errors result from poor system specifications, and that these errors also become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct, and aid in finding obscure coding errors. However, they do not have the capability to detect errors in specifications or to detect poor designs. An automated system for structured analysis and design, TEAMWORK, which can be integrated with an APSE to support software systems development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.

  4. An evaluation of space time cube representation of spatiotemporal patterns.

    PubMed

    Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine

    2009-01-01

    Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.

  5. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems is different from modeling low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a model using SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C based upon isolating the distinct objects that can be identified as the ring, the message, the node, and the set of critical events. The third model further identified the basic network functionality by creating a single object, the node, which includes the set of critical events which occur at the node. The ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further, the models were not built with the intent to compare either structure or language; rather, because of the complexity of the problem and obvious errors in initial results, alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand modeling complexities. Each model is described along with its features and problems. The models are compared and concluding observations and remarks are presented.

  6. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    PubMed

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    PubMed

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes-product shortage, calculation errors, and tubing interconnectivity-emerge and appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.

  8. Prevalence of medication errors in primary health care at Bahrain Defence Force Hospital – prescription-based study

    PubMed Central

    Aljasmi, Fatema; Almalood, Fatema

    2018-01-01

    Background: One of the important activities that physicians – particularly general practitioners – perform is prescribing. It occurs in most health care facilities and especially in primary health care (PHC) settings. Objectives: This study aims to determine what types of prescribing errors are made in PHC at Bahrain Defence Force (BDF) Hospital, and how common they are. Methods: This was a retrospective study of data from PHC at BDF Hospital. The data consisted of 379 prescriptions randomly selected from the pharmacy between March and May 2013, and errors in the prescriptions were classified into five types: major omission, minor omission, commission, integration, and skill-related errors. Results: Of the total prescriptions, 54.4% (N=206) were given to male patients and 45.6% (N=173) to female patients; 24.8% were given to patients under the age of 10 years. On average, there were 2.6 drugs per prescription. In the prescriptions, 8.7% of drugs were prescribed by their generic names, and 28% (N=106) of prescriptions included an antibiotic. Out of the 379 prescriptions, 228 had an error, and 44.3% (N=439) of the 992 prescribed drugs contained errors. The proportions of errors were as follows: 9.9% (N=38) were minor omission errors; 73.6% (N=323) were major omission errors; 9.3% (N=41) were commission errors; and 17.1% (N=75) were skill-related errors. Conclusion: This study provides awareness of the presence of prescription errors and frequency of the different types of errors that exist in this hospital. Understanding the different types of errors could help future studies explore the causes of specific errors and develop interventions to reduce them. Further research should be conducted to understand the causes of these errors and demonstrate whether the introduction of electronic prescriptions has an effect on patient outcomes. PMID:29445304

  9. Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error

    PubMed Central

    Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee

    2017-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146

  10. 3D Complex: A Structural Classification of Protein Complexes

    PubMed Central

    Levy, Emmanuel D; Pereira-Leal, Jose B; Chothia, Cyrus; Teichmann, Sarah A

    2006-01-01

    Most of the proteins in a cell assemble into complexes to carry out their function. It is therefore crucial to understand the physicochemical properties as well as the evolution of interactions between proteins. The Protein Data Bank represents an important source of information for such studies, because more than half of the structures are homo- or heteromeric protein complexes. Here we propose the first hierarchical classification of whole protein complexes of known 3-D structure, based on representing their fundamental structural features as a graph. This classification provides the first overview of all the complexes in the Protein Data Bank and allows nonredundant sets to be derived at different levels of detail. This reveals that between one-half and two-thirds of known structures are multimeric, depending on the level of redundancy accepted. We also analyse the structures in terms of the topological arrangement of their subunits and find that they form a small number of arrangements compared with all theoretically possible ones. This is because most complexes contain four subunits or less, and the large majority are homomeric. In addition, there is a strong tendency for symmetry in complexes, even for heteromeric complexes. Finally, through comparison of Biological Units in the Protein Data Bank with the Protein Quaternary Structure database, we identified many possible errors in quaternary structure assignments. Our classification, available as a database and Web server at http://www.3Dcomplex.org, will be a starting point for future work aimed at understanding the structure and evolution of protein complexes. PMID:17112313

  11. Quantitative analysis of the radiation error for aerial coiled-fiber-optic distributed temperature sensing deployments using reinforcing fabric as support structure

    NASA Astrophysics Data System (ADS)

    Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.

    2017-06-01

    In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to the radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s-1 m-2) and very weak winds (0.1 m s-1). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between reinforcing fabric and fiber cable had a small effect on fiber temperatures (< 0.18 K). Only for locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed. Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature as it minimizes both radiation and conduction errors.
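
    A toy steady-state energy balance shows how a radiation error estimate of this kind can be obtained: absorbed shortwave plus net longwave exchange is balanced against convective exchange with the air, and the fiber-minus-air temperature difference is the error. The albedo, emissivity, sky temperature, and convection law below are assumptions, not the paper's calibrated values.

        import numpy as np
        from scipy.optimize import brentq

        SIGMA = 5.67e-8                                 # Stefan-Boltzmann constant (W m^-2 K^-4)

        def fiber_temperature(T_air, sw_down, wind, albedo=0.3, emissivity=0.9):
            T_sky = T_air - 20.0                        # crude effective sky temperature (assumption)
            h = 10.0 + 8.0 * wind                       # crude convective coefficient (W m^-2 K^-1)
            def balance(T_f):
                shortwave = (1.0 - albedo) * sw_down
                longwave = emissivity * SIGMA * (T_sky**4 - T_f**4)
                convection = h * (T_air - T_f)
                return shortwave + longwave + convection
            return brentq(balance, T_air - 50.0, T_air + 50.0)

        T_air = 293.15
        for sw, wind in [(1000.0, 0.1), (1000.0, 3.0), (200.0, 3.0)]:
            err = fiber_temperature(T_air, sw, wind) - T_air
            print(f"SW = {sw:6.1f} W/m2, wind = {wind:3.1f} m/s -> radiation error = {err:+.2f} K")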

  12. Modeling error in experimental assays using the bootstrap principle: Understanding discrepancies between assays using different dispensing technologies

    PubMed Central

    Hanson, Sonya M.; Ekins, Sean; Chodera, John D.

    2015-01-01

    All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
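
    A minimal simulation in the same spirit, assuming illustrative imprecision and bias values rather than calibrated instrument data, shows how errors compound along a two-fold serial dilution:

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_dilution_series(n_points=8, transfer_cv=0.05, transfer_bias=-0.03,
                                     n_replicates=10000):
            """Simulate a 1:1 serial dilution with proportional random error and systematic bias."""
            conc = np.ones(n_replicates)                # start at the stock concentration
            series = [conc.copy()]
            for _ in range(n_points - 1):
                noise = 1.0 + transfer_bias + transfer_cv * rng.normal(size=n_replicates)
                conc = 0.5 * conc * noise               # each transfer is imperfect
                series.append(conc.copy())
            return np.array(series)

        series = simulate_dilution_series()
        nominal = 0.5 ** np.arange(8)
        for step, (sim, nom) in enumerate(zip(series, nominal)):
            print(f"step {step}: nominal {nom:.4f}  simulated {sim.mean():.4f} "
                  f"(CV {100.0 * sim.std() / sim.mean():.1f}%)")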

  13. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods.

  14. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  15. System design and verification of the precession electron diffraction technique

    NASA Astrophysics Data System (ADS)

    Own, Christopher Su-Yan

    2005-07-01

    Bulk structural crystallography is generally a two-part process wherein a rough starting structure model is first derived, then later refined to give an accurate model of the structure. The critical step is the determination of the initial model. As materials problems decrease in length scale, the electron microscope has proven to be a versatile and effective tool for studying many problems. However, study of complex bulk structures by electron diffraction has been hindered by the problem of dynamical diffraction. This phenomenon makes bulk electron diffraction very sensitive to specimen thickness, and expensive equipment such as aberration-corrected scanning transmission microscopes or elaborate methodology such as high resolution imaging combined with diffraction and simulation are often required to generate good starting structures. The precession electron diffraction technique (PED), which has the ability to significantly reduce dynamical effects in diffraction patterns, has shown promise as being a "philosopher's stone" for bulk electron diffraction. However, a comprehensive understanding of its abilities and limitations is necessary before it can be put into widespread use as a standalone technique. This thesis aims to bridge the gaps in understanding and utilizing precession so that practical application might be realized. Two new PED systems have been built, and optimal operating parameters have been elucidated. The role of lens aberrations is described in detail, and an alignment procedure is given that shows how to circumvent aberration in order to obtain high-quality patterns. Multislice simulation is used for investigating the errors inherent in precession, and is also used as a reference for comparison to simple models and to experimental PED data. General trends over a large sampling of parameter space are determined. In particular, we show that the primary reflection intensity errors occur near the transmitted beam and decay with increasing angle and decreasing specimen thickness. These errors, occurring at the lowest spatial frequencies, fortuitously coincide with reflections for which phases are easiest to determine via imaging methods. A general two-beam dynamical model based upon an existing approximate model is found to be fairly accurate across most experimental conditions, particularly where it is needed for providing a correction to distorted data. Finally, the practical structure solution procedure using PED is demonstrated for several model material systems. Of the experiment parameters investigated, the cone semi-angle is found to be the most important (it should be as large as possible), followed closely by specimen thickness (thinner is better). Assuming good structure projection characteristics in the specimen, the thickness tractable by PED is extended to 40-50 nm without correction, demonstrated for complex oxides. With a forward calculation based upon the two-beam dynamical model (using known structure factors), usable specimen thickness can be extended past 150 nm. For a priori correction, using the squared amplitudes approximates the two-beam model for most thicknesses if the scattering from the structure adheres to pseudo-kinematical behavior. Practically, crystals up to 60 nm in thickness can now be processed by the precession methods developed in this thesis.

  16. Error analysis and correction of discrete solutions from finite element codes

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.

    1984-01-01

    Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
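
    The successive-approximation idea can be illustrated on a deliberately simple model problem; the Jacobi relaxation below smooths a noisy discrete solution of u'' = f toward satisfying the governing equation. This stands in for the paper's shell-theory correction only conceptually and uses an invented one-dimensional example.

        import numpy as np

        n = 51
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        f = np.sin(np.pi * x)
        u_exact = -np.sin(np.pi * x) / np.pi**2          # solution of u'' = f, u(0) = u(1) = 0

        rng = np.random.default_rng(5)
        u = u_exact + 1e-3 * rng.normal(size=n)          # "discrete solution" polluted by error
        u[0] = u[-1] = 0.0

        def residual_norm(u):
            r = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]
            return np.linalg.norm(r)

        for sweep in range(201):
            if sweep % 50 == 0:
                print(f"sweep {sweep:3d}: residual {residual_norm(u):.3e}, "
                      f"max error {np.abs(u - u_exact).max():.3e}")
            u_new = u.copy()
            u_new[1:-1] = 0.5 * (u[:-2] + u[2:] - h**2 * f[1:-1])   # Jacobi update for u'' = f
            u = u_new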

  17. Syntactical and Punctuation Errors: An Analysis of Technical Writing of University Students Science College, Taif University, KSA

    ERIC Educational Resources Information Center

    Alamin, Abdulamir; Ahmed, Sawsan

    2012-01-01

    Analyzing errors committed by second language learners during their first year of study at the University of Taif, can offer insights and knowledge of the learners' difficulties in acquiring technical English communication. With reference to the errors analyzed, the researcher found that the learners' failure to understand basic English grammar…

  18. Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students

    ERIC Educational Resources Information Center

    Malone, Amelia Schneider; Fuchs, Lynn S.

    2015-01-01

    The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of…

  19. Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students

    ERIC Educational Resources Information Center

    Malone, Amelia S.; Fuchs, Lynn S.

    2017-01-01

    The three purposes of this study were to (a) describe fraction ordering errors among at-risk fourth grade students, (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors, and (c) examine the effect of students' ability to explain comparing problems on the probability…

  20. Pre-University Students' Errors in Integration of Rational Functions and Implications for Classroom Teaching

    ERIC Educational Resources Information Center

    Yee, Ng Kin; Lam, Toh Tin

    2008-01-01

    This paper reports on students' errors in performing integration of rational functions, a topic of calculus in the pre-university mathematics classrooms. Generally the errors could be classified as those due to the students' weak algebraic concepts and their lack of understanding of the concept of integration. With the students' inability to link…

  1. Relationship auditing of the FMA ontology

    PubMed Central

    Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai

    2010-01-01

    The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
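
    One of the audit categories, circular relationship assignments, can be detected with an ordinary depth-first search for cycles in the transitive relationship graph; the concept names below are invented for illustration and are not FMA identifiers.

        part_of = {
            "left_ventricle": ["heart"],
            "heart": ["thorax"],
            "thorax": ["body"],
            "body": [],
            "mitral_valve": ["left_ventricle"],
        }
        part_of["heart"].append("mitral_valve")   # injected modeling error creating a cycle

        def find_cycles(graph):
            """Return concepts on at least one circular relationship path (DFS with coloring)."""
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {node: WHITE for node in graph}
            flagged = set()

            def dfs(node, path):
                color[node] = GRAY
                for parent in graph.get(node, []):
                    if color.get(parent, WHITE) == GRAY:          # back-edge closes a cycle
                        start = path.index(parent) if parent in path else len(path)
                        flagged.update(path[start:] + [node, parent])
                    elif color.get(parent, WHITE) == WHITE:
                        dfs(parent, path + [node])
                color[node] = BLACK

            for node in graph:
                if color[node] == WHITE:
                    dfs(node, [])
            return flagged

        print("candidate circular assignments:", find_cycles(part_of))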

  2. Errors Analysis of Students in Mathematics Department to Learn Plane Geometry

    NASA Astrophysics Data System (ADS)

    Mirna, M.

    2018-04-01

    This article describes the results of qualitative descriptive research that reveal the locations, types and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed that students made errors ranging from understanding the concepts and principles of geometry itself to applying them in problem solving. The types of error consist of concept errors, principle errors and operational errors. The results of reflection with four subjects reveal that the causes of the errors are: 1) student learning motivation is very low, 2) in their high school learning experience, geometry has been seen as unimportant, 3) students have very little experience using their own reasoning to solve problems, and 4) students' reasoning ability is still very low.

  3. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
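
    The distinction between the error formulations can be illustrated with a one-tracer, two-source simulation; the source values, proportions, and prey count below are invented for demonstration and do not reproduce the published parameterization.

        import numpy as np

        rng = np.random.default_rng(7)
        mu = np.array([-20.0, -12.0])      # source tracer means (e.g., delta13C)
        sd = np.array([1.0, 1.0])          # source tracer standard deviations
        p = np.array([0.7, 0.3])           # true diet proportions
        n_consumers = 10000

        # "Residual error" view: consumer value = weighted source mean + measurement noise.
        residual_only = p @ mu + rng.normal(0.0, 0.5, n_consumers)

        # "Process error" view: each consumer integrates a finite number of prey items,
        # so its variance shrinks as the number of prey sampled (consumption) grows.
        k = 5
        source_idx = rng.choice(2, size=(n_consumers, k), p=p)
        prey_values = rng.normal(mu[source_idx], sd[source_idx])
        process_only = prey_values.mean(axis=1)

        for name, x in [("residual-only", residual_only), ("process-only", process_only)]:
            print(f"{name:13s}: mean {x.mean():6.2f}, sd {x.std():.2f}")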

  4. Characterization of measurement errors using structure-from-motion and photogrammetry to measure marine habitat structural complexity.

    PubMed

    Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria

    2017-08-01

    Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats will mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). To this end, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.

  5. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time-domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the northeastern China/Korean Peninsula region where average plane-layered structure is well known and relatively laterally homogeneous. Secondly, we will consider the Middle East where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows, we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.
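
    The shared ingredient of both methods, letting a synthetic waveform slide in time to absorb velocity-model error, reduces to a cross-correlation grid search; the waveforms below are synthetic toys, not real seismograms or Green's functions.

        import numpy as np

        rng = np.random.default_rng(11)
        dt = 0.05
        t = np.arange(0.0, 60.0, dt)
        wavelet = lambda t0: np.exp(-((t - t0) / 2.0) ** 2) * np.sin(2.0 * np.pi * 0.2 * (t - t0))

        observed = wavelet(25.0) + 0.05 * rng.normal(size=t.size)   # "data"
        synthetic = wavelet(22.0)                                   # synthetic with model error

        def best_time_shift(obs, syn, dt, max_shift=10.0):
            """Grid-search the lag (seconds) that maximizes the cross-correlation."""
            max_lag = int(max_shift / dt)
            lags = np.arange(-max_lag, max_lag + 1)
            cc = [float(np.dot(obs, np.roll(syn, lag))) for lag in lags]
            return lags[int(np.argmax(cc))] * dt

        print(f"estimated time shift: {best_time_shift(observed, synthetic, dt):+.2f} s "
              "(the injected model-error delay is +3.00 s)")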

  6. On codes with multi-level error-correction capabilities

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desired to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.

  7. GPS measurement error gives rise to spurious 180 degree turning angles and strong directional biases in animal movement data.

    PubMed

    Hurford, Amy

    2009-05-20

    Movement data are frequently collected using Global Positioning System (GPS) receivers, but recorded GPS locations are subject to errors. While past studies have suggested methods to improve location accuracy, mechanistic movement models utilize distributions of turning angles and directional biases and these data present a new challenge in recognizing and reducing the effect of measurement error. I collected locations from a stationary GPS collar, analyzed a probabilistic model and used Monte Carlo simulations to understand how measurement error affects measured turning angles and directional biases. Results from each of the three methods were in complete agreement: measurement error gives rise to a systematic bias where a stationary animal is most likely to be measured as turning 180 degrees or moving towards a fixed point in space. These spurious effects occur in GPS data when the measured distance between locations is <20 meters. Measurement error must be considered as a possible cause of 180 degree turning angles in GPS data. Consequences of failing to account for measurement error are predicting overly tortuous movement, numerous returns to previously visited locations, inaccurately predicting species range, core areas, and the frequency of crossing linear features. By understanding the effect of GPS measurement error, ecologists are able to disregard false signals to more accurately design conservation plans for endangered wildlife.
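
    The stationary-collar effect is easy to reproduce with a short Monte Carlo experiment; the 5 m error standard deviation below is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(3)
        n_fixes = 100000
        fixes = rng.normal(0.0, 5.0, size=(n_fixes, 2))   # noisy fixes around one true location

        steps = np.diff(fixes, axis=0)                    # apparent movement steps
        headings = np.arctan2(steps[:, 1], steps[:, 0])
        turns = np.degrees(np.angle(np.exp(1j * np.diff(headings))))   # wrap to (-180, 180]

        hist, edges = np.histogram(np.abs(turns), bins=[0, 45, 90, 135, 180])
        for lo, hi, count in zip(edges[:-1], edges[1:], hist):
            print(f"|turn| in [{lo:3.0f}, {hi:3.0f}) deg: {100.0 * count / len(turns):5.1f}%")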

  8. Comment on "Density functional theory analysis of structural and electronic properties of orthorhombic perovskite CH3NH3PbI3" by Y. Wang et al., Phys. Chem. Chem. Phys., 2014, 16, 1424-1429.

    PubMed

    Even, J; Pedesseau, L; Katan, C

    2014-05-14

    Yun Wang et al. used density functional theory (DFT) to investigate the orthorhombic phase of CH3NH3PbI3, which has recently shown outstanding properties for photovoltaic applications. Whereas their analysis of ground state properties may represent a valuable contribution to understanding this class of materials, effects of spin-orbit coupling (SOC) cannot be overlooked as was shown in earlier studies. Moreover, their discussion on optical properties may be misleading for non-DFT-experts, and the nice agreement between experimental and calculated band gap is fortuitous, stemming from error cancellations between SOC and many-body effects. Lastly, Bader charges suggest potential problems during crystal structure optimization.

  9. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noise; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors compared with the traditional unscented Kalman filter (UKF).
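
    Only the unscented-transform sampling step that the UPVSF shares with the UKF is sketched below; the variable-structure model-error estimation itself is not reproduced, and the nonlinear map and scaling parameters are illustrative choices.

        import numpy as np

        def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
            """Propagate mean and covariance through f using 2n+1 deterministic sigma points."""
            n = mean.size
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * cov)
            sigma = np.vstack([mean, mean + S.T, mean - S.T])
            wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
            wm[0] = lam / (n + lam)
            wc = wm.copy()
            wc[0] += 1.0 - alpha**2 + beta
            Y = np.array([f(x) for x in sigma])
            y_mean = wm @ Y
            dy = Y - y_mean
            return y_mean, (wc[:, None] * dy).T @ dy

        # Toy nonlinear map standing in for attitude dynamics with a drag-like term.
        f = lambda x: np.array([np.cos(x[0]) * x[1], np.sin(x[0]) * x[1] - 0.1 * x[1]**2])
        y_mean, y_cov = unscented_transform(np.array([0.3, 1.0]), np.diag([0.05, 0.02]), f)
        print("UT mean:", y_mean)
        print("UT covariance:\n", y_cov)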

  10. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders.

    PubMed

    Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise

    2013-05-01

    To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.

  11. Understanding OR nurses' reactions to errors and using this understanding to improve patient safety.

    PubMed

    Taifoori, Ladan; Valiee, Sina

    2015-09-01

    The operating room can be home to many different types of nursing errors due to the invasiveness of OR procedures. The nurses' reactions towards errors can be a key factor in patient safety. This article is based on a study, with the aim of investigating nurses' reactions toward nursing errors and the various contributing and resulting factors, conducted at Kurdistan University of Medical Sciences in Sanandaj, Iran in 2014. The goal of the study was to determine how OR nurses reacted to nursing errors, with the aim of having this information used to improve patient safety. Research was conducted as a cross-sectional descriptive study. The participants were all nurses employed in the operating rooms of the teaching hospitals of Kurdistan University of Medical Sciences, who were selected by a consensus method (170 persons). The information was gathered through questionnaires that focused on demographic information, error definition, reasons for error occurrence, and emotional reactions toward the errors. In total, 153 questionnaires were completed and analyzed with SPSS software version 16.0. "Not following sterile technique" (82.4 percent) was the most reported nursing error, "tiredness" (92.8 percent) was the most reported reason for error occurrence, "being upset at having harmed the patient" (85.6 percent) was the most reported emotional reaction after error occurrence, "decision making for a better approach to tasks the next time" (97.7 percent) was the most common goal, and "paying more attention to details" (98 percent) was the most reported planned strategy for future improved outcomes. While healthcare facilities are focused on planning for the prevention and elimination of errors, it was shown that nurses can also benefit from support after error occurrence. Their reactions, and coping strategies, need guidance and, with both individual and organizational support, can be a factor in improving patient safety.

  12. Attitude error response of structures to actuator/sensor noise

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1991-01-01

    Explicit closed-form formulas are presented for the RMS attitude-error response to sensor and actuator noise for co-located actuators/sensors, as a function of both control-gain parameters and structure parameters. The main point of departure is the use of continuum models; in particular, the anisotropic Timoshenko model is used for lattice trusses typified by the NASA EPS Structure Model and the Evolutionary Model. One conclusion is that the maximum attainable improvement in the attitude error by varying either structure parameters or control gains is 3 dB for the axial and torsion modes, with the bending modes being essentially insensitive. The results are similar whether the Bernoulli model or the anisotropic Timoshenko model is used.

  13. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with an in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
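
    As an illustration of the probabilistic update idea described above, the following minimal sketch samples candidate parameter sets from assumed priors, weights them by how well a toy linear "finite element" response matches measured static deflections, and reports posterior parameter estimates. The response function, the priors, and the noise level are hypothetical; this is not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in "finite element" response: static deflections at 6 locations as a
    # function of 5 stiffness-like parameters (hypothetical, illustration only).
    A = rng.normal(size=(6, 5))

    def predict_deflections(theta):
        return A @ (1.0 / theta)            # softer parameters -> larger deflections

    true_theta = np.array([1.2, 0.8, 1.0, 1.5, 0.9])
    measured = predict_deflections(true_theta) + rng.normal(scale=0.02, size=6)

    # Monte Carlo parameter update: sample candidate parameter sets from assumed
    # priors and weight each one by its likelihood given the measured deflections.
    n_samples = 20000
    prior = rng.lognormal(mean=0.0, sigma=0.3, size=(n_samples, 5))
    resid = np.array([measured - predict_deflections(t) for t in prior])
    log_like = -0.5 * np.sum((resid / 0.02) ** 2, axis=1)
    weights = np.exp(log_like - log_like.max())
    weights /= weights.sum()

    posterior_mean = weights @ prior
    print("posterior mean parameters:", np.round(posterior_mean, 3))
    print("true parameters:          ", true_theta)
    ```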

  14. Seeing Central African forests through their largest trees

    PubMed Central

    Bastin, J.-F.; Barbier, N.; Réjou-Méchain, M.; Fayolle, A.; Gourlet-Fleury, S.; Maniatis, D.; de Haulleville, T.; Baya, F.; Beeckman, H.; Beina, D.; Couteron, P.; Chuyong, G.; Dauby, G.; Doucet, J.-L.; Droissart, V.; Dufrêne, M.; Ewango, C.; Gillet, J.F.; Gonmadje, C.H.; Hart, T.; Kavali, T.; Kenfack, D.; Libalah, M.; Malhi, Y.; Makana, J.-R.; Pélissier, R.; Ploton, P.; Serckx, A.; Sonké, B.; Stevart, T.; Thomas, D.W.; De Cannière, C.; Bogaert, J.

    2015-01-01

    Large tropical trees and a few dominant species were recently identified as the main structuring elements of tropical forests. However, this result has not yet been translated into quantitative approaches, which are essential to understand, predict and monitor forest functions and composition over large, often poorly accessible territories. Here we show that the above-ground biomass (AGB) of the whole forest can be predicted from a few large trees and that the relationship proves strikingly stable across the 175 1-ha plots investigated at 8 sites spanning Central Africa. We designed a generic model that predicts AGB with an error of 14% when based on only 5% of the stems, which points to universality in forest structural properties. For the first time in Africa, we identified dominant species that contribute disproportionately to forest AGB, with 1.5% of recorded species accounting for over 50% of the AGB stock. Consequently, focusing on large trees and dominant species provides precise information on the whole forest stand. This offers new perspectives for understanding the functioning of tropical forests and opens new doors for the development of innovative monitoring strategies. PMID:26279193
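
    A minimal sketch of the kind of plot-level relationship described above: the total above-ground biomass of synthetic 1-ha plots is regressed on the biomass held in the largest 5% of stems. The stem-size distribution and the toy allometry are assumptions for illustration only, not the paper's data or model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic 1-ha plots: lognormal stem diameters, simple toy allometric biomass.
    def simulate_plot(n_stems=400):
        dbh = rng.lognormal(mean=3.0, sigma=0.5, size=n_stems)        # diameter [cm]
        agb = 0.0673 * (0.6 * dbh**2 * 20.0) ** 0.976 / 1000.0        # biomass [Mg]
        return agb

    plots = [simulate_plot() for _ in range(175)]
    total_agb = np.array([p.sum() for p in plots])

    # Predictor: biomass held in the largest 5% of stems of each plot.
    largest_5pct = np.array([np.sort(p)[-max(1, len(p) // 20):].sum() for p in plots])

    # Ordinary least squares fit of total plot AGB on the large-tree AGB.
    slope, intercept = np.polyfit(largest_5pct, total_agb, 1)
    pred = slope * largest_5pct + intercept
    rel_error = np.mean(np.abs(pred - total_agb) / total_agb)
    print(f"mean relative error: {rel_error:.1%}")
    ```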

  15. Seeing Central African forests through their largest trees.

    PubMed

    Bastin, J-F; Barbier, N; Réjou-Méchain, M; Fayolle, A; Gourlet-Fleury, S; Maniatis, D; de Haulleville, T; Baya, F; Beeckman, H; Beina, D; Couteron, P; Chuyong, G; Dauby, G; Doucet, J-L; Droissart, V; Dufrêne, M; Ewango, C; Gillet, J F; Gonmadje, C H; Hart, T; Kavali, T; Kenfack, D; Libalah, M; Malhi, Y; Makana, J-R; Pélissier, R; Ploton, P; Serckx, A; Sonké, B; Stevart, T; Thomas, D W; De Cannière, C; Bogaert, J

    2015-08-17

    Large tropical trees and a few dominant species were recently identified as the main structuring elements of tropical forests. However, this result has not yet been translated into quantitative approaches, which are essential to understand, predict and monitor forest functions and composition over large, often poorly accessible territories. Here we show that the above-ground biomass (AGB) of the whole forest can be predicted from a few large trees and that the relationship proves strikingly stable across the 175 1-ha plots investigated at 8 sites spanning Central Africa. We designed a generic model that predicts AGB with an error of 14% when based on only 5% of the stems, which points to universality in forest structural properties. For the first time in Africa, we identified dominant species that contribute disproportionately to forest AGB, with 1.5% of recorded species accounting for over 50% of the AGB stock. Consequently, focusing on large trees and dominant species provides precise information on the whole forest stand. This offers new perspectives for understanding the functioning of tropical forests and opens new doors for the development of innovative monitoring strategies.

  16. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    ERIC Educational Resources Information Center

    Zhu, Honglin

    2010-01-01

    This article is based on a survey of college students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  17. Drug error in paediatric anaesthesia: current status and where to go now.

    PubMed

    Anderson, Brian J

    2018-06-01

    Medication errors in paediatric anaesthesia and the perioperative setting continue to occur despite widespread recognition of the problem and published advice for reduction of this predicament at international, national, local and individual levels. Current literature was reviewed to ascertain drug error rates and to appraise causes and proposed solutions to reduce these errors. The medication error incidence remains high. There is documentation of reduction through identification of causes with consequent education and application of safety analytics and quality improvement programs in anaesthesia departments. Children remain at higher risk than adults because of additional complexities such as drug dose calculations, increased susceptibility to some adverse effects and changes associated with growth and maturation. Major improvements are best made through institutional system changes rather than a commitment to do better on the part of each practitioner. Medication errors in paediatric anaesthesia represent an important risk to children and most are avoidable. There is now an understanding of the genesis of adverse drug events and this understanding should facilitate the implementation of known effective countermeasures. An institution-wide commitment and strategy are the basis for a worthwhile and sustained improvement in medication safety.

  18. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
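
    The hazard-model details are beyond a short example, but the generic regression calibration step that underlies the approach can be sketched as follows: the error-prone mediator measurement is replaced by its expected "true" value given the observation, with the attenuation factor estimated from replicate measurements. The Gaussian replicate-error model, the variances, and the sample size below are assumptions; the Cox-model step itself is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000

    # True mediator X and two replicate error-prone measurements W1, W2.
    x = rng.normal(loc=0.0, scale=1.0, size=n)
    w1 = x + rng.normal(scale=0.7, size=n)
    w2 = x + rng.normal(scale=0.7, size=n)

    # Estimate the measurement-error variance from replicates and the attenuation
    # (reliability) factor lambda = var(X) / var(W_bar).
    w_bar = (w1 + w2) / 2.0
    sigma_u2 = np.var(w1 - w2, ddof=1) / 2.0       # error variance of a single W
    sigma_w2 = np.var(w_bar, ddof=1)
    sigma_x2 = sigma_w2 - sigma_u2 / 2.0           # var(X) implied by the averaged W
    lam = sigma_x2 / sigma_w2

    # Regression-calibrated mediator: E[X | W_bar] under joint normality.
    x_calibrated = np.mean(w_bar) + lam * (w_bar - np.mean(w_bar))
    print(f"attenuation factor lambda ~ {lam:.2f}")
    print(f"corr(true X, calibrated): {np.corrcoef(x, x_calibrated)[0, 1]:.3f}")
    ```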

  19. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  20. Interprofessional communication and medical error: a reframing of research questions and approaches.

    PubMed

    Varpio, Lara; Hall, Pippa; Lingard, Lorelei; Schryer, Catherine F

    2008-10-01

    Progress toward understanding the links between interprofessional communication and issues of medical error has been slow. Recent research proposes that this delay may result from overlooking the complexities involved in interprofessional care. Medical education initiatives in this domain tend to simplify the complexities of team membership fluidity, rotation, and use of communication tools. A new theoretically informed research approach is required to take into account these complexities. To generate such an approach, we review two theories from the social sciences: Activity Theory and Knotworking. Using these perspectives, we propose that research into interprofessional communication and medical error can develop better understandings of (1) how and why medical errors are generated and (2) how and why gaps in team defenses occur. Such complexities will have to be investigated if students and practicing clinicians are to be adequately prepared to work safely in interprofessional teams.

  1. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    ERIC Educational Resources Information Center

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  2. Error in the determination of the deformed shape of prismatic beams using the double integration of curvature

    NASA Astrophysics Data System (ADS)

    Sigurdardottir, Dorotea H.; Stearns, Jett; Glisic, Branko

    2017-07-01

    The deformed shape is a consequence of loading the structure, and it is defined by the shape of the centroid line of the beam after deformation. The deformed shape is a universal parameter of beam-like structures. It is correlated with the curvature of the cross-section; therefore, any unusual behavior that affects the curvature is reflected through the deformed shape. Excessive deformations cause user discomfort and damage to adjacent structural members, and may ultimately lead to issues in structural safety. However, direct long-term monitoring of the deformed shape in real-life settings is challenging, and an alternative is indirect determination of the deformed shape based on curvature monitoring. The challenge of the latter is an accurate evaluation of the error in the deformed shape determination, which is directly correlated with the number of sensors needed to achieve the desired accuracy. The aim of this paper is to study the deformed shape evaluated by numerical double integration of the monitored curvature distribution along the beam, and to create a method that predicts the associated errors and suggests the number of sensors needed to achieve the desired accuracy. The error due to the accuracy of the curvature measurements is evaluated within the scope of this work. Additionally, the error due to the numerical integration is evaluated; this error depends on the load case (i.e., the shape of the curvature diagram), the magnitude of curvature, and the density of the sensor network. The method is tested on a laboratory specimen and a real structure. In the laboratory setting, the double integration is in excellent agreement with the beam theory solution and falls within the predicted error limits of the numerical integration. Consistent results are also achieved on a real structure, Streicker Bridge on the Princeton University campus.
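
    A minimal sketch of the double-integration procedure studied above, for a simply supported beam under uniform load: the curvature sampled at a small number of "sensor" locations is integrated twice with the trapezoidal rule, the boundary conditions remove the unknown linear term, and the result is compared with the closed-form beam-theory deflection. The span, stiffness, load, and sensor count below are hypothetical.

    ```python
    import numpy as np

    # Simply supported beam under uniform load: curvature kappa(x) = M(x) / (EI).
    L, EI, w = 10.0, 1.0e6, 5.0e3            # span [m], stiffness [N m^2], load [N/m]
    n_sensors = 11                            # evenly spaced curvature "sensors"
    x = np.linspace(0.0, L, n_sensors)
    kappa = w * x * (L - x) / (2.0 * EI)      # from M(x) = w x (L - x) / 2

    # First numerical integration: rotation (slope), trapezoidal rule.
    theta = np.concatenate(([0.0], np.cumsum(np.diff(x) * (kappa[1:] + kappa[:-1]) / 2.0)))
    # Second numerical integration: deflection, up to an unknown linear term.
    v = np.concatenate(([0.0], np.cumsum(np.diff(x) * (theta[1:] + theta[:-1]) / 2.0)))

    # Apply the boundary conditions v(0) = v(L) = 0 by removing the linear term.
    v -= v[0] + (v[-1] - v[0]) * x / L

    # Closed-form deflection magnitude for comparison (max is 5 w L^4 / (384 EI)).
    v_exact = w * x * (L**3 - 2.0 * L * x**2 + x**3) / (24.0 * EI)
    print(f"max |deflection|, double integration: {np.abs(v).max():.4f} m")
    print(f"max deflection, beam theory:          {v_exact.max():.4f} m")
    ```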

  3. Strength conditions for the elastic structures with a stress error

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2017-10-01

    As is known, constraints (strength conditions) are established for the safety factor of elastic structures and design details of a particular class, e.g. aviation structures: the safety factor values of such structures should lie within a given range. These constraints are set for the safety factors corresponding to analytical (exact) solutions of the elasticity problems posed for the structures. Developing analytical solutions for most structures, especially those of irregular shape, is associated with great difficulties. Approximate approaches to solving elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on simplifying hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static strength calculations with a specified small range for the safety factor, the application of such technical (strength-of-materials) solutions is therefore difficult. However, there are numerical methods for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem; the stress error estimate is taken into account in these strength conditions. It is shown that, to fulfill the specified strength conditions for the safety factor corresponding to an exact solution, adjusted strength conditions for the safety factor corresponding to an approximate solution are required. The stress error estimate that forms the basis for the adjusted strength conditions is determined from the specified strength conditions. Adjusted strength conditions expressed in terms of allowable stresses are also suggested. The adjusted strength conditions make it possible to determine the set of approximate solutions for which the specified strength conditions are met. Examples are given of specified strength conditions that can be satisfied using technical (strength-of-materials) solutions, as well as examples of strength conditions to be satisfied using approximate solutions with a small error.
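
    The record does not give the paper's exact formulas, but the basic adjustment it argues for can be sketched as follows. If the approximate stress sigma_h has a relative error of at most delta (|sigma - sigma_h| <= delta * sigma_h), the exact safety factor n = sigma_y / sigma lies in [n_h / (1 + delta), n_h / (1 - delta)], where n_h = sigma_y / sigma_h. Requiring the exact safety factor to stay within the specified range [n1, n2] is then guaranteed by the tightened (adjusted) condition n1 * (1 + delta) <= n_h <= n2 * (1 - delta). The helper below encodes this derivation; it is an illustrative reading of the idea, not necessarily the paper's formulation.

    ```python
    def adjusted_safety_factor_bounds(n1, n2, delta):
        """Tightened bounds on the approximate safety factor n_h = sigma_y / sigma_h.

        If the approximate stress sigma_h has a relative error of at most delta
        (|sigma - sigma_h| <= delta * sigma_h), then requiring
        n1 * (1 + delta) <= n_h <= n2 * (1 - delta) guarantees that the exact
        safety factor n = sigma_y / sigma lies in the specified range [n1, n2].
        """
        if not 0.0 <= delta < 1.0:
            raise ValueError("relative stress error delta must be in [0, 1)")
        lower, upper = n1 * (1.0 + delta), n2 * (1.0 - delta)
        if lower > upper:
            raise ValueError("stress error too large for the specified safety-factor range")
        return lower, upper

    # Specified range [1.5, 3.0] for the exact safety factor, 5% stress error bound.
    print(adjusted_safety_factor_bounds(n1=1.5, n2=3.0, delta=0.05))   # (1.575, 2.85)
    ```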

  4. Understanding reliance on automation: effects of error type, error distribution, age and experience

    PubMed Central

    Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka

    2015-01-01

    An obstacle detection task supported by "imperfect" automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest that the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142

  5. Understanding reliance on automation: effects of error type, error distribution, age and experience.

    PubMed

    Sanchez, Julian; Rogers, Wendy A; Fisk, Arthur D; Rovira, Ericka

    2014-03-01

    An obstacle detection task supported by "imperfect" automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest that the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation.

  6. Future capabilities of CME polarimetric 3D reconstructions with the METIS instrument: A numerical test

    NASA Astrophysics Data System (ADS)

    Pagano, P.; Bemporad, A.; Mackay, D. H.

    2015-10-01

    Context. Understanding the 3D structure of coronal mass ejections (CMEs) is crucial for understanding the nature and origin of solar eruptions. However, owing to the optical thinness of the solar corona we can only observe the line of sight integrated emission. As a consequence the resulting projection effects hide the true 3D structure of CMEs. To derive information on the 3D structure of CMEs from white-light (total and polarized brightness) images, the polarization ratio technique is widely used. The soon-to-be-launched METIS coronagraph on board Solar Orbiter will use this technique to produce new polarimetric images. Aims: This work considers the application of the polarization ratio technique to synthetic CME observations from METIS. In particular we determine the accuracy at which the position of the centre of mass, direction and speed of propagation, and the column density of the CME can be determined along the line of sight. Methods: We perform a 3D MHD simulation of a flux rope ejection where a CME is produced. From the simulation we (i) synthesize the corresponding METIS white-light (total and polarized brightness) images and (ii) apply the polarization ratio technique to these synthesized images and compare the results with the known density distribution from the MHD simulation. In addition, we use recent results that consider how the position of a single blob of plasma is measured depending on its projected position in the plane of the sky. From this we can interpret the results of the polarization ratio technique and give an estimation of the error associated with derived parameters. Results: We find that the polarization ratio technique reproduces with high accuracy the position of the centre of mass along the line of sight. However, some errors are inherently associated with this determination. The polarization ratio technique also allows information to be derived on the real 3D direction of propagation of the CME. The determination of this is of fundamental importance for future space weather forecasting. In addition, we find that the column density derived from white-light images is accurate and we propose an improved technique where the combined use of the polarization ratio technique and white-light images minimizes the error in the estimation of column densities. Moreover, by applying the comparison to a set of snapshots of the simulation we can also assess the errors related to the trajectory and the expansion of the CME. Conclusions: Our method allows us to thoroughly test the performance of the polarization ratio technique and allows a determination of the errors associated with it, which means that it can be used to quantify the results from the analysis of the forthcoming METIS observations in white light (total and polarized brightness). Finally, we describe a satellite observing configuration relative to the Earth that can allow the technique to be efficiently used for space weather predictions. A movie attached to Fig. 15 is available in electronic form at http://www.aanda.org

  7. Variation in printed handoff documents: Results and recommendations from a multicenter needs assessment.

    PubMed

    Rosenbluth, Glenn; Bale, James F; Starmer, Amy J; Spector, Nancy D; Srivastava, Rajendu; West, Daniel C; Sectish, Theodore C; Landrigan, Christopher P

    2015-08-01

    Handoffs of patient care are a leading root cause of medical errors. Standardized techniques exist to minimize miscommunications during verbal handoffs, but studies to guide standardization of printed handoff documents are lacking. To determine whether variability exists in the content of printed handoff documents and to identify key data elements that should be uniformly included in these documents. Pediatric hospitalist services at 9 institutions in the United States and Canada. Sample handoff documents from each institution were reviewed, and structured group interviews were conducted to understand each institution's priorities for written handoffs. An expert panel reviewed all handoff documents and structured group-interview findings, and subsequently made consensus-based recommendations for data elements that were either essential or recommended, including best overall printed handoff practices. Nine sites completed structured group interviews and submitted data. We identified substantial variation in both the structure and content of printed handoff documents. Only 4 of 23 possible data elements (17%) were uniformly present in all sites' handoff documents. The expert panel recommended the following as essential for all printed handoffs: assessment of illness severity, patient summary, action items, situation awareness and contingency plans, allergies, medications, age, weight, date of admission, and patient and hospital service identifiers. Code status and several other elements were also recommended. Wide variation exists in the content of printed handoff documents. Standardizing printed handoff documents has the potential to decrease omissions of key data during patient care transitions, which may decrease the risk of downstream medical errors. © 2015 Society of Hospital Medicine.

  8. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracy at a different structural state, e.g., the damaged structure. Also, the effect of prediction error bias on the uncertainty of the predicted values is studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as at two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process, instead of the commonly used zero-mean error functions, can significantly reduce the prediction uncertainties.
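
    A minimal sketch of the core calibration step described above: a stiffness scale factor of a toy two-degree-of-freedom model is updated so that the model's natural frequencies match "identified" frequencies, after which the bias and standard deviation of the residual error functions are estimated. The model, the identified frequencies, and the noise level are synthetic assumptions, not the building data used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy 2-DOF shear-building model: natural frequencies as a function of a single
    # stiffness scale factor theta (hypothetical, illustration only).
    m = np.eye(2)
    k0 = np.array([[2.0, -1.0], [-1.0, 1.0]]) * 1.0e4

    def model_frequencies(theta):
        w2 = np.linalg.eigvalsh(np.linalg.solve(m, theta * k0))
        return np.sqrt(w2) / (2.0 * np.pi)    # natural frequencies in Hz

    # "Identified" modal frequencies from ambient vibration (synthetic, with noise).
    rng = np.random.default_rng(3)
    f_identified = model_frequencies(0.85) * (1.0 + rng.normal(scale=0.01, size=2))

    # Calibrate theta by minimizing the squared error functions between identified
    # and model frequencies, then characterize the residual error (bias and spread).
    def cost(theta):
        return np.sum((f_identified - model_frequencies(theta)) ** 2)

    theta_hat = minimize_scalar(cost, bounds=(0.1, 2.0), method="bounded").x
    residuals = f_identified - model_frequencies(theta_hat)
    print(f"updated stiffness factor: {theta_hat:.3f}")
    print(f"error bias: {residuals.mean():+.5f} Hz, error std: {residuals.std(ddof=1):.5f} Hz")
    ```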

  9. Understanding reliability and some limitations of the images and spectra reconstructed from a multi-monochromatic x-ray imager

    DOE PAGES

    Nagayama, T.; Mancini, R. C.; Mayes, D.; ...

    2015-11-18

    Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. In this paper, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ~6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ~10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. Finally, it is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.

  10. Effect of Processing Conditions on the Anelastic Behavior of Plasma Sprayed Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaishak

    2011-12-01

    Plasma sprayed ceramic materials contain an assortment of micro-structural defects, including pores, cracks, and interfaces arising from the droplet-based assemblage of the spray deposition technique. The defective architecture of the deposits introduces a novel "anelastic" response in the coatings, comprising their non-linear and hysteretic stress-strain relationship under mechanical loading. It has been established that this anelasticity can be attributed to the relative movement of the embedded defects under varying stresses. While the non-linear response of the coatings arises from the opening/closure of defects, hysteresis is produced by frictional sliding among defect surfaces. Recent studies have indicated that the anelastic behavior of coatings can be a unique descriptor of their mechanical behavior and can be related to the defect configuration. In this dissertation, a multi-variable study employing systematic processing strategies was conducted to augment the understanding of various aspects of the reported anelastic behavior. A bi-layer curvature measurement technique was adapted to measure the anelastic properties of plasma sprayed ceramics. The quantification of anelastic parameters was done using a non-linear model proposed by Nakamura et al. An error analysis was conducted on the technique to determine the available margins for both experimental and computational errors. The error analysis was extended to evaluate its sensitivity to different coating microstructures. For this purpose, three coatings with significantly different microstructures were fabricated by tuning the process parameters. The three coatings were then systematically subjected to different strain ranges in order to understand the origin and evolution of anelasticity in different microstructures. The last segment of this thesis captures the intricacies of the processing conditions and evaluates the correlation between them and the anelastic parameters.

  11. Excitation of transverse dipole and quadrupole modes in a pure ion plasma in a linear Paul trap to study collective processes in intense beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.

    Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore the dependence of the perturbation voltage's effect on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.

  12. Understanding reliability and some limitations of the images and spectra reconstructed from a multi-monochromatic x-ray imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagayama, T.; Mancini, R. C.; Mayes, D.

    2015-11-15

    Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. Here, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ∼6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ∼10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. It is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.

  13. Understanding reliability and some limitations of the images and spectra reconstructed from a multi-monochromatic x-ray imager.

    PubMed

    Nagayama, T; Mancini, R C; Mayes, D; Tommasini, R; Florido, R

    2015-11-01

    Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. Here, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ∼6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ∼10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. It is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.

  14. Increasing Safety of a Robotic System for Inner Ear Surgery Using Probabilistic Error Modeling Near Vital Anatomy

    PubMed Central

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2017-01-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy. PMID:29200595
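
    A minimal sketch of turning a spatially varying error model into a variable-thickness safety margin: at each surface point, the margin is the one-sided Gaussian quantile of the combined error projected onto the local surface normal, chosen so that the probability of violating the structure stays below a target level. The per-point standard deviations and the zero-mean Gaussian simplification are assumptions; the paper's actual error models are richer.

    ```python
    import numpy as np
    from scipy.stats import norm

    def safety_margin(sigma_normal, p_preserve=0.999):
        """Margin thickness keeping P(cut into the structure) below 1 - p_preserve.

        sigma_normal: hypothetical per-point standard deviation (mm) of the combined
        error (registration + image distortion + robot kinematics/deflection)
        projected onto the local surface normal, assumed zero-mean Gaussian.
        """
        z = norm.ppf(p_preserve)                   # one-sided Gaussian quantile
        return z * np.asarray(sigma_normal)

    # Example: points with larger combined error get proportionally larger margins.
    sigma_points = np.array([0.25, 0.40, 0.60, 0.90])      # mm, hypothetical
    print(np.round(safety_margin(sigma_points), 2))         # ~[0.77 1.24 1.85 2.78] mm
    ```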

  15. Increasing safety of a robotic system for inner ear surgery using probabilistic error modeling near vital anatomy

    NASA Astrophysics Data System (ADS)

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2016-03-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy.

  16. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders

    PubMed Central

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2012-01-01

    Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137

  17. Inborn Errors of Fructose Metabolism. What Can We Learn from Them?

    PubMed

    Tran, Christel

    2017-04-03

    Fructose is one of the main sweetening agents in the human diet and its ingestion is increasing globally. Dietary sugar has particular effects on people whose capacity to metabolize fructose is limited. While intolerance to carbohydrates is a frequent finding in children, inborn errors of carbohydrate metabolism are rare conditions. Three inborn errors are known in the pathway of fructose metabolism: (1) essential or benign fructosuria due to fructokinase deficiency; (2) hereditary fructose intolerance; and (3) fructose-1,6-bisphosphatase deficiency. This review focuses on the description of the clinical symptoms and biochemical anomalies in these three inborn errors of metabolism. The potential toxic effects of fructose in healthy humans are also discussed. Studies conducted in patients with inborn errors of fructose metabolism have helped to clarify fructose metabolism and its potential toxicity in healthy humans. The influence of fructose on the glycolytic pathway and on purine catabolism is the cause of hypoglycemia, lactic acidosis, and hyperuricemia. The discovery that fructose-mediated generation of uric acid may have a causal role in diabetes and obesity has provided new insights into the pathogenesis of these frequent diseases.

  18. Inborn Errors of Fructose Metabolism. What Can We Learn from Them?

    PubMed Central

    Tran, Christel

    2017-01-01

    Fructose is one of the main sweetening agents in the human diet and its ingestion is increasing globally. Dietary sugar has particular effects on people whose capacity to metabolize fructose is limited. While intolerance to carbohydrates is a frequent finding in children, inborn errors of carbohydrate metabolism are rare conditions. Three inborn errors are known in the pathway of fructose metabolism: (1) essential or benign fructosuria due to fructokinase deficiency; (2) hereditary fructose intolerance; and (3) fructose-1,6-bisphosphatase deficiency. This review focuses on the description of the clinical symptoms and biochemical anomalies in these three inborn errors of metabolism. The potential toxic effects of fructose in healthy humans are also discussed. Studies conducted in patients with inborn errors of fructose metabolism have helped to clarify fructose metabolism and its potential toxicity in healthy humans. The influence of fructose on the glycolytic pathway and on purine catabolism is the cause of hypoglycemia, lactic acidosis, and hyperuricemia. The discovery that fructose-mediated generation of uric acid may have a causal role in diabetes and obesity has provided new insights into the pathogenesis of these frequent diseases. PMID:28368361

  19. Uncertainty analysis technique for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Widmann, K.; Sorce, C.

    2010-10-15

    The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
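
    A minimal sketch of the Monte Carlo parameter-variation idea described above: each channel's recorded voltage is perturbed by its one-sigma Gaussian error, every perturbed set is pushed through the unfold, and percentiles of the resulting fluxes give the error bars. The unfold here is a stand-in weighted sum, since the actual algorithm is not given in the record, and the channel signals and error levels are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_channels = 18
    voltages = rng.uniform(0.5, 2.0, size=n_channels)       # recorded signals (arb.)
    sigma_rel = np.full(n_channels, 0.08)                    # assumed 8% one-sigma errors

    def unfold_flux(v):
        # Stand-in for the Dante unfold algorithm: a fixed-weight sum over channels.
        weights = np.linspace(1.0, 2.0, n_channels)
        return np.dot(weights, v)

    # One thousand test voltage sets drawn from the per-channel error functions.
    trials = voltages * (1.0 + rng.normal(scale=sigma_rel, size=(1000, n_channels)))
    fluxes = np.array([unfold_flux(v) for v in trials])

    lo, med, hi = np.percentile(fluxes, [16, 50, 84])
    print(f"flux = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})  [arb. units]")
    ```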

  20. Uncertainty Analysis Technique for OMEGA Dante Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M J; Widmann, K; Sorce, C

    2010-05-07

    The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g. hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  1. Fault and Error Latency Under Real Workload: an Experimental Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram

    1986-01-01

    A practical methodology for the study of fault and error latency is demonstrated under a real workload. This is the first study that measures and quantifies the latency under real workload and fills a major gap in the current understanding of workload-failure relationships. The methodology is based on low level data gathered on a VAX 11/780 during the normal workload conditions of the installation. Fault occurrence is simulated on the data, and the error generation and discovery process is reconstructed to determine latency. The analysis proceeds to combine the low level activity data with high level machine performance data to yield a better understanding of the phenomena. A strong relationship exists between latency and workload and that relationship is quantified. The sampling and reconstruction techniques used are also validated. Error latency in the memory where the operating system resides was studied using data on the physical memory access. Fault latency in the paged section of memory was determined using data from physical memory scans. Error latency in the microcontrol store was studied using data on the microcode access and usage.

  2. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast-growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast-growing error modes for mesoscale limited area models. The so-called self-breeding method is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence) to be estimated. With case study experiments, we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
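
    A minimal sketch of the breeding cycle described above, using the Lorenz-63 system as a stand-in for the mesoscale model: perturbed and control states are integrated forward over a short window, the grown differences are orthogonalized in the ensemble subspace and rescaled to a fixed norm, and the cycle is repeated. The model, norm, window length, and rescaling amplitude are all assumptions for illustration.

    ```python
    import numpy as np

    def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        return x + dt * dx                        # forward Euler, toy model only

    def integrate(x, n_steps=50):
        for _ in range(n_steps):
            x = lorenz_step(x)
        return x

    rng = np.random.default_rng(5)
    amp = 1.0e-2                                   # fixed perturbation amplitude (norm)
    control = np.array([1.0, 1.0, 1.0])
    perts = amp * rng.normal(size=(2, 3))          # two bred perturbations

    for cycle in range(20):
        control_fwd = integrate(control)
        grown = np.array([integrate(control + p) - control_fwd for p in perts])
        # Orthogonalize in the ensemble subspace (Gram-Schmidt via QR), then rescale
        # each perturbation back to the fixed initial amplitude.
        q, _ = np.linalg.qr(grown.T)
        perts = amp * q.T[: len(perts)]
        control = control_fwd

    print("bred perturbation directions:\n", np.round(perts / amp, 3))
    ```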

  3. The Language of Scholarship: How to Rapidly Locate and Avoid Common APA Errors.

    PubMed

    Freysteinson, Wyona M; Krepper, Rebecca; Mellott, Susan

    2015-10-01

    This article is relevant for nurses and nursing students who are writing scholarly documents for work, school, or publication and who have a basic understanding of American Psychological Association (APA) style. Common APA errors on the reference list and in citations within the text are reviewed. Methods to quickly find and reduce those errors are shared. Copyright 2015, SLACK Incorporated.

  4. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  5. First structure of full-length mammalian phenylalanine hydroxylase reveals the architecture of an autoinhibited tetramer

    PubMed Central

    Arturo, Emilia C.; Gupta, Kushol; Héroux, Annie; Stith, Linda; Cross, Penelope J.; Parker, Emily J.; Loll, Patrick J.; Jaffe, Eileen K.

    2016-01-01

    Improved understanding of the relationship among structure, dynamics, and function for the enzyme phenylalanine hydroxylase (PAH) can lead to needed new therapies for phenylketonuria, the most common inborn error of amino acid metabolism. PAH is a multidomain homo-multimeric protein whose conformation and multimerization properties respond to allosteric activation by the substrate phenylalanine (Phe); the allosteric regulation is necessary to maintain Phe below neurotoxic levels. A recently introduced model for allosteric regulation of PAH involves major domain motions and architecturally distinct PAH tetramers [Jaffe EK, Stith L, Lawrence SH, Andrake M, Dunbrack RL, Jr (2013) Arch Biochem Biophys 530(2):73–82]. Herein, we present, to our knowledge, the first X-ray crystal structure for a full-length mammalian (rat) PAH in an autoinhibited conformation. Chromatographic isolation of a monodisperse tetrameric PAH, in the absence of Phe, facilitated determination of the 2.9 Å crystal structure. The structure of full-length PAH supersedes a composite homology model that had been used extensively to rationalize phenylketonuria genotype–phenotype relationships. Small-angle X-ray scattering (SAXS) confirms that this tetramer, which dominates in the absence of Phe, is different from a Phe-stabilized allosterically activated PAH tetramer. The lack of structural detail for activated PAH remains a barrier to complete understanding of phenylketonuria genotype–phenotype relationships. Nevertheless, the use of SAXS and X-ray crystallography together to inspect PAH structure provides, to our knowledge, the first complete view of the enzyme in a tetrameric form that was not possible with prior partial crystal structures, and facilitates interpretation of a wealth of biochemical and structural data that was hitherto impossible to evaluate. PMID:26884182

  6. Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis

    NASA Technical Reports Server (NTRS)

    Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.

    2004-01-01

    This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.

  7. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
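
    As a rough illustration of the generalized policy iteration idea with approximation errors (a minimal sketch only, not the authors' algorithm; the toy MDP, the error bound eps, and all names are invented for this example), the loop below alternates a few policy-evaluation sweeps with greedy policy improvement and injects a bounded perturbation into the value estimates, so the iterates settle into a neighborhood of the optimum rather than converging exactly:

      import numpy as np

      rng = np.random.default_rng(0)
      nS, nA, gamma, eps = 4, 2, 0.9, 0.02           # eps: assumed bound on the approximation error
      P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # random transition probabilities P[s, a, s']
      R = rng.random((nS, nA))                       # random rewards R[s, a]

      V = np.zeros(nS)
      for _ in range(200):
          Q = R + gamma * P @ V                      # greedy improvement w.r.t. current value estimate
          pi = Q.argmax(axis=1)
          for _ in range(3):                         # partial (generalized) policy evaluation
              V = R[np.arange(nS), pi] + gamma * P[np.arange(nS), pi] @ V
          V += rng.uniform(-eps, eps, nS)            # bounded approximation error injected each iteration

      print("value estimates within a neighborhood of the optimum:", np.round(V, 3))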

  8. Distinguishing discrete and gradient category structure in language: Insights from verb-particle constructions.

    PubMed

    Brehm, Laurel; Goldrick, Matthew

    2017-10-01

    The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
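
    A hedged sketch of the kind of model comparison named above, comparing a single continuous linear fit (a cline) against a two-segment piecewise fit with a free breakpoint (discrete classes); the transparency scores, error rates, and breakpoint search below are invented for illustration and are not the authors' stimuli or analysis:

      import numpy as np

      def sse_line(x, y):
          """Sum of squared residuals of an ordinary least-squares line."""
          return np.sum((y - np.polyval(np.polyfit(x, y, 1), x)) ** 2)

      def sse_two_segments(x, y):
          """Best piecewise fit: two separate lines split at the best interior breakpoint."""
          order = np.argsort(x)
          x, y = x[order], y[order]
          return min(sse_line(x[:k], y[:k]) + sse_line(x[k:], y[k:])
                     for k in range(3, len(x) - 3))

      rng = np.random.default_rng(2)
      transparency = rng.uniform(0, 1, 60)                      # hypothetical VPC transparency scores
      error_rate = 0.3 * transparency + rng.normal(0, 0.03, 60) # generated here as a smooth cline
      print(sse_line(transparency, error_rate), sse_two_segments(transparency, error_rate))

    If the piecewise model reduces the residual error only marginally once its extra parameters are penalised, the cline account is preferred over discrete classes.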

  9. Template-based modeling and ab initio refinement of protein oligomer structures using GALAXY in CAPRI round 30.

    PubMed

    Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok

    2017-03-01

    Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc.

  10. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  11. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and lower computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
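
    The core detector-placement idea can be paraphrased in a minimal sketch (written in Python for brevity rather than as a compiler transformation, with invented names; this is not the PRESAGE implementation): compute each array index from the previous one by a recurrence, so a corrupted index keeps flowing forward, and place a single check at the loop exit where the carried index can be compared against its closed-form value.

      import numpy as np

      def strided_sum(a, stride=1):
          """Sum every `stride`-th element, detecting a corrupted index at loop exit."""
          total, idx = 0.0, 0
          n_iter = len(a) // stride
          for _ in range(n_iter):
              total += a[idx]
              idx += stride            # recurrence: an earlier corruption of idx propagates forward
          if idx != n_iter * stride:   # loop-exit detector: carried index vs. closed-form value
              raise RuntimeError("structured address corruption detected at loop exit")
          return total

      print(strided_sum(np.arange(10.0), stride=2))   # 0 + 2 + 4 + 6 + 8 = 20.0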

  12. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it allows reducing the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity discrete patterns, and it provides robustness in the case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in critical safety applications, e.g., monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst errors is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
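
    As a hedged illustration of per-pixel correction for a temporally coded sequence (the paper does not specify this particular code; the Hamming(7,4) code and bit layout below are assumptions made for this sketch), four data bits per pixel are projected as seven binary frames, and a single corrupted frame, for example one lost to a burst error, can be corrected at each pixel independently:

      def hamming74_encode(d):
          """d: 4 data bits -> 7-bit codeword laid out as [p1, p2, d1, p3, d2, d3, d4]."""
          d1, d2, d3, d4 = d
          p1 = d1 ^ d2 ^ d4
          p2 = d1 ^ d3 ^ d4
          p3 = d2 ^ d3 ^ d4
          return [p1, p2, d1, p3, d2, d3, d4]

      def hamming74_decode(c):
          """c: 7-bit received word -> corrected 4 data bits (any single flipped bit is fixed)."""
          c = list(c)
          s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1, 3, 5, 7
          s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2, 3, 6, 7
          s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4, 5, 6, 7
          syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the flipped bit (0 = no error)
          if syndrome:
              c[syndrome - 1] ^= 1
          return [c[2], c[4], c[5], c[6]]

      # one pixel's binary grey-level sequence, with frame 4 corrupted in transit
      word = hamming74_encode([1, 0, 1, 1])
      word[3] ^= 1
      assert hamming74_decode(word) == [1, 0, 1, 1]

    Correcting bursts that span several frames would require a stronger code, at the cost of projecting additional patterns.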

  13. An empirical understanding of triple collocation evaluation measure

    NASA Astrophysics Data System (ADS)

    Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang

    2013-04-01

    Triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with an independent error structure that represent an identical phenomenon. The main advantages of the method are that it a) doesn't require a reference dataset that has to be considered to represent the truth, b) limits the effect of random and systematic errors of other two datasets, and c) simultaneously assesses the error of three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM 1) km dataset and highlight problems of the method related to its ability to cancel the effect of error of ancillary datasets. In particular, the goal is to a) investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, b) to investigate trends in Tc related to the choice of a hydrological model, and c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and Error Propagation EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets was tested that is a prerequisite for the triple collocation method. Second, the trends in Tc related to the choice of the third reference dataset and scale were assessed. For this purpose the triple collocation is repeated replacing AWRA-L with two different globally available model reanalysis dataset operating at different spatial resolution (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and the third dataset used in the Tc. The possible reasons include the fact a) that the TC method could not fully function with datasets acting at very different spatial resolutions, or b) that the errors were not fully independent as initially assumed.
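
    A minimal sketch of the covariance-based triple collocation estimate under the standard assumptions (three collocated datasets observing the same signal with mutually independent, zero-mean errors); the synthetic data and names are illustrative only and do not reproduce the ASAR/AMSR-E/model analysis:

      import numpy as np

      def tc_error_variances(x, y, z):
          """Covariance-notation triple collocation: estimated error variance of each dataset."""
          c = np.cov(np.vstack([x, y, z]))
          var_ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
          var_ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
          var_ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
          return var_ex, var_ey, var_ez

      # synthetic check: one common signal plus independent noise of known variance
      rng = np.random.default_rng(0)
      truth = rng.standard_normal(100_000)
      x = truth + 0.1 * rng.standard_normal(truth.size)
      y = truth + 0.2 * rng.standard_normal(truth.size)
      z = truth + 0.3 * rng.standard_normal(truth.size)
      print(tc_error_variances(x, y, z))   # approximately (0.01, 0.04, 0.09)

    When the independence assumption fails, as the results above suggest it can, these estimates absorb the shared error and no longer isolate each dataset's own error variance.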

  14. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-09-03

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments.

  15. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of a structured light three-dimensional (3D) measurement system, they decreased the measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate the error caused by the LCA. Firstly, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the errors, which are the equivalent errors caused by LCA of the camera and projector. Then in measurements, error values of LCA are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Eventually, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in the following experiments. PMID:27598174
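
    A hedged sketch of the look-up-and-interpolate step described above: a pre-computed LCA error map sampled on a regular 3-D grid is tri-linearly interpolated at an arbitrary point inside the measurement volume (the grid layout and names are illustrative, not taken from the paper):

      import numpy as np

      def trilinear(error_map, x, y, z):
          """error_map: values on a regular integer grid; (x, y, z): query point in grid units."""
          x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
          dx, dy, dz = x - x0, y - y0, z - z0
          c = error_map[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]   # the 8 surrounding grid nodes
          c = c[0] * (1 - dx) + c[1] * dx                  # interpolate along x
          c = c[0] * (1 - dy) + c[1] * dy                  # then along y
          return c[0] * (1 - dz) + c[1] * dz               # then along z

      grid = np.fromfunction(lambda i, j, k: 0.01 * (i + j + k), (8, 8, 8))  # toy error map
      print(trilinear(grid, 2.5, 3.25, 4.0))   # exactly 0.01 * (2.5 + 3.25 + 4.0) = 0.0975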

  16. Sim3C: simulation of Hi-C and Meta3C proximity ligation sequencing technologies.

    PubMed

    DeMaere, Matthew Z; Darling, Aaron E

    2018-02-01

    Chromosome conformation capture (3C) and Hi-C DNA sequencing methods have rapidly advanced our understanding of the spatial organization of genomes and metagenomes. Many variants of these protocols have been developed, each with their own strengths. Currently there is no systematic means for simulating sequence data from this family of sequencing protocols, potentially hindering the advancement of algorithms to exploit this new datatype. We describe a computational simulator that, given simple parameters and reference genome sequences, will simulate Hi-C sequencing on those sequences. The simulator models the basic spatial structure in genomes that is commonly observed in Hi-C and 3C datasets, including the distance-decay relationship in proximity ligation, differences in the frequency of interaction within and across chromosomes, and the structure imposed by cells. A means to model the 3D structure of randomly generated topologically associating domains is provided. The simulator considers several sources of error common to 3C and Hi-C library preparation and sequencing methods, including spurious proximity ligation events and sequencing error. We have introduced the first comprehensive simulator for 3C and Hi-C sequencing protocols. We expect the simulator to have use in testing of Hi-C data analysis algorithms, as well as more general value for experimental design, where questions such as the required depth of sequencing, enzyme choice, and other decisions can be made in advance in order to ensure adequate statistical power with respect to experimental hypothesis testing.

  17. Music and Language Syntax Interact in Broca's Area: An fMRI Study.

    PubMed

    Kunert, Richard; Willems, Roel M; Casasanto, Daniel; Patel, Aniruddh D; Hagoort, Peter

    2015-01-01

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca's area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca's area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains-music and language-might draw on the same high level syntactic integration resources in Broca's area.

  18. Music and Language Syntax Interact in Broca’s Area: An fMRI Study

    PubMed Central

    Kunert, Richard; Willems, Roel M.; Casasanto, Daniel; Patel, Aniruddh D.; Hagoort, Peter

    2015-01-01

    Instrumental music and language are both syntactic systems, employing complex, hierarchically-structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area. PMID:26536026

  19. Uncertainties in shoreline position analysis: the role of run-up and tide in a gentle slope beach

    NASA Astrophysics Data System (ADS)

    Manno, Giorgio; Lo Re, Carlo; Ciraolo, Giuseppe

    2017-09-01

    In recent decades in the Mediterranean Sea, high anthropic pressure from increasing economic and touristic development has affected several coastal areas. Today the erosion phenomena threaten human activities and existing structures, and interdisciplinary studies are needed to better understand actual coastal dynamics. Beach evolution analysis can be conducted using GIS methodologies, such as the well-known Digital Shoreline Analysis System (DSAS), in which error assessment based on shoreline positioning plays a significant role. In this study, a new approach is proposed to estimate the positioning errors due to tide and wave run-up influence. To improve the assessment of the wave run-up uncertainty, a spectral numerical model was used to propagate waves from deep to intermediate water and a Boussinesq-type model for intermediate water up to the swash zone. Tide effects on the uncertainty of shoreline position were evaluated using data collected by a nearby tide gauge. The proposed methodology was applied to an unprotected, dissipative Sicilian beach far from harbors and subjected to intense human activities over the last 20 years. The results show wave run-up and tide errors ranging from 0.12 to 4.5 m and from 1.20 to 1.39 m, respectively.

  20. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  1. Development of performance specifications for hybrid modeling of floating wind turbines in wave basin tests

    DOE PAGES

    Hall, Matthew; Goupee, Andrew; Jonkman, Jason

    2017-08-24

    Hybrid modeling (combining physical testing and numerical simulation in real time) opens new opportunities in floating wind turbine research. Wave basin testing is an important validation step for floating support structure design, but the conventional approaches that use physical wind above the basin are limited by scaling problems in the aerodynamics. Applying wind turbine loads with an actuation system that is controlled by a simulation responding to the basin test in real time offers a way to avoid scaling problems and reduce cost barriers for floating wind turbine design validation in realistic coupled wind and wave conditions. This paper demonstrates the development of performance specifications for a system that couples a wave basin experiment with a wind turbine simulation. Two different points for the hybrid coupling are considered: the tower-base interface and the aero-rotor interface (the boundary between aerodynamics and the rotor structure). Analyzing simulations of three floating wind turbine designs across seven load cases reveals the motion and force requirements of the coupling system. By simulating errors in the hybrid coupling system, the sensitivity of the floating wind turbine response to coupling quality can be quantified. The sensitivity results can then be used to determine tolerances for motion tracking errors, force actuation errors, bandwidth limitations, and latency in the hybrid coupling system. These tolerances can guide the design of hybrid coupling systems to achieve desired levels of accuracy. An example demonstrates how the developed methods can be used to generate performance specifications for a system at 1:50 scale. Results show that sensitivities vary significantly between support structure designs and that coupling at the aero-rotor interface has less stringent requirements than those for coupling at the tower base. As a result, the methods and results presented here can inform design of future hybrid coupling systems and enhance understanding of how test results are affected by hybrid coupling quality.

  2. Development of performance specifications for hybrid modeling of floating wind turbines in wave basin tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Matthew; Goupee, Andrew; Jonkman, Jason

    Hybrid modeling (combining physical testing and numerical simulation in real time) opens new opportunities in floating wind turbine research. Wave basin testing is an important validation step for floating support structure design, but the conventional approaches that use physical wind above the basin are limited by scaling problems in the aerodynamics. Applying wind turbine loads with an actuation system that is controlled by a simulation responding to the basin test in real time offers a way to avoid scaling problems and reduce cost barriers for floating wind turbine design validation in realistic coupled wind and wave conditions. This paper demonstrates the development of performance specifications for a system that couples a wave basin experiment with a wind turbine simulation. Two different points for the hybrid coupling are considered: the tower-base interface and the aero-rotor interface (the boundary between aerodynamics and the rotor structure). Analyzing simulations of three floating wind turbine designs across seven load cases reveals the motion and force requirements of the coupling system. By simulating errors in the hybrid coupling system, the sensitivity of the floating wind turbine response to coupling quality can be quantified. The sensitivity results can then be used to determine tolerances for motion tracking errors, force actuation errors, bandwidth limitations, and latency in the hybrid coupling system. These tolerances can guide the design of hybrid coupling systems to achieve desired levels of accuracy. An example demonstrates how the developed methods can be used to generate performance specifications for a system at 1:50 scale. Results show that sensitivities vary significantly between support structure designs and that coupling at the aero-rotor interface has less stringent requirements than those for coupling at the tower base. As a result, the methods and results presented here can inform design of future hybrid coupling systems and enhance understanding of how test results are affected by hybrid coupling quality.

  3. Against Structural Constraints in Subject-Verb Agreement Production

    ERIC Educational Resources Information Center

    Gillespie, Maureen; Pearlmutter, Neal J.

    2013-01-01

    Syntactic structure has been considered an integral component of agreement computation in language production. In agreement error studies, clause-boundedness (Bock & Cutting, 1992) and hierarchical feature-passing (Franck, Vigliocco, & Nicol, 2002) predict that local nouns within clausal modifiers should produce fewer errors than do those within…

  4. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonics function system error correction methods. The accuracy of the neural network method depends mainly on the structure of the neural network. Analysis and simulation show that both the BP neural network and the RBF neural network correction methods achieve high correction accuracy; for small training sample sets, the RBF network is preferable to the BP network when training rate and network scale are taken into account.

  5. Performance of optimum detector structures for noisy intersymbol interference channels

    NASA Technical Reports Server (NTRS)

    Womer, J. D.; Fritchman, B. D.; Kanal, L. N.

    1971-01-01

    The errors that arise in transmitting digital information over radio or wireline systems, caused by additive noise and by successively transmitted signals interfering with one another, are described. The probability of error and the performance of optimum detector structures are examined. A comparative study of the performance of certain detector structures and approximations to them, and of the performance of a transversal equalizer, is included.

  6. Effects of syllable structure in aphasic errors: implications for a new model of speech production.

    PubMed

    Romani, Cristina; Galluzzi, Claudia; Bureca, Ivana; Olson, Andrew

    2011-03-01

    Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and depending on speech register. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations which move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure was only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Reducing major rule violations in commuter rail operations : the role of distraction and attentional errors

    DOT National Transportation Integrated Search

    2012-10-22

    Recent accidents in commuter rail operations and analyses of rule violations have highlighted the need for better understanding of the contributory role of distraction and attentional errors. Distracted driving has thoroughly been studied in rece...

  8. Sources of Error in Substance Use Prevalence Surveys

    PubMed Central

    Johnson, Timothy P.

    2014-01-01

    Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511

  9. Descartes' embodied psychology: Descartes' or Damasio's error?

    PubMed

    Kirkebøen, G

    2001-08-01

    Damasio (1994) claims that Descartes imagined thinking as an activity separate from the body, and that the effort to understand the mind in general biological terms was retarded as a consequence of Descartes' dualism. These claims do not hold; they are "Damasio's error". Descartes never considered what we today call thinking or cognition without taking the body into account. His new dualism required an embodied understanding of cognition. The article gives an historical overview of the development of Descartes' radically new psychology from his account of algebraic reasoning in the early Regulae (1628) to his "neurobiology of rationality" in the late Passions of the soul (1649). The author argues that Descartes' dualism opens the way for mechanistic and mathematical explanations of all kinds of physiological and psychological phenomena, including the kind of phenomena Damasio discusses in Descartes' error. The models of understanding Damasio puts forward can be seen as advanced versions of models which Descartes introduced in the 1640s. A far better title for his book would have been Descartes' vision.

  10. High-precision multiband spectroscopy of ultracold fermions in a nonseparable optical lattice

    NASA Astrophysics Data System (ADS)

    Fläschner, Nick; Tarnowski, Matthias; Rem, Benno S.; Vogel, Dominik; Sengstock, Klaus; Weitenberg, Christof

    2018-05-01

    Spectroscopic tools are fundamental for the understanding of complex quantum systems. Here, we demonstrate high-precision multiband spectroscopy in a graphenelike lattice using ultracold fermionic atoms. From the measured band structure, we characterize the underlying lattice potential with a relative error of 1.2 × 10⁻³. Such a precise characterization of complex lattice potentials is an important step towards precision measurements of quantum many-body systems. Furthermore, we explain the excitation strengths into different bands with a model and experimentally study their dependency on the symmetry of the perturbation operator. This insight suggests the excitation strengths as a suitable observable for interaction effects on the eigenstates.

  11. Factors Associated With Barcode Medication Administration Technology That Contribute to Patient Safety: An Integrative Review.

    PubMed

    Strudwick, Gillian; Reisdorfer, Emilene; Warnock, Caroline; Kalia, Kamini; Sulkers, Heather; Clark, Carrie; Booth, Richard

    In an effort to prevent medication errors, barcode medication administration technology has been implemented in many health care organizations. An integrative review was conducted to understand the effect of barcode medication administration technology on medication errors, and the characteristics of use demonstrated by nurses that contribute to medication safety. Addressing poor system use may support improved patient safety through the reduction of medication administration errors.

  12. Wind Power Forecasting Error Distributions: An International Comparison; Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, B. M.; Lew, D.; Milligan, M.

    2012-09-01

    Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.

  13. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  14. Multiplicity Control in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Cribbie, Robert A.

    2007-01-01

    Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…

  15. Errors of Inference in Structural Equation Modeling

    ERIC Educational Resources Information Center

    McCoach, D. Betsy; Black, Anne C.; O'Connell, Ann A.

    2007-01-01

    Although structural equation modeling (SEM) is one of the most comprehensive and flexible approaches to data analysis currently available, it is nonetheless prone to researcher misuse and misconceptions. This article offers a brief overview of the unique capabilities of SEM and discusses common sources of user error in drawing conclusions from…

  16. The error structure of the SMAP single and dual channel soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    Knowledge of the temporal error structure for remotely-sensed surface soil moisture retrievals can improve our ability to exploit them for hydrology and climate studies. This study employs a triple collocation type analysis to investigate both the total variance and temporal auto-correlation of erro...

  17. Quantifying Adventitious Error in a Covariance Structure as a Random Effect

    PubMed Central

    Wu, Hao; Browne, Michael W.

    2017-01-01

    We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the dispersion parameter of this distribution to be estimated gives a measure of misspecification. Analytical properties of the resultant procedure are investigated and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463

  18. Achievable flatness in a large microwave power transmitting antenna

    NASA Technical Reports Server (NTRS)

    Ried, R. C.

    1980-01-01

    A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line-of-sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variations in material properties, particularly part-to-part variation in the coefficient of thermal expansion, are more significant than the actual values.

  19. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
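
    For reference, one commonly used parameterisation of the modified hyperbolic tangent (mtanh) pedestal fit is reproduced below; conventions for the width and slope parameters vary between machines, so the exact form used for the JET HRTS fitting tool may differ:

      \[
      \operatorname{mtanh}(x, s) \;=\; \frac{(1 + s\,x)\,e^{x} - e^{-x}}{e^{x} + e^{-x}},
      \qquad
      p(r) \;=\; \frac{h}{2}\left[\operatorname{mtanh}\!\left(\frac{r_{\mathrm{ped}} - r}{2\,\delta},\, s\right) + 1\right] + b,
      \]

    where h is the pedestal height, r_ped the pedestal position, \delta a width parameter (the full pedestal width is often quoted as 4\delta), s the core slope, and b the offset; the deconvolved fit convolves this profile with the radial instrument function before comparing it with the measured data.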

  20. Fatigue proofing: The role of protective behaviours in mediating fatigue-related risk in a defence aviation environment.

    PubMed

    Dawson, Drew; Cleggett, Courtney; Thompson, Kirrilly; Thomas, Matthew J W

    2017-02-01

    In the military or emergency services, operational requirements and/or community expectations often preclude formal prescriptive working time arrangements as a practical means of reducing fatigue-related risk. In these environments, workers sometimes employ adaptive or protective behaviours informally to reduce the risk (i.e. likelihood or consequence) associated with a fatigue-related error. These informal behaviours enable employees to reduce risk while continuing to work while fatigued. In this study, we documented the use of informal protective behaviours in a group of defence aviation personnel including flight crews. Semi-structured interviews were conducted to determine whether and which protective behaviours were used to mitigate fatigue-related error. The 18 participants were from aviation-specific trades and included aircrew (pilots and air-crewman) and aviation maintenance personnel (aeronautical engineers and maintenance personnel). Participants identified 147 ways in which they and/or others act to reduce the likelihood or consequence of a fatigue-related error. These formed seven categories of fatigue-reduction strategies. The two most novel categories are discussed in this paper: task-related and behaviour-based strategies. Broadly speaking, these results indicate that fatigued military flight and maintenance crews use protective 'fatigue-proofing' behaviours to reduce the likelihood and/or consequence of fatigue-related error and were aware of the potential benefits. It is also important to note that these behaviours are not typically part of the formal safety management system. Rather, they have evolved spontaneously as part of the culture around protecting team performance under adverse operating conditions. When compared with previous similar studies, aviation personnel were more readily able to understand the idea of fatigue proofing than those from a fire-fighting background. These differences were thought to reflect different cultural attitudes toward error and formal training using principles of Crew Resource Management and Threat and Error Management. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. The Mathematics of Computer Error.

    ERIC Educational Resources Information Center

    Wood, Eric

    1988-01-01

    Why a computer error occurred is considered by analyzing the binary system and decimal fractions. How the computer stores numbers is then described. Knowledge of the mathematics behind computer operation is important if one wishes to understand and have confidence in the results of computer calculations. (MNS)

  2. An empirical assessment of exposure measurement errors and effect attenuation in bi-pollutant epidemiologic models

    EPA Science Inventory

    Using multipollutant models to understand the combined health effects of exposure to multiple pollutants is becoming more common. However, the complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from ...

  3. An empirical assessment of exposure measurement error and effect attenuation in bi-pollutant epidemiologic models

    EPA Science Inventory

    Background: Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates f...

  4. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
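
    A hedged sketch of the scenario described above (illustrative only, not the authors' simulation; the half-life, dosing interval, dose, volume and error magnitude are invented): one-compartment IV-bolus (monoexponential) kinetics dosed more often than once per half-life, with the recorded dosing times perturbed by random error before sparse concentration samples are interpreted:

      import numpy as np

      rng = np.random.default_rng(1)
      ke, V, dose = np.log(2) / 24.0, 50.0, 100.0          # assumed half-life 24 h, dosed every 12 h
      true_times = np.arange(0.0, 120.0, 12.0)             # the doses the patient actually took
      reported_times = true_times + rng.normal(0.0, 1.0, true_times.size)   # 1 h SD reporting error

      def conc(t, dose_times):
          """Superposition of monoexponential (one-compartment IV bolus) decays from each dose."""
          dt = t[:, None] - dose_times[None, :]
          return np.where(dt >= 0.0, (dose / V) * np.exp(-ke * dt), 0.0).sum(axis=1)

      t_obs = np.array([121.0, 126.0, 132.0])              # sparse samples after the last dose
      print(conc(t_obs, true_times))                       # concentrations generated by the true history
      print(conc(t_obs, reported_times))                   # what a fit conditioned on reported times assumes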

  5. Creating a Test Validated Structural Dynamic Finite Element Model of the Multi-Utility Technology Test Bed Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truong, Samson S.

    2014-01-01

    Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, which then propagate into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi-Utility Technology Test Bed (X-56A) aircraft is the flight demonstration of active flutter suppression; therefore, this study identifies the primary and secondary modes for structural model tuning based on the flutter analysis of the X-56A. A ground-vibration-test-validated structural dynamic finite element model of the X-56A is created in this study, and the model is improved using a model tuning tool. Two different weight configurations of the X-56A have been improved in a single optimization run.

  6. Considerations in the design of large space structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.; Macneal, R. H.; Knapp, K.; Macgillivray, C. S.

    1981-01-01

    Several analytical studies of topics relevant to the design of large space structures are presented. Topics covered are: the types and quantitative evaluation of the disturbances to which large Earth-oriented microwave reflectors would be subjected and the resulting attitude errors of such spacecraft; the influence of errors in the structural geometry on the performance of radiofrequency antennas; the effect of creasing on the flatness of a tensioned reflector membrane surface; and an analysis of the statistics of damage to truss-type structures due to meteoroids.

  7. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
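
    A small illustrative calculation of how independent relative error sources combine and how each source's share of the total variance can be reported; the numbers are invented for this sketch and are not the plot-level values from the study:

      import math

      sources = {"measurement": 0.05, "allometric": 0.10, "co-location": 0.15, "temporal": 0.12}
      variances = {name: err ** 2 for name, err in sources.items()}   # relative errors -> variances
      total_var = sum(variances.values())
      print(f"combined relative error: {100 * math.sqrt(total_var):.1f}%")
      for name, var in variances.items():
          print(f"  {name}: {100 * var / total_var:.0f}% of total variance")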

  8. Statistics is not enough: revisiting Ronald A. Fisher's critique (1936) of Mendel's experimental results (1866).

    PubMed

    Pilpel, Avital

    2007-09-01

    This paper is concerned with the role of rational belief change theory in the philosophical understanding of experimental error. Today, philosophers seek insight about error in the investigation of specific experiments, rather than in general theories. Nevertheless, rational belief change theory adds to our understanding of just such cases: R. A. Fisher's criticism of Mendel's experiments being a case in point. After an historical introduction, the main part of this paper investigates Fisher's paper from the point of view of rational belief change theory: what changes of belief about Mendel's experiment does Fisher go through and with what justification. It leads to surprising insights about what Fisher had done right and wrong, and, more generally, about the limits of statistical methods in detecting error.

  9. A new method for the assessment of patient safety competencies during a medical school clerkship using an objective structured clinical examination

    PubMed Central

    Daud-Gallotti, Renata Mahfuz; Morinaga, Christian Valle; Arlindo-Rodrigues, Marcelo; Velasco, Irineu Tadeu; Arruda Martins, Milton; Tiberio, Iolanda Calvo

    2011-01-01

    INTRODUCTION: Patient safety is seldom assessed using objective evaluations during undergraduate medical education. OBJECTIVE: To evaluate the performance of fifth-year medical students using an objective structured clinical examination focused on patient safety after implementation of an interactive program based on adverse events recognition and disclosure. METHODS: In 2007, a patient safety program was implemented in the internal medicine clerkship of our hospital. The program focused on human error theory, epidemiology of incidents, adverse events, and disclosure. Upon completion of the program, students completed an objective structured clinical examination with five stations and standardized patients. One station focused on patient safety issues, including medical error recognition/disclosure, the patient-physician relationship and humanism issues. A standardized checklist was completed by each standardized patient to assess the performance of each student. The student's global performance at each station and performance in the domains of medical error, the patient-physician relationship and humanism were determined. The correlations between the student performances in these three domains were calculated. RESULTS: A total of 95 students participated in the objective structured clinical examination. The mean global score at the patient safety station was 87.59±1.24 points. Students' performance in the medical error domain was significantly lower than their performance on patient-physician relationship and humanistic issues. Less than 60% of students (n = 54) offered the simulated patient an apology after a medical error occurred. A significant correlation was found between scores obtained in the medical error domains and scores related to both the patient-physician relationship and humanistic domains. CONCLUSIONS: An objective structured clinical examination is a useful tool to evaluate patient safety competencies during the medical student clerkship. PMID:21876976

  10. Dealing with AFLP genotyping errors to reveal genetic structure in Plukenetia volubilis (Euphorbiaceae) in the Peruvian Amazon

    PubMed Central

    Vašek, Jakub; Viehmannová, Iva; Ocelák, Martin; Cachique Huansi, Danter; Vejl, Pavel

    2017-01-01

    An analysis of the population structure and genetic diversity for any organism often depends on one or more molecular marker techniques. Nonetheless, these techniques are not absolutely reliable because of various sources of errors arising during the genotyping process. Thus, a complex analysis of genotyping error was carried out with the AFLP method in 169 samples of the oil seed plant Plukenetia volubilis L. from small isolated subpopulations in the Peruvian Amazon. Samples were collected in nine localities from the region of San Martin. Analysis was done in eight datasets with a genotyping error from 0 to 5%. Using eleven primer combinations, 102 to 275 markers were obtained according to the dataset. It was found that it is only possible to obtain the most reliable and robust results through a multiple-level filtering process. Genotyping error and software set up influence both the estimation of population structure and genetic diversity, where in our case population number (K) varied between 2–9 depending on the dataset and statistical method used. Surprisingly, discrepancies in K number were caused more by statistical approaches than by genotyping errors themselves. However, for estimation of genetic diversity, the degree of genotyping error was critical because descriptive parameters (He, FST, PLP 5%) varied substantially (by at least 25%). Due to low gene flow, P. volubilis mostly consists of small isolated subpopulations (ΦPT = 0.252–0.323) with some degree of admixture given by socio-economic connectivity among the sites; a direct link between the genetic and geographic distances was not confirmed. The study illustrates the successful application of AFLP to infer genetic structure in non-model plants. PMID:28910307

  11. Symmetric and Asymmetric Patterns of Attraction Errors in Producing Subject-Predicate Agreement in Hebrew: An Issue of Morphological Structure

    ERIC Educational Resources Information Center

    Deutsch, Avital; Dank, Maya

    2011-01-01

    A common characteristic of subject-predicate agreement errors (usually termed attraction errors) in complex noun phrases is an asymmetrical pattern of error distribution, depending on the inflectional state of the nouns comprising the complex noun phrase. That is, attraction is most likely to occur when the head noun is the morphologically…

  12. Some Deep Structure Manifestations in Second Language Errors of English Voiced and Voiceless "th."

    ERIC Educational Resources Information Center

    Moustafa, Margaret Heiss

    Native speakers of Egyptian Arabic make errors in their pronunciation of English that cannot always be accounted for by a contrastive analysis of Egyptian Arabic and English. This study focuses on three types of errors in the pronunciation of voiced and voiceless "th" made by fluent speakers of English. These errors were noted…

  13. Bayesian operational modal analysis of Jiangyin Yangtze River Bridge

    NASA Astrophysics Data System (ADS)

    Brownjohn, James Mark William; Au, Siu-Kui; Zhu, Yichen; Sun, Zhen; Li, Binbin; Bassitt, James; Hudson, Emma; Sun, Hongbin

    2018-09-01

    Vibration testing of long span bridges is becoming a commissioning requirement, yet such exercises represent the extreme of experimental capability, with challenges for instrumentation (due to frequency range, resolution and km-order separation of sensors) and system identification (because of the extreme low frequencies). The challenge with instrumentation for modal analysis is managing synchronous data acquisition from sensors distributed widely apart inside and outside the structure. The ideal solution is precisely synchronised autonomous recorders that do not need cables, GPS or wireless communication. The challenge with system identification is to maximise the reliability of modal parameters through experimental design and subsequently to identify the parameters in terms of mean values and standard errors. The challenge is particularly severe for modes with the low frequency and damping typical of long span bridges. One solution is to apply 'third generation' operational modal analysis procedures using Bayesian approaches in both the planning and analysis stages. The paper presents an exercise on the Jiangyin Yangtze River Bridge, a suspension bridge with a 1385 m main span. The exercise comprised planning of a test campaign to optimise the reliability of operational modal analysis, the deployment of a set of independent data acquisition units synchronised using precision oven controlled crystal oscillators and the subsequent identification of a set of modal parameters in terms of mean and variance errors. Although the bridge has had structural health monitoring technology installed since it was completed, this was the first full modal survey, aimed at identifying important features of the modal behaviour rather than providing fine resolution of mode shapes through the whole structure. Therefore, measurements were made in only the (south) tower, while torsional behaviour was identified by a single measurement using a pair of recorders across the carriageway. The modal survey revealed a first lateral symmetric mode with natural frequency 0.0536 Hz with standard error ±3.6% and damping ratio 4.4% with standard error ±88%. The first vertical mode is antisymmetric, with frequency 0.11 Hz ± 1.2% and damping ratio 4.9% ± 41%. A significant and novel element of the exercise was planning of the measurement setups and their necessary duration linked to prior estimation of the precision of the frequency and damping estimates. The second novelty is the use of multi-sensor precision synchronised acquisition without an external time reference on a structure of this scale. The challenges of ambient vibration testing and modal identification in a complex environment are addressed, leveraging advances in practical implementation and scientific understanding of the problem.

  14. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while viewing errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  15. Towards Automated Structure-Based NMR Resonance Assignment

    NASA Astrophysics Data System (ADS)

    Jang, Richard; Gao, Xin; Li, Ming

    We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, by fivefold on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.
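
    At its simplest, structure-based resonance assignment can be viewed as matching observed spin systems to residues so that the total mismatch against structure-derived predictions is minimized. The sketch below (Python, synthetic chemical shifts) solves only that unary matching with the Hungarian algorithm; the pairwise spin-pair/residue-pair dependencies that the 0-1 integer program above adds are omitted, so this illustrates the underlying assignment problem rather than the authors' model.

    ```python
    # Minimal sketch: resonance assignment reduced to a unary assignment problem.
    # Costs are hypothetical chemical-shift mismatches between observed spin systems
    # and shifts predicted from a known structure; pairwise terms are omitted.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    n = 8                                                   # residues / spin systems
    predicted = rng.uniform(105.0, 130.0, size=n)           # predicted 15N shifts (ppm)
    observed = predicted[rng.permutation(n)] + rng.normal(0.0, 0.3, size=n)

    cost = np.abs(observed[:, None] - predicted[None, :])   # |observed - predicted|
    spins, residues = linear_sum_assignment(cost)           # Hungarian algorithm

    for s, r in zip(spins, residues):
        print(f"spin system {s} -> residue {r} (shift mismatch = {cost[s, r]:.2f} ppm)")
    ```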

  16. Limited Documentation and Treatment Quality of Glycemic Inpatient Care in Relation to Structural Deficits of Heterogeneous Insulin Charts at a Large University Hospital.

    PubMed

    Kopanz, Julia; Lichtenegger, Katharina M; Sendlhofer, Gerald; Semlitsch, Barbara; Cuder, Gerald; Pak, Andreas; Pieber, Thomas R; Tax, Christa; Brunner, Gernot; Plank, Johannes

    2018-02-09

    Insulin charts represent a key component in the inpatient glycemic management process. The aim was to evaluate the quality of structure, documentation, and treatment of diabetic inpatient care to design a new standardized insulin chart for a large university hospital setting. Historically grown blank insulin charts in use at 39 general wards were collected and evaluated for quality structure features. Documentation and treatment quality were evaluated in a consecutive snapshot audit of filled-in charts. The primary end point was the percentage of charts with any medication error. Overall, 20 different blank insulin charts with variable designs and significant structural deficits were identified. A medication error occurred in 55% of the 102 audited filled-in insulin charts, consisting of prescription and management errors in 48% and 16%, respectively. Charts of insulin-treated patients had more medication errors relative to patients treated with oral medication (P < 0.01). Chart design supported neither clinical authorization of individual insulin prescriptions (10%), nor insulin administration confirmed by nurses' signatures (25%), nor treatment of hypoglycemia (0%), which resulted in reduced documentation and treatment quality in clinical practice (7%, 30%, and 25%, respectively). A multitude of charts with variable design characteristics and structural deficits were in use across the inpatient wards. More than half of the inpatients had a chart displaying a medication error. Lack of structure quality features of the charts had an impact on documentation and treatment quality. Based on identified deficits and international standards, a new insulin chart was developed to overcome these quality hurdles.

  17. BAYESIAN PROTEIN STRUCTURE ALIGNMENT.

    PubMed

    Rodriguez, Abel; Schmidler, Scott C

    The analysis of the three-dimensional structure of proteins is an important topic in molecular biochemistry. Structure plays a critical role in defining the function of proteins and is more strongly conserved than amino acid sequence over evolutionary timescales. A key challenge is the identification and evaluation of structural similarity between proteins; such analysis can aid in understanding the role of newly discovered proteins and help elucidate evolutionary relationships between organisms. Computational biologists have developed many clever algorithmic techniques for comparing protein structures; however, all are based on heuristic optimization criteria, making statistical interpretation somewhat difficult. Here we present a fully probabilistic framework for pairwise structural alignment of proteins. Our approach has several advantages, including the ability to capture alignment uncertainty and to estimate key "gap" parameters which critically affect the quality of the alignment. We show that several existing alignment methods arise as maximum a posteriori estimates under specific choices of prior distributions and error models. Our probabilistic framework is also easily extended to incorporate additional information, which we demonstrate by including primary sequence information to generate simultaneous sequence-structure alignments that can resolve ambiguities obtained using structure alone. This combined model also provides a natural approach for the difficult task of estimating evolutionary distance based on structural alignments. The model is illustrated by comparison with well-established methods on several challenging protein alignment examples.

  18. TU-H-CAMPUS-JeP1-05: Dose Deformation Error Associated with Deformable Image Registration Pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Surucu, M; Woerner, A; Roeske, J

    Purpose: To evaluate errors associated with using different deformable image registration (DIR) pathways to deform dose from planning CT (pCT) to cone-beam CT (CBCT). Methods: Deforming dose is controversial because of the lack of quality assurance tools. We previously proposed a novel metric to evaluate dose deformation error (DDE) by warping dose information using two methods, via dose and contour deformation. First, isodose lines of the pCT were converted into structures and then deformed to the CBCT using an image based deformation map (dose/structure/deform). Alternatively, the dose matrix from the pCT was deformed to CBCT using the same deformation map, and then the same isodose lines of the deformed dose were converted into structures (dose/deform/structure). The doses corresponding to each structure were queried from the deformed dose and full-width-half-maximums were used to evaluate the dose dispersion. The difference between the FWHM of each isodose level structure is defined as the DDE. Three head-and-neck cancer patients were identified. For each patient, two DIRs were performed between the pCT and CBCT, either deforming pCT-to-CBCT or CBCT-to-pCT. We evaluated the errors associated with using either of these pathways to deform dose. A commercially available, Demons based DIR was used for this study, and 10 isodose levels (20% to 105%) were used to evaluate the errors in various dose levels. Results: The prescription dose for all patients was 70 Gy. The mean DDE for CT-to-CBCT deformation was 1.0 Gy (range: 0.3–2.0 Gy) and this was increased to 4.3 Gy (range: 1.5–6.4 Gy) for CBCT-to-CT deformation. The mean increase in DDE between the two deformations was 3.3 Gy (range: 1.0–5.4 Gy). Conclusion: The proposed DDE metric was used to quantitatively estimate dose deformation errors caused by different pathways to perform DIR. Deforming dose using CBCT-to-CT deformation produced greater error than CT-to-CBCT deformation.
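
    As a rough illustration of the FWHM comparison that defines the DDE above, the sketch below (Python, synthetic dose samples; not the authors' implementation) computes the FWHM of the histogram of dose values queried inside one isodose-level structure under two hypothetical deformation pathways and reports their difference.

    ```python
    # Minimal sketch: the DDE compares, for one isodose-level structure, the
    # full-width-half-maximum (FWHM) of the dose values queried inside that
    # structure after two different deformation pathways. The samples below are
    # synthetic stand-ins for values queried from the deformed dose grids.
    import numpy as np

    def fwhm_of_histogram(dose_samples, bins=100):
        """FWHM (in Gy) of the histogram of dose values inside a structure."""
        counts, edges = np.histogram(dose_samples, bins=bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        above = np.where(counts >= counts.max() / 2.0)[0]
        return centers[above[-1]] - centers[above[0]]

    rng = np.random.default_rng(1)
    dose_pathway_a = rng.normal(70.0, 1.0, 5000)   # Gy, e.g. pCT-to-CBCT pathway
    dose_pathway_b = rng.normal(70.0, 2.5, 5000)   # Gy, e.g. CBCT-to-pCT pathway

    dde = abs(fwhm_of_histogram(dose_pathway_a) - fwhm_of_histogram(dose_pathway_b))
    print(f"DDE for this isodose level ~ {dde:.1f} Gy")
    ```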

  19. Weak conservation of structural features in the interfaces of homologous transient protein–protein complexes

    PubMed Central

    Sudha, Govindarajan; Singh, Prashant; Swapna, Lakshmipuram S; Srinivasan, Narayanaswamy

    2015-01-01

    Residue types at the interface of protein–protein complexes (PPCs) are known to be reasonably well conserved. However, we show, using a dataset of known 3-D structures of homologous transient PPCs, that the 3-D location of interfacial residues and their interaction patterns are only moderately and poorly conserved, respectively. Another surprising observation is that a residue at the interface that is conserved is not necessarily in the interface in the homolog. Such differences in homologous complexes are manifested by substitution of the residues that are spatially proximal to the conserved residue and structural differences at the interfaces as well as differences in spatial orientations of the interacting proteins. Conservation of interface location and the interaction pattern at the core of the interfaces is higher than at the periphery of the interface patch. Extents of variability of various structural features reported here for homologous transient PPCs are higher than the variation in homologous permanent homomers. Our findings suggest that straightforward extrapolation of interfacial nature and inter-residue interaction patterns from template to target could lead to serious errors in the modeled complex structure. Understanding the evolution of interfaces provides insights to improve comparative modeling of PPC structures. PMID:26311309

  20. The application of 3D Zernike moments for the description of "model-free" molecular structure, functional motion, and structural reliability.

    PubMed

    Grandison, Scott; Roberts, Carl; Morris, Richard J

    2009-03-01

    Protein structures are not static entities consisting of equally well-determined atomic coordinates. Proteins undergo continuous motion, and as catalytic machines, these movements can be of high relevance for understanding function. In addition to this strong biological motivation for considering shape changes is the necessity to correctly capture different levels of detail and error in protein structures. Some parts of a structural model are often poorly defined, and the atomic displacement parameters provide an excellent means to characterize the confidence in an atom's spatial coordinates. A mathematical framework for studying these shape changes, and handling positional variance is therefore of high importance. We present an approach for capturing various protein structure properties in a concise mathematical framework that allows us to compare features in a highly efficient manner. We demonstrate how three-dimensional Zernike moments can be employed to describe functions, not only on the surface of a protein but throughout the entire molecule. A number of proof-of-principle examples are given which demonstrate how this approach may be used in practice for the representation of movement and uncertainty.

  1. A novel fast and flexible technique of radical kinetic behaviour investigation based on pallet for plasma evaluation structure and numerical analysis

    NASA Astrophysics Data System (ADS)

    Malinowski, Arkadiusz; Takeuchi, Takuya; Chen, Shang; Suzuki, Toshiya; Ishikawa, Kenji; Sekine, Makoto; Hori, Masaru; Lukasiak, Lidia; Jakubowski, Andrzej

    2013-07-01

    This paper describes a new, fast, and case-independent technique for sticking coefficient (SC) estimation based on pallet for plasma evaluation (PAPE) structure and numerical analysis. Our approach does not require complicated structure, apparatus, or time-consuming measurements but offers high reliability of data and high flexibility. Thermal analysis is also possible. This technique has been successfully applied to estimation of the very low SC of hydrogen radicals on chemically amplified ArF 193 nm photoresist (the main goal of this study). The upper bound of our technique has been determined by investigating the SC of fluorine radicals on polysilicon (at elevated temperature). Sources of estimation error and ways to reduce it are also discussed. The results of this study give insight into the process kinetics; they are not only helpful for better process understanding but may additionally serve as parameters in phenomenological model development for predictive modelling of etching for ultimate CMOS topography simulation.

  2. Inner structural vibration isolation method for a single control moment gyroscope

    NASA Astrophysics Data System (ADS)

    Zhang, Jingrui; Guo, Zixi; Zhang, Yao; Tang, Liang; Guan, Xin

    2016-01-01

    Assembly and manufacturing errors of control moment gyros (CMGs) often generate high-frequency vibrations that are detrimental to spacecraft with high-precision pointing requirements. In this paper, design methods for vibration isolation between the CMG and the spacecraft are addressed. As a first step, the dynamic model of the CMG with and without supporting isolation structures is studied and analyzed. Subsequently, a frequency-domain analysis of the CMG with the isolation system is performed and the effectiveness of the designed system is ascertained. Based on the above studies, an adaptive design with appropriate design parameters is carried out. A numerical analysis is also performed to assess the effectiveness of the system, and a comparison is made. The simulation results clearly indicate that when the ideal isolation structure is implemented in the spacecraft, the vibrations generated by the rotor are greatly reduced while the output torque capacity is preserved, meaning that the isolation system does not degrade the performance of attitude control.

  3. A Contribution to the Understanding of the Regional Seismic Structure in the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Di Luccio, F.; Thio, H.; Pino, N.

    2001-12-01

    Regional earthquakes recorded by two digital broadband stations (BGIO and KEG) located in the Eastern Mediterranean have been analyzed in order to study the seismic structure in this region. The area consists of different tectonic provinces, which complicate the modeling of the seismic wave propagation. We have modeled the Pnl arrivals using the FK-integration technique (Saikia, 1994) along different paths at the two stations, at several distances, ranging from 400 to 1500 km. Comparing the synthetics obtained using several models compiled by other authors, we have constructed a velocity model, incorporating information derived from the group velocity distribution, in order to determine the finer structure along the analyzed paths. The model was perturbed by trial and error until a compressional velocity profile was found that reproduces the shape of the observed waveforms. The crustal thickness, upper mantle P-wave velocity and 410-km discontinuity determine the shape of the observed waveform portions.

  4. Teaching Statistics Online Using "Excel"

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  5. An intersecting chord method for minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Zhang, Qing; Liang, Lin; Liu, Dan

    2015-11-01

    As one of the Geometrical Product Specifications that are widely applied in industrial manufacturing and measurement, sphericity error can synthetically scale a 3D structure and reflects the machining quality of a spherical workpiece. Following increasing demands in the high motion performance of spherical parts, sphericity error is becoming an indispensable component in the evaluation of form error. However, the evaluation of sphericity error is still considered to be a complex mathematical issue, and the related research studies on the development of available models are lacking. In this paper, an intersecting chord method is first proposed to solve the minimum circumscribed sphere and maximum inscribed sphere evaluations of sphericity error. This new modelling method leverages chord relationships to replace the characteristic points, thereby significantly reducing the computational complexity and improving the computational efficiency. Using the intersecting chords to generate a virtual centre, the reference sphere in two concentric spheres is simplified as a space intersecting structure. The position of the virtual centre on the space intersecting structure is determined by characteristic chords, which may reduce the deviation between the virtual centre and the centre of the reference sphere. In addition, two experiments are used to verify the effectiveness of the proposed method with real datasets from the Cartesian coordinates. The results indicate that the estimated errors are in perfect agreement with those of the published methods. Meanwhile, the computational efficiency is improved. For the evaluation of the sphericity error, the use of high-performance computing is a remarkable change.
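
    For orientation, the sketch below (Python, synthetic measurement points) evaluates the minimum-circumscribed-sphere sphericity error by a direct numerical search rather than the intersecting chord construction described above; it is a generic baseline for comparison, not the paper's method.

    ```python
    # Numerical baseline (not the intersecting chord method): find the centre that
    # minimises the maximum radial distance to the measured points (minimum
    # circumscribed sphere), then report max-minus-min radial deviation about it.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    # Synthetic CMM-style points on a near-perfect sphere of radius 10 mm.
    theta = rng.uniform(0.0, np.pi, 200)
    phi = rng.uniform(0.0, 2 * np.pi, 200)
    r = 10.0 + rng.normal(0.0, 0.01, 200)                 # ~10 µm form deviation
    pts = np.c_[r * np.sin(theta) * np.cos(phi),
                r * np.sin(theta) * np.sin(phi),
                r * np.cos(theta)]

    def max_radius(center):
        return np.linalg.norm(pts - center, axis=1).max()

    res = minimize(max_radius, x0=pts.mean(axis=0), method="Nelder-Mead")
    radii = np.linalg.norm(pts - res.x, axis=1)
    print(f"MCS sphericity error ~ {(radii.max() - radii.min()) * 1000:.1f} µm")
    ```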

  6. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    PubMed

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and impact of error on these patients. A number of error taxonomies were used to understand the causes of human error and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), the management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence in depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped us understand the common sources of error in the general surgical wards and will inform on-going error reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ting

    Over the last two decades, our understanding of the Milky Way has been improved thanks to large data sets arising from large-area digital sky surveys. The stellar halo is now known to be inhabited by a variety of spatial and kinematic stellar substructures, including stellar streams and stellar clouds, all of which are predicted by hierarchical Lambda Cold Dark Matter models of galaxy formation. In this dissertation, we first present the analysis of spectroscopic observations of individual stars from the two candidate structures discovered using an M-giant catalog from the Two Micron All-Sky Survey. The follow-up observations show that one of the candidates is a genuine structure which might be associated with the Galactic Anticenter Stellar Structure, while the other one is a false detection due to the systematic photometric errors in the survey or dust extinction at low Galactic latitudes. We then present the discovery of an excess of main sequence turn-off stars in the direction of the constellations of Eridanus and Phoenix from the first-year data of the Dark Energy Survey (DES) – a five-year, 5,000 deg2 optical imaging survey in the Southern Hemisphere. The Eridanus-Phoenix (EriPhe) overdensity is centered around l ~ 285° and b ~ -60° and the Poisson significance of the detection is at least 9σ. The EriPhe overdensity has a cloud-like morphology and the extent is at least ~ 4 kpc by ~ 3 kpc in projection, with a heliocentric distance of about d ~ 16 kpc. The EriPhe overdensity is morphologically similar to the previously-discovered Virgo overdensity and Hercules-Aquila cloud. These three overdensities lie along a polar plane separated by ~ 120° and may share a common origin. In addition to the scientific discoveries, we also present work to improve the photometric calibration in DES using auxiliary calibration systems, since photometric errors can cause false detections of halo substructure, as in the first part of this work. We present a detailed description of the two auxiliary calibration systems built at Texas A&M University. We then discuss how the auxiliary systems in DES can be used to improve the photometric calibration of the systematic chromatic errors – source color-dependent systematic errors that are caused by variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput.

  8. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  9. Kinetics, Structure, and Mechanism of 8-Oxo-7,8-dihydro-2′-deoxyguanosine Bypass by Human DNA Polymerase η*♦

    PubMed Central

    Patra, Amritraj; Nagy, Leslie D.; Zhang, Qianqian; Su, Yan; Müller, Livia; Guengerich, F. Peter; Egli, Martin

    2014-01-01

    DNA damage incurred by a multitude of endogenous and exogenous factors constitutes an inevitable challenge for the replication machinery. Cells rely on various mechanisms to either remove lesions or bypass them in a more or less error-prone fashion. The latter pathway involves the Y-family polymerases that catalyze trans-lesion synthesis across sites of damaged DNA. 7,8-Dihydro-8-oxo-2′-deoxyguanosine (8-oxoG) is a major lesion that is a consequence of oxidative stress and is associated with cancer, aging, hepatitis, and infertility. We have used steady-state and transient-state kinetics in conjunction with mass spectrometry to analyze in vitro bypass of 8-oxoG by human DNA polymerase η (hpol η). Unlike the high fidelity polymerases that show preferential insertion of A opposite 8-oxoG, hpol η is capable of bypassing 8-oxoG in a mostly error-free fashion, thus preventing GC→TA transversion mutations. Crystal structures of ternary hpol η-DNA complexes and incoming dCTP, dATP, or dGTP opposite 8-oxoG reveal that an arginine from the finger domain assumes a key role in avoiding formation of the nascent 8-oxoG:A pair. That hpol η discriminates against dATP exclusively at the insertion stage is confirmed by structures of ternary complexes that allow visualization of the extension step. These structures with G:dCTP following either 8-oxoG:C or 8-oxoG:A pairs exhibit virtually identical active site conformations. Our combined data provide a detailed understanding of hpol η bypass of the most common oxidative DNA lesion. PMID:24759104

  10. Kinetics, structure, and mechanism of 8-Oxo-7,8-dihydro-2'-deoxyguanosine bypass by human DNA polymerase η.

    PubMed

    Patra, Amritraj; Nagy, Leslie D; Zhang, Qianqian; Su, Yan; Müller, Livia; Guengerich, F Peter; Egli, Martin

    2014-06-13

    DNA damage incurred by a multitude of endogenous and exogenous factors constitutes an inevitable challenge for the replication machinery. Cells rely on various mechanisms to either remove lesions or bypass them in a more or less error-prone fashion. The latter pathway involves the Y-family polymerases that catalyze trans-lesion synthesis across sites of damaged DNA. 7,8-Dihydro-8-oxo-2'-deoxyguanosine (8-oxoG) is a major lesion that is a consequence of oxidative stress and is associated with cancer, aging, hepatitis, and infertility. We have used steady-state and transient-state kinetics in conjunction with mass spectrometry to analyze in vitro bypass of 8-oxoG by human DNA polymerase η (hpol η). Unlike the high fidelity polymerases that show preferential insertion of A opposite 8-oxoG, hpol η is capable of bypassing 8-oxoG in a mostly error-free fashion, thus preventing GC→TA transversion mutations. Crystal structures of ternary hpol η-DNA complexes and incoming dCTP, dATP, or dGTP opposite 8-oxoG reveal that an arginine from the finger domain assumes a key role in avoiding formation of the nascent 8-oxoG:A pair. That hpol η discriminates against dATP exclusively at the insertion stage is confirmed by structures of ternary complexes that allow visualization of the extension step. These structures with G:dCTP following either 8-oxoG:C or 8-oxoG:A pairs exhibit virtually identical active site conformations. Our combined data provide a detailed understanding of hpol η bypass of the most common oxidative DNA lesion. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. Identifying types and causes of errors in mortality data in a clinical registry using multiple information systems.

    PubMed

    Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette

    2012-01-01

    Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend to thoroughly verify the software that is used in the registration process.

  12. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  13. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  14. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  15. Mistake proofing: changing designs to reduce error

    PubMed Central

    Grout, J R

    2006-01-01

    Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609

  16. Modeling Mediterranean forest structure using airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Bottalico, Francesca; Chirici, Gherardo; Giannini, Raffaello; Mele, Salvatore; Mura, Matteo; Puxeddu, Michele; McRoberts, Ronald E.; Valbuena, Ruben; Travaglini, Davide

    2017-05-01

    The conservation of biological diversity is recognized as a fundamental component of sustainable development, and forests contribute greatly to its preservation. Structural complexity increases the potential biological diversity of a forest by creating multiple niches that can host a wide variety of species. To facilitate greater understanding of the contributions of forest structure to forest biological diversity, we modeled relationships between 14 forest structure variables and airborne laser scanning (ALS) data for two Italian study areas representing two common Mediterranean forests, conifer plantations and coppice oaks subjected to irregular intervals of unplanned and non-standard silvicultural interventions. The objectives were twofold: (i) to compare model prediction accuracies when using two types of ALS metrics, echo-based metrics and canopy height model (CHM)-based metrics, and (ii) to construct inferences in the form of confidence intervals for large area structural complexity parameters. Our results showed that the effects of the two study areas on accuracies were greater than the effects of the two types of ALS metrics. In particular, accuracies were lower for the study area that was more complex in terms of species composition and forest structure. However, accuracies achieved using the echo-based metrics were only slightly greater than when using the CHM-based metrics, thus demonstrating that both options yield reliable and comparable results. Accuracies were greatest for dominant height (Hd) (R2 = 0.91; RMSE% = 8.2%) and mean height weighted by basal area (R2 = 0.83; RMSE% = 10.5%) when using the echo-based metrics, 99th percentile of the echo height distribution and interquantile distance. For the forested area, the generalized regression (GREG) estimate of mean Hd was similar to the simple random sampling (SRS) estimate, 15.5 m for GREG and 16.2 m for SRS. Further, the GREG estimator with a standard error of 0.10 m was considerably more precise than the SRS estimator with a standard error of 0.69 m.
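
    The gain reported for the GREG estimator can be illustrated with a toy model-assisted calculation. The sketch below (Python, entirely synthetic numbers) contrasts a simple-random-sampling mean with a GREG-style estimate that adds the mean sample residual to the population mean of ALS-based predictions; it illustrates the estimator form only, not the study's data or models.

    ```python
    # Toy model-assisted (GREG) vs simple-random-sampling (SRS) estimate of mean
    # dominant height. ALS predictions are assumed available for every population
    # unit; "field" heights are observed only on the sample. All values synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 10_000                                        # population grid cells
    als_pred = rng.normal(15.5, 3.0, N)               # ALS model predictions (m)
    true_h = als_pred + rng.normal(0.0, 1.0, N)       # "true" heights (m)

    sample = rng.choice(N, size=60, replace=False)    # field plots
    srs_mean = true_h[sample].mean()

    # GREG: population mean of predictions + mean residual observed on the sample.
    greg_mean = als_pred.mean() + (true_h[sample] - als_pred[sample]).mean()

    print(f"SRS estimate:  {srs_mean:.2f} m")
    print(f"GREG estimate: {greg_mean:.2f} m")
    ```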

  17. Insufficient Hartree–Fock Exchange in Hybrid DFT Functionals Produces Bent Alkynyl Radical Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyeyemi, Victor B.; Keith, John A.; Pavone, Michele

    2012-01-11

    Density functional theory (DFT) is often used to determine the electronic and geometric structures of molecules. While studying alkynyl radicals, we discovered that DFT exchange-correlation (XC) functionals containing less than ~22% Hartree–Fock (HF) exchange led to qualitatively different structures than those predicted from ab initio HF and post-HF calculations or DFT XCs containing 25% or more HF exchange. We attribute this discrepancy to rehybridization at the radical center due to electron delocalization across the triple bonds of the alkynyl groups, which itself is an artifact of self-interaction and delocalization errors. Inclusion of sufficient exact exchange reduces these errors and suppresses this erroneous delocalization; we find that a threshold amount is needed for accurate structure determinations. Finally, below this threshold, significant errors in predicted alkyne thermochemistry emerge as a consequence.

  18. The rate of cis-trans conformation errors is increasing in low-resolution crystal structures.

    PubMed

    Croll, Tristan Ian

    2015-03-01

    Cis-peptide bonds (with the exception of X-Pro) are exceedingly rare in native protein structures, yet a check for these is not currently included in the standard workflow for some common crystallography packages nor in the automated quality checks that are applied during submission to the Protein Data Bank. This appears to be leading to a growing rate of inclusion of spurious cis-peptide bonds in low-resolution structures both in absolute terms and as a fraction of solved residues. Most concerningly, it is possible for structures to contain very large numbers (>1%) of spurious cis-peptide bonds while still achieving excellent quality reports from MolProbity, leading to concerns that ignoring such errors is allowing software to overfit maps without producing telltale errors in, for example, the Ramachandran plot.
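
    A check of the kind this abstract argues should be routine is straightforward to script. The sketch below (Python, hypothetical backbone coordinates) computes the peptide-bond omega dihedral from the CA(i), C(i), N(i+1), CA(i+1) atoms and flags the bond as cis when omega is near 0°; it is a generic illustration, not part of any particular crystallography package.

    ```python
    # Minimal sketch of a cis-peptide check: compute the omega dihedral and flag
    # the bond as cis when |omega| is well below 90 degrees. Coordinates below are
    # hypothetical and roughly cis.
    import numpy as np

    def dihedral_deg(p0, p1, p2, p3):
        b0, b1, b2 = p0 - p1, p2 - p1, p3 - p2
        b1 = b1 / np.linalg.norm(b1)
        v = b0 - np.dot(b0, b1) * b1        # component of b0 normal to b1
        w = b2 - np.dot(b2, b1) * b1        # component of b2 normal to b1
        return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

    ca_i = np.array([0.0, 0.0, 0.0])
    c_i  = np.array([1.5, 0.0, 0.0])
    n_j  = np.array([2.2, 1.1, 0.0])
    ca_j = np.array([1.6, 2.4, 0.1])

    omega = dihedral_deg(ca_i, c_i, n_j, ca_j)
    print(f"omega = {omega:.1f} deg ->", "cis" if abs(omega) < 90 else "trans")
    ```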

  19. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying uncertainty in streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform the BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and including output error in the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.

  20. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  1. Study of correlations from Ab-Initio Simulations of Liquid Water

    NASA Astrophysics Data System (ADS)

    Soto, Adrian; Fernandez-Serra, Marivi; Lu, Deyu; Yoo, Shinjae

    An accurate understanding of the dynamics and the structure of H2O molecules in the liquid phase is of extreme importance both from a fundamental and from a practical standpoint. Despite the successes of Molecular Dynamics (MD) with Density Functional Theory (DFT), liquid water remains an extremely difficult material to simulate accurately and efficiently because of the fine balance between the covalent O-H bond, the hydrogen bond, and the attractive van der Waals forces. Small errors in those produce dramatic changes in the macroscopic properties of the liquid or in its structural properties. Different density functionals produce answers that differ by as much as 35% in ambient conditions, with none producing quantitative results in agreement with experiment at different mass densities. In order to understand these differences we perform an exhaustive scanning of the geometrical coordinates of MD simulations and study their statistical correlations with the simulation output quantities using advanced correlation analyses and machine learning techniques. This work was partially supported by DOE Award No. DE-FG02-09ER16052, by DOE Early Career Award No. DE-SC0003871, by BNL LDRD 16-039 project and BNL Contract No. DE-SC0012704.

  2. Study of correlations from Ab-Initio Simulations of Liquid Water

    NASA Astrophysics Data System (ADS)

    Soto, Adrian; Fernandez-Serra, Marivi; Lu, Deyu; Yoo, Shinjae

    An accurate understanding of the dynamics and the structure of H2O molecules in the liquid phase is of extreme importance both from a fundamental and from a practical standpoint. Despite the successes of Molecular Dynamics (MD) with Density Functional Theory (DFT), liquid water remains an extremely difficult material to simulate accurately and efficiently because of the fine balance between the covalent O-H bond, the hydrogen bond, and the attractive van der Waals forces. Small errors in those produce dramatic changes in the macroscopic properties of the liquid or in its structural properties. Different density functionals produce answers that differ by as much as 35% in ambient conditions, with none producing quantitative results in agreement with experiment at different mass densities [J. Chem Phys. 139, 194502(2013)]. In order to understand these differences we perform an exhaustive scanning of the geometrical coordinates of MD simulations and study their statistical correlations with the simulation output quantities using advanced correlation analyses and machine learning techniques. This work was partially supported by DOE Award No. DE-FG02-09ER16052, by DOE Early Career Award No. DE-SC0003871, by BNL LDRD 16-039 project and BNL Contract No. DE-SC0012704.

  3. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

    Chapter 8: Background error correlation modeling with diffusion operators. …a structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of…

  4. Medium-range Performance of the Global NWP Model

    NASA Astrophysics Data System (ADS)

    Kim, J.; Jang, T.; Kim, J.; Kim, Y.

    2017-12-01

    The medium-range performance of the global numerical weather prediction (NWP) model in the Korea Meteorological Administration (KMA) is investigated. The performance is based on the prediction of the extratropical circulation. The mean square error is expressed as the sum of the spatial variance of the discrepancy between forecasts and observations and the square of the mean error (ME). Thus, it is important to investigate the ME effect in order to understand the model performance. The ME can be expressed as the forecast departure from the real climatology minus the observed anomaly. It is found that the global model suffers from a severe systematic ME in medium-range forecasts. The systematic ME is dominant in the entire troposphere in all months. Such ME can explain at most 25% of the root mean square error. We also compare the extratropical ME distribution with that from other NWP centers. NWP models exhibit similar spatial ME structures to one another. It is found that the spatial ME pattern is highly correlated with that of the anomaly, implying that the ME varies with the seasons. For example, the correlation coefficient between ME and anomaly ranges from -0.51 to -0.85 by month. The pattern of the extratropical circulation also has a high correlation with the anomaly. The global model has trouble faithfully simulating extratropical cyclones and blocking in the medium-range forecast. In particular, the model finds it hard to simulate anomalous events in medium-range forecasts. If an anomalous period is chosen for a test-bed experiment, a large error due to the anomaly will result.
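
    The decomposition stated above can be checked numerically. The sketch below (Python, synthetic forecast and observation fields) verifies that the mean square error equals the spatial variance of the forecast-observation discrepancy plus the square of the mean error, and reports the systematic share of the total error.

    ```python
    # Sketch of the MSE decomposition: MSE = var(forecast - obs) + ME^2.
    # The fields below are synthetic stand-ins, with a deliberate systematic ME.
    import numpy as np

    rng = np.random.default_rng(4)
    obs = rng.normal(0.0, 1.0, (90, 180))                  # observed field
    fcst = obs + 0.4 + rng.normal(0.0, 0.5, obs.shape)     # forecast with a bias

    diff = fcst - obs
    mse = np.mean(diff**2)
    me = diff.mean()
    print(f"MSE          = {mse:.3f}")
    print(f"var + ME^2   = {diff.var() + me**2:.3f}")      # matches MSE
    print(f"ME^2 / MSE   = {me**2 / mse:.0%} of the error is systematic")
    ```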

  5. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data.

    PubMed

    Lowther, Andrew D; Lydersen, Christian; Fedak, Mike A; Lovell, Phil; Kovacs, Kit M

    2015-01-01

    Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, data are prone to varying amounts of spatial error. The recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSEs of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.
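
    The RMSE figures quoted above are root-mean-square great-circle distances between model-predicted locations and the matched GPS fixes. The sketch below (Python, made-up coordinates) shows that calculation with a haversine distance; it is a generic illustration rather than the paper's processing chain.

    ```python
    # RMSE between a predicted track and matched GPS fixes via haversine distance.
    # Coordinates are hypothetical; real data would be time-matched beforehand.
    import numpy as np

    def haversine_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * r_earth * np.arcsin(np.sqrt(a))

    gps_lat, gps_lon = np.array([69.0, 69.1, 69.2]), np.array([18.0, 18.1, 18.2])
    ssm_lat, ssm_lon = gps_lat + 0.02, gps_lon - 0.03      # predicted track

    d = haversine_km(gps_lat, gps_lon, ssm_lat, ssm_lon)
    print(f"RMSE = {np.sqrt(np.mean(d**2)):.2f} km")
    ```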

  6. Teaching Mistakes or Teachable Moments?

    ERIC Educational Resources Information Center

    Mueller, Mary; Yankelewitz, Dina

    2014-01-01

    Gain a new perspective on the sharing of erroneous solutions in classroom discussions. Based on their research in grades four and six, the authors reveal how student-to-student correction of errors promotes mathematical reasoning and understanding. Tips for teachers include strategies for using students' errors to encourage reasoning during…

  7. Calculus Instructors' Responses to Prior Knowledge Errors

    ERIC Educational Resources Information Center

    Talley, Jana Renee

    2009-01-01

    This study investigates the responses to prior knowledge errors that Calculus I instructors make when assessing students. Prior knowledge is operationalized as any skill or understanding that a student needs to successfully navigate through a Calculus I course. A two part qualitative study consisting of student exams and instructor interviews was…

  8. Error sources in passive and active microwave satellite soil moisture over Australia

    USDA-ARS?s Scientific Manuscript database

    Development of a long-term climate record of soil moisture (SM) involves combining historic and present satellite-retrieved SM data sets. This in turn requires a consistent characterization and deep understanding of the systematic differences and errors in the individual data sets, which vary due to...

  9. Rewriting evolution--"been there, done that".

    PubMed

    Penny, David

    2013-01-01

    A recent paper by a science journalist in Nature shows major errors in understanding phylogenies, in this case of placental mammals. The underlying unrooted tree is probably correct, but the placement of the root just reflects a well-known error from the acceleration in the rate of evolution among some myomorph rodents.

  10. Observed Human Errors in Interpreting 3D visualizations: implications for Teaching Students how to Comprehend Geological Block Diagrams

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Pirl, E.; Chiang, J.; Tremaine, M.

    2009-12-01

    Block diagrams are commonly used to communicate three dimensional geological structures and other phenomena relevant to geological science (e.g., water bodies in the ocean). However, several recent studies have suggested that these 3D visualizations create difficulties for individuals with low to moderate spatial abilities. We have therefore initiated a series of studies to understand what it is about the 3D structures that makes them so difficult for some people and also to determine if we can improve people’s understanding of these structures through web-based training not related to geology or other underlying information. Our first study examined what mistakes subjects made in a set of 3D block diagrams designed to represent progressively more difficult internal structures. Each block was shown bisected by a plane either perpendicular or at an angle to the block sides. Five subjects with low to medium spatial ability were asked to draw the features that would appear on the bisecting plane. They were asked to talk aloud as they solved the problem. Each session was videotaped. Using the time it took subjects to solve the problems, the subject verbalizations of their problem solving and the drawings that were found to be in error, we have been able to find common patterns in the difficulties the subjects had with the diagrams. We have used these patterns to generate a set of strategies the subjects used in solving the problems. From these strategies, we are developing methods of teaching. A problem found in earlier work on geology structures, that of subjects failing to recognize the 2D representation of the block as 3D and drawing the cross-section as a combined version of the visible faces of the object, was not observed in our study. We attribute this to our experiment introduction, suggesting that even this simple training needs to be carried out with students encountering 3D block diagrams. Other problems subjects had included difficulties in perceptually recognizing variations in layer thicknesses, difficulties in recognizing an internal structure from the visible cues on the block walls, difficulties in mentally constructing objects and intersections that were not perpendicular, and difficulties in keeping track of the number of folds of a layer, and thus, the number of intersections of the layer with the bisecting plane. All of these problems suggest that web-based games giving mass practice with these variations in block diagram representations are likely to give any person appropriate skills in their interpretation. The time to complete the drawings and the errors in the drawings were also correlated with quantifiable properties of the diagrams, e.g., number of layers, number of folds in the layers, angle of bisection of the plane, etc. These will be used in further research to organize the training from easy to hard problems following what is known already about mass practice and developing abstracted skill sets. The plan is to also make the training adaptive, that is, to provide practice in those areas where an individual user is having the most problems.

  11. Classical simulation of quantum error correction in a Fibonacci anyon code

    NASA Astrophysics Data System (ADS)

    Burton, Simon; Brell, Courtney G.; Flammia, Steven T.

    2017-02-01

    Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 × 128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.

  12. Selection of noisy measurement locations for error reduction in static parameter identification

    NASA Astrophysics Data System (ADS)

    Sanayei, Masoud; Onipede, Oladipo; Babu, Suresh R.

    1992-09-01

    An incomplete set of noisy static force and displacement measurements is used for parameter identification of structures at the element level. Measurement location and the level of accuracy in the measured data can drastically affect the accuracy of the identified parameters. A heuristic method is presented to select a limited number of degrees of freedom (DOF) to perform a successful parameter identification and to reduce the impact of measurement errors on the identified parameters. This pretest simulation uses an error sensitivity analysis to determine the effect of measurement errors on the parameter estimates. The selected DOF can be used for nondestructive testing and health monitoring of structures. Two numerical examples, one for a truss and one for a frame, are presented to demonstrate that using the measurements at the selected subset of DOF can limit the error in the parameter estimates.
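
    As a concrete illustration of the pretest idea described above, the following minimal sketch (not the authors' algorithm; the 6-spring chain, grouped stiffness parameters, and load case are illustrative assumptions) ranks candidate measurement DOF subsets of a simple spring chain by how strongly measurement noise would be amplified into the identified stiffness parameters.

      import numpy as np
      from itertools import combinations

      def stiffness_matrix(theta):
          # six springs in a fixed-free chain; each parameter governs two springs
          k = np.repeat(theta, 2)
          n = len(k)
          K = np.zeros((n, n))
          for i, ki in enumerate(k):
              K[i, i] += ki
              if i > 0:
                  K[i - 1, i - 1] += ki
                  K[i - 1, i] -= ki
                  K[i, i - 1] -= ki
          return K

      def displacements(theta, f):
          return np.linalg.solve(stiffness_matrix(theta), f)

      theta_true = np.array([2.0, 1.0, 0.5])   # grouped stiffness parameters
      f = np.zeros(6); f[-1] = 1.0             # unit static load at the free end

      # sensitivity of each displacement DOF to each parameter (finite differences)
      eps = 1e-6
      u0 = displacements(theta_true, f)
      S = np.column_stack([
          (displacements(theta_true + eps * np.eye(3)[j], f) - u0) / eps
          for j in range(3)
      ])

      # score every 3-DOF measurement subset: under i.i.d. measurement noise the
      # least-squares parameter-error covariance is proportional to
      # pinv(S_sub) @ pinv(S_sub).T, so its trace is a noise-amplification measure
      scores = {}
      for subset in combinations(range(6), 3):
          Ssub = S[list(subset), :]
          if np.linalg.matrix_rank(Ssub) < 3:
              continue                          # subset cannot identify all parameters
          P = np.linalg.pinv(Ssub)
          scores[subset] = np.trace(P @ P.T)

      for subset, score in sorted(scores.items(), key=lambda x: x[1])[:5]:
          print(f"measure DOFs {subset}: noise amplification ~ {score:.1f}")

    Subsets with the smallest scores are the ones at which measurement errors propagate least into the parameter estimates, which is the spirit of selecting DOFs before a nondestructive test.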

  13. High-precision pointing with the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Poppi, Sergio; Pernechele, Claudio; Pisanu, Tonino; Morsiani, Marco

    2010-07-01

    We present the systems designed to measure and minimize the pointing errors of the Sardinia Radio Telescope: an optical telescope to measure errors due to deformations of the mechanical structure, and a laser system for errors due to subreflector displacement. We show the results of tests carried out on the Medicina 32-meter VLBI radio telescope. The measurements demonstrate that we can measure the pointing errors of the mechanical structure with an accuracy of about 1 arcsec. Moreover, we show the technique used to measure the displacement of the subreflector, located in the SRT 22 meters from the main mirror, to within ±0.1 mm of its optimal position. These measurements show that we can obtain the accuracy needed to also correct the non-repeatable pointing errors, which arise on time scales varying from seconds to minutes.

  14. Reanalysis, compatibility and correlation in analysis of modified antenna structures

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1989-01-01

    A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.

  15. A system dynamics approach to analyze laboratory test errors.

    PubMed

    Guo, Shijing; Roudsari, Abdul; Garcez, Artur d'Avila

    2015-01-01

    Although much research has been carried out to analyze laboratory test errors over the last decade, a systemic view is still lacking, especially one that traces errors through the testing process and evaluates potential interventions. This study applies system dynamics modeling to laboratory errors in order to trace error flows and to simulate system behavior as internal variable values are changed. Changes in the variables may reflect a change in demand or a proposed intervention. A review of the literature on laboratory test errors was conducted and used as the main data source for the system dynamics model. Three "what if" scenarios were selected for testing the model. System behaviors were observed and compared under the different scenarios over a period of time. The results suggest that system dynamics modeling can help in understanding laboratory errors, observing model behavior, and providing risk-free simulation experiments for evaluating possible strategies.
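
    To make the stock-and-flow idea concrete, here is a minimal sketch in the spirit of system dynamics (not the authors' model): errors enter an "in-process" stock at a rate proportional to test volume, and flow out either by being detected and recovered or by escaping into reported results; a "what if" scenario simply changes the detection rate. All rates are illustrative assumptions.

      import numpy as np

      def simulate(weeks=52, tests_per_week=10_000, error_rate=0.02,
                   detection_fraction=0.6, release_fraction=0.3, dt=1.0):
          in_process = 0.0        # stock: errors still inside the testing process
          recovered = 0.0         # stock: errors caught before results are reported
          reached_patient = 0.0   # stock: errors that escape into reported results
          for _ in np.arange(0.0, weeks, dt):
              generated = tests_per_week * error_rate * dt      # inflow of new errors
              detected = in_process * detection_fraction * dt   # outflow: caught and recovered
              escaped = in_process * release_fraction * dt      # outflow: reported to clinicians
              in_process += generated - detected - escaped
              recovered += detected
              reached_patient += escaped
          return recovered, reached_patient

      baseline = simulate()
      what_if = simulate(detection_fraction=0.8)   # scenario: stronger error recovery
      print(f"errors reaching patients after one year: baseline {baseline[1]:.0f}, "
            f"improved detection {what_if[1]:.0f}")

    Comparing the two runs shows how an intervention expressed as a change in one internal variable propagates through the error flows over time, which is the kind of risk-free experiment the abstract describes.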

  16. Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs

    NASA Astrophysics Data System (ADS)

    Jayathilake, D. I.; Smith, T. J.

    2017-12-01

    Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types and the impact of different qualities on model performance, parameterization, and fidelity is critically important to improving hydrologic models. In this paper, changes in model parameterization and performance are explored with respect to four different potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.

  17. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  18. Blood transfusion sampling and a greater role for error recovery.

    PubMed

    Oldham, Jane

    Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.

  19. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
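
    As a minimal numerical illustration of the infinitesimal-jackknife idea (not the article's covariance-structure derivation): the standard error of a smooth statistic is estimated from empirical influence values obtained by slightly perturbing case weights. The statistic used here (a Pearson correlation) and the synthetic data are illustrative assumptions.

      import numpy as np

      def weighted_corr(x, y, w):
          """Pearson correlation under case weights w."""
          w = w / w.sum()
          mx, my = w @ x, w @ y
          cov = w @ ((x - mx) * (y - my))
          return cov / np.sqrt((w @ (x - mx) ** 2) * (w @ (y - my) ** 2))

      rng = np.random.default_rng(0)
      n = 200
      x = rng.normal(size=n)
      y = 0.6 * x + rng.normal(size=n)

      # empirical influence of each case: directional derivative of the statistic
      # as that case's weight is increased slightly (scaled to a per-observation mix)
      w0 = np.ones(n)
      t0 = weighted_corr(x, y, w0)
      eps = 1e-5
      U = np.empty(n)
      for i in range(n):
          w = w0.copy()
          w[i] += eps
          U[i] = (weighted_corr(x, y, w) - t0) / (eps / n)

      se_ij = np.sqrt(np.sum(U ** 2)) / n   # infinitesimal-jackknife standard error
      print(f"corr = {t0:.3f}, IJ standard error = {se_ij:.3f}")

    For the sample mean this recipe reduces to U_i = x_i - x_bar and the familiar plug-in variance of the mean, which is a quick sanity check on the scaling.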

  20. Identifiability Of Systems With Modeling Errors

    NASA Technical Reports Server (NTRS)

    Hadaegh, Yadolah "Fred"; Bekey, George A.

    1988-01-01

    Advances in theory of modeling errors reported. Recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. Extends theoretical work described in NPO-16661 and NPO-16785. Presents concrete way of accounting for difference in structure between mathematical model and physical process or system that it represents.

  1. Using Bayesian hierarchical models to better understand nitrate sources and sinks in agricultural watersheds.

    PubMed

    Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan

    2016-11-15

    Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations, a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best--it had the lowest mean error, explained the most variability (R² = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
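
    To make the contrast among the formulations concrete, a deliberately simplified schematic of the model structure can be written as follows; this illustrates the general export-coefficient idea rather than the paper's exact specification, with L_{it} denoting nitrate export for watershed i in year t, A_{ijt} the area in land-use class j, and beta the export coefficients:

      \mathrm{ECM:}\quad  L_{it} = \sum_{j} \beta_{j}\, A_{ijt}
      \mathrm{ADEM:}\quad L_{it} = \sum_{j} \beta_{j}\, A_{ijt} + \varepsilon_{it}, \qquad \varepsilon_{it} \sim N(0,\sigma^{2})
      \mathrm{DPM:}\quad  L_{it} = \sum_{j} \beta_{ijt}\, A_{ijt}, \qquad \beta_{ijt}\ \text{evolving via a first-order random walk across watersheds and a dynamic linear model across years}

    In this reading, the ADEM attributes residual misfit to structural error, while the DPM attributes it to coefficients that vary in space and time, which is the hypothesis the study found best supported.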

  2. 4D blood flow mapping using SPIM-microPIV in the developing zebrafish heart

    NASA Astrophysics Data System (ADS)

    Zickus, Vytautas; Taylor, Jonathan M.

    2018-02-01

    Fluid-structure interaction in the developing heart is an active area of research in developmental biology. However, investigation of heart dynamics is mostly limited to computational fluid dynamics simulations using heart wall structure information only, or to single-plane blood flow information - so there is a need for 3D + time resolved data to fully understand cardiac function. We present an imaging platform combining selective plane illumination microscopy (SPIM) with micro particle image velocimetry (μPIV) to enable 3D-resolved flow mapping in a microscopic environment, free from many of the sources of error and bias present in traditional epifluorescence-based μPIV systems. By using our new system in conjunction with optical heart beat synchronization, we demonstrate the ability to obtain non-invasive 3D + time resolved blood flow measurements in the heart of a living zebrafish embryo.

  3. Lessons learnt from Dental Patient Safety Case Reports

    PubMed Central

    Obadan, Enihomo M.; Ramoni, Rachel B.; Kalenderian, Elsbeth

    2015-01-01

    Background Errors are commonplace in dentistry; it is therefore imperative for dental professionals to intercept them before they lead to an adverse event, and/or to mitigate their effects when an adverse event occurs. This requires a systematic approach at both the profession level, encapsulated in the Agency for Healthcare Research and Quality’s Patient Safety Initiative structure, and at the practice level, where Crew Resource Management is a tested paradigm. Supporting patient safety at both the dental practice and profession levels relies on understanding the types and causes of errors, an area in which little is known. Methods A retrospective review of dental adverse events reported in the literature was performed. Electronic bibliographic databases were searched and data were extracted on background characteristics, incident description, case characteristics, the clinic setting where the adverse event originated, the phase of patient care in which the adverse event was detected, proximal cause, type of patient harm, degree of harm and recovery actions. Results 182 publications (containing 270 cases) were identified through our search. Delayed and unnecessary treatment/disease progression after misdiagnosis was the largest type of harm reported. In 24.4% of reviewed cases the patient was reported to have experienced permanent harm. One of every ten case reports reviewed (11.1%) reported that the adverse event resulted in the death of the affected patient. Conclusions Published case reports provide a window into understanding the nature and extent of dental adverse events, but for as much as the findings revealed about adverse events, they also identified the need for more broad-based contributions to our collective body of knowledge about adverse events in the dental office and their causes. Practical Implications Siloed and incomplete contributions to our understanding of adverse events in the dental office are threats to dental patients’ safety. PMID:25925524

  4. Wide-field LOFAR-LBA power-spectra analyses: Impact of calibration, polarization leakage and ionosphere

    NASA Astrophysics Data System (ADS)

    Gehlot, B. K.; Koopmans, L. V. E.; de Bruyn, A. G.; Zaroubi, S.; Brentjens, M. A.; Asad, K. M. B.; Hatef, M.; Jelić, V.; Mevius, M.; Offringa, A. R.; Pandey, V. N.; Yatawatta, S.

    2018-05-01

    Contamination due to foregrounds (Galactic and extragalactic), calibration errors and ionospheric effects pose major challenges in the detection of the cosmic 21 cm signal in various Epoch of Reionization (EoR) experiments. We present the results of a pilot study of a field centered on 3C196 using LOFAR Low Band (56-70 MHz) observations, in which we quantify various wide-field and calibration effects such as gain errors, polarized foregrounds, and ionospheric effects. We observe a 'pitchfork' structure in the 2D power spectrum of the polarized intensity in delay-baseline space, which leaks into the modes beyond the instrumental horizon (the EoR/CD window). We show that this structure largely arises from strong instrumental polarization leakage (~30%) towards Cas A (~21 kJy at 81 MHz, the brightest source in the northern sky), which is far away from the primary field of view. We measure an extremely small ionospheric diffractive scale (r_diff ≈ 430 m at 60 MHz) towards Cas A, resembling pure Kolmogorov turbulence, compared to r_diff ~ 3-20 km towards zenith at 150 MHz for typical ionospheric conditions. This is one of the smallest diffractive scales ever measured at these frequencies. Our work provides insight into the nature of the aforementioned effects and into mitigating them in future Cosmic Dawn observations (e.g. with SKA-low and HERA) in the same frequency window.

  5. Reduced fiber integrity and cognitive control in adolescents with internet gaming disorder.

    PubMed

    Xing, Lihong; Yuan, Kai; Bi, Yanzhi; Yin, Junsen; Cai, Chenxi; Feng, Dan; Li, Yangding; Song, Min; Wang, Hongmei; Yu, Dahua; Xue, Ting; Jin, Chenwang; Qin, Wei; Tian, Jie

    2014-10-24

    The association between impaired cognitive control and regional brain abnormalities in adolescents with Internet gaming disorder (IGD) has been validated in numerous studies. However, few studies have focused on the role of the salience network (SN), which regulates dynamic communication among the brain's core neurocognitive networks to modulate cognitive control. Seventeen IGD adolescents and 17 healthy controls participated in the study. By combining resting-state functional connectivity and diffusion tensor imaging (DTI) tractography methods, we examined changes in the functional and structural connections within the SN in IGD adolescents. The color-word Stroop task was employed to assess impaired cognitive control in IGD adolescents, and correlation analysis was carried out to investigate the relationship between the neuroimaging indices and behavioral performance. The impaired cognitive control in IGD was confirmed by more errors during the incongruent condition of the color-word Stroop task. The right SN tract showed decreased fractional anisotropy (FA) in IGD adolescents, though no significant differences in functional connectivity were detected. Moreover, the FA values of the right SN tract were negatively correlated with the errors during the incongruent condition in IGD adolescents. Our results revealed disturbed structural connectivity within the SN in IGD adolescents, which may be related to impaired cognitive control. It is hoped that this brain-behavior relationship from a network perspective may enhance the understanding of IGD. Copyright © 2014 Elsevier B.V. All rights reserved.

   6. Why a simulation system doesn't match the plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, R.

    1998-03-01

    Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as: how can feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the '70s and '80s, there are many reasons why a process simulation doesn't match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulations do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects or measurement error, and misapplication effects or set-up error.

  7. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  8. Voluntary Medication Error Reporting by ED Nurses: Examining the Association With Work Environment and Social Capital.

    PubMed

    Farag, Amany; Blegen, Mary; Gedney-Lose, Amalia; Lose, Daniel; Perkhounkova, Yelena

    2017-05-01

    Medication errors are one of the most frequently occurring errors in health care settings. The complexity of the ED work environment places patients at risk for medication errors. Most hospitals rely on nurses' voluntary medication error reporting, but these errors are under-reported. The purpose of this study was to examine the relationship among work environment (nurse manager leadership style and safety climate), social capital (warmth and belonging relationships and organizational trust), and nurses' willingness to report medication errors. A cross-sectional descriptive design using a questionnaire with a convenience sample of emergency nurses was used. Data were analyzed using descriptive, correlation, Mann-Whitney U, and Kruskal-Wallis statistics. A total of 71 emergency nurses were included in the study. Emergency nurses' willingness to report errors decreased as the nurses' years of experience increased (r = -0.25, P = .03). Their willingness to report errors increased when they received more feedback about errors (r = 0.25, P = .03) and when their managers used a transactional leadership style (r = 0.28, P = .01). ED nurse managers can modify their leadership style to encourage error reporting. Timely feedback after an error report is particularly important. Engaging experienced nurses to understand error root causes could increase voluntary error reporting. Published by Elsevier Inc.

  9. Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer.

    PubMed

    Setia, Kanav; Whitfield, James D

    2018-04-28

    Present quantum computers often work with distinguishable qubits as their computational units. In order to simulate indistinguishable fermionic particles, it is first required to map the fermionic state to the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of a hydrogen molecule we have compared BKSF, Jordan-Wigner, and Bravyi-Kitaev transforms under the Trotter approximation. The gate count to implement BKSF is lower than Jordan-Wigner but higher than Bravyi-Kitaev. We considered different orderings of the exponentiated terms and found lower Trotter errors than those previously reported for Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to the further study of the BKSF algorithm for quantum simulation.
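
    As a small illustration of fermion-to-qubit mappings (assuming a recent OpenFermion release that exposes FermionOperator, jordan_wigner and bravyi_kitaev at the package level; the BKSF transform itself is not shown), the sketch below compares the Pauli-term structure that two of the transforms discussed produce for a single illustrative hopping term.

      from openfermion import FermionOperator, jordan_wigner, bravyi_kitaev

      # a hopping term between modes 0 and 3, plus its Hermitian conjugate
      hop = FermionOperator("3^ 0", 1.0) + FermionOperator("0^ 3", 1.0)

      for name, transform in [("Jordan-Wigner", jordan_wigner),
                              ("Bravyi-Kitaev", bravyi_kitaev)]:
          qubit_op = transform(hop)
          weights = [len(pauli) for pauli in qubit_op.terms]   # qubits acted on per term
          print(f"{name}: {len(qubit_op.terms)} Pauli terms, max weight {max(weights)}")

    Differences in term count and Pauli weight of this kind are what ultimately drive the gate-count and Trotter-error comparisons reported in the abstract.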

  10. The messages they send: e-mail use by adolescents with and without a history of specific language impairment (SLI).

    PubMed

    Conti-Ramsden, Gina; Durkin, Kevin; Walker, Allan J

    2012-01-01

    Contemporary adolescents use e-mail for a variety of purposes, including peer communication and education. Research into these uses has focused on typically developing individuals; much less is known about the use of e-mail by exceptional youth. The present study examined the structure and form of e-mail messages sent by adolescents with and without a history of specific language impairment (SLI). Thirty-eight adolescents with a history of SLI and 56 typically developing (TD) peers were assessed on measures of nonverbal abilities, core language skills and literacy skills (reading and spelling). The participants were asked to compose an e-mail reply to a standard e-mail sent by an experimenter. These reply e-mails were coded for linguistic structure, readability and spelling errors. Two adult raters, blind to the participants' language ability, judged how understandable the e-mails were, how grammatically correct the e-mails were, and also the sender's command of the English language. Adolescents with a history of SLI produced e-mails that were similar to those sent by their TD peers in terms of structure and readability. However, they made significantly more spelling errors. Furthermore, the adult raters considered the messages from participants with a history of SLI to be of poorer standard than those sent by their TD peers. The findings suggest that the e-mail messages of adolescents with a history of SLI provide indicators of the sender's language and literacy skills. Implications for intervention and technology development are discussed. © 2011 Royal College of Speech and Language Therapists.

  11. Resolution requirements for aero-optical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mani, Ali; Wang Meng; Moin, Parviz

    2008-11-10

    Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation, and therefore, a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of aero-optical computation for an optical beam passing through the turbulent wake of flow over a cylinder.
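
    The quantities involved can be summarized with standard aero-optics relations (textbook definitions, not the paper's specific resolution-error estimate): the optical path length is the line integral of the refractive index along the beam, the optical path difference is its deviation from the aperture mean, and the far-field irradiance loss is commonly approximated by the large-aperture (Marechal-type) expression:

      \mathrm{OPL}(x,y) = \int_{0}^{L} n(x,y,z)\,dz, \qquad
      \mathrm{OPD}(x,y) = \mathrm{OPL}(x,y) - \overline{\mathrm{OPL}}, \qquad
      \frac{I}{I_{0}} \approx \exp\!\left[-\left(\frac{2\pi\,\sigma_{\mathrm{OPD}}}{\lambda}\right)^{2}\right]

    where sigma_OPD is the rms optical path difference over the aperture and lambda the optical wavelength; an error in the computed OPL therefore maps directly into an error in the predicted far-field irradiance.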

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would normally be expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  13. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
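
    As a small numerical sketch of the core idea (not the paper's sensor-weighting scheme): for a statically loaded cantilever, surface strain gives curvature, and the tip displacement follows from numerically integrating the curvature measured at a few discrete sensors. The geometry, load, sensor count and noise level below are illustrative assumptions.

      import numpy as np

      L, EI, c = 1.0, 50.0, 0.005   # beam length [m], bending stiffness [N m^2], half-thickness [m]
      P = 10.0                      # static tip load [N]

      x = np.linspace(0.0, 0.95 * L, 5)                 # five point strain sensors along the beam
      strain = P * (L - x) / EI * c                     # surface strain each sensor would read
      rng = np.random.default_rng(1)
      strain_noisy = strain * (1.0 + 0.02 * rng.normal(size=x.size))   # 2% gage-factor-like noise

      kappa = strain_noisy / c                          # curvature recovered from strain
      integrand = (L - x) * kappa                       # moment-area integrand for the tip
      tip_estimate = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))   # trapezoid rule
      tip_exact = P * L**3 / (3 * EI)                   # analytic tip deflection for comparison

      print(f"estimated tip deflection {tip_estimate * 1e3:.2f} mm (exact {tip_exact * 1e3:.2f} mm)")

    Repeating the estimate with perturbed gage factors or sensor positions gives the kind of error statistics (absolute error and its standard deviation) that the simulation in the abstract examines.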

  14. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: part II--Optimization of structural sensor placement.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-04-01

    The work proposed an optimization approach for structural sensor placement to improve the performance of vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each of the modal virtual sensing error system was contributed by both the modal observability levels for the structural sensing and the target acoustic virtual sensing; and further (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.

  15. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  16. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    Efforts have been ongoing for a decade to reach the GRACE baseline accuracy predicted earlier from the design simulations. The GRACE error budget is dominated by noise from sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Here we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets that differ in pointing performance; range-rate residuals are then computed from each of these datasets and analyzed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals, and correlations between range frequency noise and range-rate residuals are also seen.

  17. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  18. Benchmarking density functionals for hydrogen-helium mixtures with quantum Monte Carlo: Energetics, pressures, and forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.

    An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based first principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.

  19. Benchmarking density functionals for hydrogen-helium mixtures with quantum Monte Carlo: Energetics, pressures, and forces

    DOE PAGES

    Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; ...

    2016-01-19

    An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT based first principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.

  20. Human Factors Directions for Civil Aviation

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.

    2002-01-01

    Despite considerable progress in understanding human capabilities and limitations, incorporating human factors into aircraft design, operation, and certification, and the emergence of new technologies designed to reduce workload and enhance human performance in the system, most aviation accidents still involve human errors. Such errors occur as a direct or indirect result of untimely, inappropriate, or erroneous actions (or inactions) by apparently well-trained and experienced pilots, controllers, and maintainers. The field of human factors has solved many of the more tractable problems related to simple ergonomics, cockpit layout, symbology, and so on. We have learned much about the relationships between people and machines, but know less about how to form successful partnerships between humans and the information technologies that are beginning to play a central role in aviation. Significant changes envisioned in the structure of the airspace, pilots and controllers' roles and responsibilities, and air/ground technologies will require a similarly significant investment in human factors during the next few decades to ensure the effective integration of pilots, controllers, dispatchers, and maintainers into the new system. Many of the topics that will be addressed are not new because progress in crucial areas, such as eliminating human error, has been slow. A multidisciplinary approach that capitalizes upon human studies and new classes of information, computational models, intelligent analytical tools, and close collaborations with organizations that build, operate, and regulate aviation technology will ensure that the field of human factors meets the challenge.

  1. The Construction of Impossibility: A Logic-Based Analysis of Conjuring Tricks

    PubMed Central

    Smith, Wally; Dignum, Frank; Sonenberg, Liz

    2016-01-01

    Psychologists and cognitive scientists have long drawn insights and evidence from stage magic about human perceptual and attentional errors. We present a complementary analysis of conjuring tricks that seeks to understand the experience of impossibility that they produce. Our account is first motivated by insights about the constructional aspects of conjuring drawn from magicians' instructional texts. A view is then presented of the logical nature of impossibility as an unresolvable contradiction between a perception-supported belief about a situation and a memory-supported expectation. We argue that this condition of impossibility is constructed not simply through misperceptions and misattentions, but rather it is an outcome of a trick's whole structure of events. This structure is conceptualized as two parallel event sequences: an effect sequence that the spectator is intended to believe; and a method sequence that the magician understands as happening. We illustrate the value of this approach through an analysis of a simple close-up trick, Martin Gardner's Turnabout. A formalism called propositional dynamic logic is used to describe some of its logical aspects. This elucidates the nature and importance of the relationship between a trick's effect sequence and its method sequence, characterized by the careful arrangement of four evidence relationships: similarity, perceptual equivalence, structural equivalence, and congruence. The analysis further identifies two characteristics of magical apparatus that enable the construction of apparent impossibility: substitutable elements and stable occlusion. PMID:27378959

  2. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597
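
    As a toy sketch of the identification step on a 2-link planar arm (not the paper's full kinematic model of the ABB IRB2400 or its laser-sensor TCP definition): several joint configurations are recorded with the TCP aligned to one fixed reference point, and link-length errors are estimated by nonlinear least squares on the misalignment residuals. Requires numpy and scipy; all dimensions are illustrative assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def tcp_position(lengths, q):
          """Forward kinematics of a planar 2R arm (TCP position in the base frame)."""
          l1, l2 = lengths
          return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                           l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

      def inverse_kinematics(lengths, p, elbow=+1):
          """Joint angles that place the TCP at point p (elbow-up or elbow-down)."""
          l1, l2 = lengths
          c2 = (p @ p - l1**2 - l2**2) / (2.0 * l1 * l2)
          q2 = elbow * np.arccos(np.clip(c2, -1.0, 1.0))
          q1 = np.arctan2(p[1], p[0]) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
          return np.array([q1, q2])

      lengths_true = np.array([0.505, 0.398])      # actual link lengths [m] (unknown in practice)
      lengths_nominal = np.array([0.500, 0.400])   # nominal (CAD) values used by the controller
      p_ref = np.array([0.55, 0.45])               # fixed reference point in the workspace

      # joint configurations at which the TCP was physically aligned with p_ref;
      # generated here from the true geometry, recorded by the robot in practice
      configs = [inverse_kinematics(lengths_true, p_ref, elbow=+1),
                 inverse_kinematics(lengths_true, p_ref, elbow=-1)]

      def misalignment(lengths):
          """Residuals between predicted TCP positions and the fixed reference point."""
          return np.concatenate([tcp_position(lengths, q) - p_ref for q in configs])

      fit = least_squares(misalignment, lengths_nominal)
      print("identified link-length errors [mm]:", np.round((fit.x - lengths_nominal) * 1e3, 2))
      print("true link-length errors       [mm]:", np.round((lengths_true - lengths_nominal) * 1e3, 2))

    The fixed-point constraint removes the need to know the reference point in any external frame precisely, which is the same reason the paper's method can avoid a separate robot-base-frame calibration.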

  3. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  4. Lifting primordial non-Gaussianity above the noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welling, Yvette; Woude, Drian van der; Pajer, Enrico, E-mail: welling@strw.leidenuniv.nl, E-mail: D.C.vanderWoude@uu.nl, E-mail: enrico.pajer@gmail.com

    2016-08-01

    Primordial non-Gaussianity (PNG) in Large Scale Structures is obfuscated by the many additional sources of non-linearity. Within the Effective Field Theory approach to Standard Perturbation Theory, we show that matter non-linearities in the bispectrum can be modeled sufficiently well to strengthen current bounds with near future surveys, such as Euclid. We find that the EFT corrections are crucial to this improvement in sensitivity. Yet, our understanding of non-linearities is still insufficient to reach important theoretical benchmarks for equilateral PNG, while, for local PNG, our forecast is more optimistic. We consistently account for the theoretical error intrinsic to the perturbative approach and discuss the details of its implementation in Fisher forecasts.

  5. Cognitive aspect of diagnostic errors.

    PubMed

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  6. A classification of errors in lay comprehension of medical documents.

    PubMed

    Keselman, Alla; Smith, Catherine Arnott

    2012-12-01

    Emphasis on participatory medicine requires that patients and consumers participate in tasks traditionally reserved for healthcare providers. This includes reading and comprehending medical documents, often but not necessarily in the context of interacting with Personal Health Records (PHRs). Research suggests that while giving patients access to medical documents has many benefits (e.g., improved patient-provider communication), lay people often have difficulty understanding medical information. Informatics can address the problem by developing tools that support comprehension; this requires in-depth understanding of the nature and causes of errors that lay people make when comprehending clinical documents. The objective of this study was to develop a classification scheme of comprehension errors, based on lay individuals' retellings of two documents containing clinical text: a description of a clinical trial and a typical office visit note. While not comprehensive, the scheme can serve as a foundation of further development of a taxonomy of patients' comprehension errors. Eighty participants, all healthy volunteers, read and retold two medical documents. A data-driven content analysis procedure was used to extract and classify retelling errors. The resulting hierarchical classification scheme contains nine categories and 23 subcategories. The most common error made by the participants involved incorrectly recalling brand names of medications. Other common errors included misunderstanding clinical concepts, misreporting the objective of a clinical research study and physician's findings during a patient's visit, and confusing and misspelling clinical terms. A combination of informatics support and health education is likely to improve the accuracy of lay comprehension of medical documents. Published by Elsevier Inc.

  7. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originating from machine learning that has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
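
    A compact sketch of the decomposition itself (not the study's imperviousness model): regression trees are trained on bootstrap resamples and, for every held-out sample, the expected squared error is split into a bias-squared term (ensemble mean vs. truth) and a variance term (spread across ensemble members). The synthetic data and tree settings are illustrative assumptions; scikit-learn is assumed available.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(0)
      n_train, n_test, n_models = 500, 200, 50

      def make_data(n):
          X = rng.uniform(-3, 3, size=(n, 1))
          y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=n)   # noisy samples of a known signal
          return X, y

      X_train, y_train = make_data(n_train)
      X_test, _ = make_data(n_test)

      # predictions of an ensemble of trees, each fit to a fresh bootstrap resample
      preds = np.empty((n_models, n_test))
      for m in range(n_models):
          idx = rng.integers(0, n_train, n_train)            # bootstrap resample
          tree = DecisionTreeRegressor(max_depth=4).fit(X_train[idx], y_train[idx])
          preds[m] = tree.predict(X_test)

      mean_pred = preds.mean(axis=0)
      bias_sq = (mean_pred - np.sin(X_test[:, 0])) ** 2      # against the noise-free signal
      variance = preds.var(axis=0)

      print(f"mean bias^2   = {bias_sq.mean():.3f}")
      print(f"mean variance = {variance.mean():.3f}")

    In the remote sensing setting the same per-sample bias-squared and variance values are simply mapped back to pixel locations, which is what turns the decomposition into the spatial diagnostic described above.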

  8. The Binding of Learning to Action in Motor Adaptation

    PubMed Central

    Gonzalez Castro, Luis Nicolas; Monsen, Craig Bryant; Smith, Maurice A.

    2011-01-01

    In motor tasks, errors between planned and actual movements generally result in adaptive changes which reduce the occurrence of similar errors in the future. It has commonly been assumed that the motor adaptation arising from an error occurring on a particular movement is specifically associated with the motion that was planned. Here we show that this is not the case. Instead, we demonstrate the binding of the adaptation arising from an error on a particular trial to the motion experienced on that same trial. The formation of this association means that future movements planned to resemble the motion experienced on a given trial benefit maximally from the adaptation arising from it. This reflects the idea that actual rather than planned motions are assigned ‘credit’ for motor errors because, in a computational sense, the maximal adaptive response would be associated with the condition credited with the error. We studied this process by examining the patterns of generalization associated with motor adaptation to novel dynamic environments during reaching arm movements in humans. We found that these patterns consistently matched those predicted by adaptation associated with the actual rather than the planned motion, with maximal generalization observed where actual motions were clustered. We followed up these findings by showing that a novel training procedure designed to leverage this newfound understanding of the binding of learning to action, can improve adaptation rates by greater than 50%. Our results provide a mechanistic framework for understanding the effects of partial assistance and error augmentation during neurologic rehabilitation, and they suggest ways to optimize their use. PMID:21731476

  9. Analysis of the “naming game” with learning errors in communications

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
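
    A compact simulation in the spirit of the naming game with learning errors (a well-mixed population rather than the three network topologies studied; the way a corrupted word is generated below is an illustrative stand-in for the error mechanism): with probability p_err the hearer mis-learns the uttered word, while a successful interaction collapses both agents' lexicons to the agreed word.

      import random

      def naming_game(n_agents=100, p_err=0.05, max_steps=200_000, seed=1):
          rng = random.Random(seed)
          lexicons = [set() for _ in range(n_agents)]
          next_word = 0
          for step in range(1, max_steps + 1):
              speaker, hearer = rng.sample(range(n_agents), 2)
              if not lexicons[speaker]:                      # speaker invents a new name
                  lexicons[speaker].add(next_word)
                  next_word += 1
              word = rng.choice(tuple(lexicons[speaker]))
              if rng.random() < p_err:                       # learning error: corrupted copy stored
                  lexicons[hearer].add((word, "corrupted", next_word))
                  next_word += 1
              elif word in lexicons[hearer]:                 # success: both collapse to the word
                  lexicons[speaker] = {word}
                  lexicons[hearer] = {word}
              else:                                          # failure: hearer memorizes the word
                  lexicons[hearer].add(word)
              if all(lex == lexicons[0] and len(lex) == 1 for lex in lexicons):
                  return step
          return None

      for p in (0.0, 0.02, 0.05):
          steps = naming_game(p_err=p)
          status = f"consensus after {steps} interactions" if steps else "no consensus within budget"
          print(f"p_err={p}: {status}")

    Sweeping p_err in this toy shows the qualitative behavior reported above: small error rates mainly inflate agents' memory requirements, while large error rates can prevent consensus altogether.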

  10. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process by which a population of agents organized in a communication network agrees on a name for an object. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of learning error beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
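    The following is a self-contained sketch of basic naming-game dynamics with transmission (learning) errors on a random communication network. The error mechanism, network parameters and diagnostics are illustrative assumptions and do not reproduce the exact NGLE model or the error-prevention strategy of the paper.

```python
# Sketch of a naming game in which a listener sometimes mis-learns the
# transmitted word; parameters and the error mechanism are illustrative only.
import random

def naming_game(n_agents=200, p_edge=0.05, error_rate=0.1, steps=100_000, seed=1):
    rng = random.Random(seed)
    # Build a random (Erdos-Renyi style) communication network.
    neighbors = {i: set() for i in range(n_agents)}
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            if rng.random() < p_edge:
                neighbors[i].add(j)
                neighbors[j].add(i)
    vocab = {i: set() for i in range(n_agents)}   # each agent's word inventory
    next_word = 0
    for step in range(steps):
        speaker = rng.randrange(n_agents)
        if not neighbors[speaker]:
            continue
        listener = rng.choice(sorted(neighbors[speaker]))
        if not vocab[speaker]:                    # speaker invents a word if needed
            vocab[speaker].add(next_word)
            next_word += 1
        spoken = rng.choice(sorted(vocab[speaker]))
        heard = spoken
        if rng.random() < error_rate:             # learning error: listener mishears
            heard = next_word
            next_word += 1
        if heard in vocab[listener]:              # success: both collapse to one word
            vocab[speaker] = {spoken}
            vocab[listener] = {heard}
        else:
            vocab[listener].add(heard)            # failure: listener memorises what it heard
        if step % 20_000 == 0:
            memory_load = sum(len(v) for v in vocab.values())
            distinct = len(set.union(*vocab.values()))
            print(f"step {step}: total memory {memory_load}, distinct words {distinct}")
    return vocab

naming_game()
```

    Running the sketch with error_rate=0 and with a small positive error_rate illustrates the reported effect: errors inflate the total memory the population needs before consensus, while convergence is only mildly slowed.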

  11. Possibilities: A Framework for Modeling Students' Deductive Reasoning in Physics

    ERIC Educational Resources Information Center

    Gaffney, Jonathan David Housley

    2010-01-01

    Students often make errors when trying to solve qualitative or conceptual physics problems, and while many successful instructional interventions have been generated to prevent such errors, the process of deduction that students use when solving physics problems has not been thoroughly studied. In an effort to better understand that reasoning…

  12. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    ERIC Educational Resources Information Center

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  13. The Distinctions of False and Fuzzy Memories.

    ERIC Educational Resources Information Center

    Schooler, Jonathan W.

    1998-01-01

    Notes that fuzzy-trace theory has been used to understand false memories of children. Demonstrates the irony imbedded in the theory, maintaining that a central implication of fuzzy-trace theory is that some errors characterized as false memories are not really false at all. These errors, when applied to false alarms to related lures, are best…

  14. Methods as Tools: A Response to O'Keefe.

    ERIC Educational Resources Information Center

    Hewes, Dean E.

    2003-01-01

    Tries to distinguish the key insights from some distortions by clarifying the goals of experiment-wise error control that D. O'Keefe correctly identifies as vague and open to misuse. Concludes that a better understanding of the goal of experiment-wise error correction erases many of these "absurdities," but the clarifications necessary…

  15. Rewriting Evolution—“Been There, Done That”

    PubMed Central

    Penny, David

    2013-01-01

    A recent paper by a science journalist in Nature shows major errors in understanding phylogenies, in this case of placental mammals. The underlying unrooted tree is probably correct, but the placement of the root just reflects a well-known error from the acceleration in the rate of evolution among some myomorph rodents. PMID:23558594

  16. Reed-Solomon Codes and the Deep Hole Problem

    NASA Astrophysics Data System (ADS)

    Keti, Matt

    In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field F_q or F_q^*, as well as new contexts, when it is based on a subgroup of F_q^* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.
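    For readers unfamiliar with the construction, here is a minimal sketch of Reed-Solomon encoding by polynomial evaluation over a small prime field. The field size, message length and evaluation points are arbitrary choices made for illustration and are unrelated to the deep hole analysis in the dissertation.

```python
# Minimal Reed-Solomon encoding sketch over the prime field F_p (p = 13 here);
# parameters are illustrative only.
p = 13                                   # field size (prime, so arithmetic mod p is a field)
k = 4                                    # message length = degree bound of message polynomial
evaluation_points = list(range(1, p))    # encode at all nonzero field elements
n = len(evaluation_points)               # codeword length

def encode(message):
    """Evaluate the message polynomial m(x) = m0 + m1*x + ... at each point (mod p)."""
    assert len(message) == k
    return [sum(m * pow(x, i, p) for i, m in enumerate(message)) % p
            for x in evaluation_points]

def hamming_distance(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

c1 = encode([1, 2, 3, 4])
c2 = encode([1, 2, 3, 5])
print(c1)
# Two distinct polynomials of degree < k agree on at most k-1 points, so
# distinct codewords differ in at least n - k + 1 positions (here 12 - 4 + 1 = 9).
print(hamming_distance(c1, c2) >= n - k + 1)   # True
```

    This minimum-distance guarantee is what gives the code its error-correcting capacity, and the "deep hole" question asks which received words sit as far as possible from every codeword.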

  17. ASD Is Not DLI: Individuals With Autism and Individuals With Syntactic DLI Show Similar Performance Level in Syntactic Tasks, but Different Error Patterns.

    PubMed

    Sukenik, Nufar; Friedmann, Naama

    2018-01-01

    Do individuals with autism have a developmental syntactic impairment, DLI (formerly known as SLI)? In this study we directly compared the performance of 18 individuals with Autism Spectrum Disorder (ASD) aged 9;0-18;0 years with that of 93 individuals with Syntactic-Developmental Language Impairment (SyDLI) aged 8;8-14;6 (and with 166 typically-developing children aged 5;2-18;1). We tested them using three syntactic tests assessing the comprehension and production of syntactic structures that are known to be sensitive to syntactic impairment: elicitation of subject and object relative clauses, reading and paraphrasing of object relatives, and repetition of complex syntactic structures including Wh questions, relative clauses, topicalized sentences, sentences with verb movement, sentences with A-movement, and embedded sentences. The results were consistent across the three tasks: the overall rate of correct performance on the syntactic tasks is similar for the children with ASD and those with SyDLI. However, once we look closer, they are very different. The types of errors of the ASD group differ from those of the SyDLI group-the children with ASD provide various types of pragmatically infelicitous responses that are not evinced in the SyDLI or in the age equivalent typically-developing groups. The two groups (ASD and SyDLI) also differ in the pattern of performance-the children with SyDLI show a syntactically-principled pattern of impairment, with selective difficulty in specific sentence types (such as sentences derived by movement of the object across the subject), and normal performance on other structures (such as simple sentences). In contrast, the ASD participants showed generalized low performance on the various sentence structures. Syntactic performance was far from consistent within the ASD group. Whereas all ASD participants had errors that can originate in pragmatic/discourse difficulties, seven of them had completely normal syntax in the structures we tested, and were able to produce, understand, and repeat relative clauses, Wh questions, and topicalized sentences. Only one ASD participant showed a syntactically-principled deficit similar to that of individuals with SyDLI. We conclude that not all individuals with ASD have syntactic difficulties, and that even when they fail in a syntactic task, this does not necessarily originate in a syntactic impairment. This shows that looking only at the total score in a syntactic test may be insufficient, and a fuller picture emerges once the performance on different structures and the types of erroneous responses are analyzed.

  18. ASD Is Not DLI: Individuals With Autism and Individuals With Syntactic DLI Show Similar Performance Level in Syntactic Tasks, but Different Error Patterns

    PubMed Central

    Sukenik, Nufar; Friedmann, Naama

    2018-01-01

    Do individuals with autism have a developmental syntactic impairment, DLI (formerly known as SLI)? In this study we directly compared the performance of 18 individuals with Autism Spectrum Disorder (ASD) aged 9;0–18;0 years with that of 93 individuals with Syntactic-Developmental Language Impairment (SyDLI) aged 8;8–14;6 (and with 166 typically-developing children aged 5;2–18;1). We tested them using three syntactic tests assessing the comprehension and production of syntactic structures that are known to be sensitive to syntactic impairment: elicitation of subject and object relative clauses, reading and paraphrasing of object relatives, and repetition of complex syntactic structures including Wh questions, relative clauses, topicalized sentences, sentences with verb movement, sentences with A-movement, and embedded sentences. The results were consistent across the three tasks: the overall rate of correct performance on the syntactic tasks is similar for the children with ASD and those with SyDLI. However, once we look closer, they are very different. The types of errors of the ASD group differ from those of the SyDLI group—the children with ASD provide various types of pragmatically infelicitous responses that are not evinced in the SyDLI or in the age equivalent typically-developing groups. The two groups (ASD and SyDLI) also differ in the pattern of performance—the children with SyDLI show a syntactically-principled pattern of impairment, with selective difficulty in specific sentence types (such as sentences derived by movement of the object across the subject), and normal performance on other structures (such as simple sentences). In contrast, the ASD participants showed generalized low performance on the various sentence structures. Syntactic performance was far from consistent within the ASD group. Whereas all ASD participants had errors that can originate in pragmatic/discourse difficulties, seven of them had completely normal syntax in the structures we tested, and were able to produce, understand, and repeat relative clauses, Wh questions, and topicalized sentences. Only one ASD participant showed a syntactically-principled deficit similar to that of individuals with SyDLI. We conclude that not all individuals with ASD have syntactic difficulties, and that even when they fail in a syntactic task, this does not necessarily originate in a syntactic impairment. This shows that looking only at the total score in a syntactic test may be insufficient, and a fuller picture emerges once the performance on different structures and the types of erroneous responses are analyzed. PMID:29670550

  19. Quasiparticle and hybrid density functional methods in defect studies: An application to the nitrogen vacancy in GaN

    NASA Astrophysics Data System (ADS)

    Lewis, D. K.; Matsubara, M.; Bellotti, E.; Sharifzadeh, S.

    2017-12-01

    Defects in semiconductors can play a vital role in the performance of electronic devices, with native defects often dominating the electronic properties of the semiconductor. Understanding the relationship between structural defects and electronic function will be central to the design of new high-performance materials. In particular, it is necessary to quantitatively understand the energy and lifetime of electronic states associated with the defect. Here, we apply first-principles density functional theory (DFT) and many-body perturbation theory within the GW approximation to understand the nature and energy of the defect states associated with a charged nitrogen vacancy and their effect on the electronic properties of gallium nitride (GaN), as a model of a well-studied and important wide gap semiconductor grown with defects. We systematically investigate the sources of error associated with the GW approximation and the role of the underlying atomic structure in the predicted defect state energies. Additionally, analysis of the computed electronic density of states (DOS) reveals that there is one occupied defect state 0.2 eV below the valence band maximum and three unoccupied defect states at energies of 0.2-0.4 eV above the conduction band minimum, suggesting that this defect in the +1 charge state will not behave as a carrier trap. Furthermore, we compare the character and energy of the defect state obtained from GW and DFT using the HSE approximate density functional and find excellent agreement. This systematic study provides a more complete understanding of how to obtain quantitative defect energy states in bulk semiconductors.

  20. The Effects of Tense Continuity and Subject-Verb Agreement Errors on Communication.

    ERIC Educational Resources Information Center

    Porton, Vicki M.

    This study explored the dichotomy between global errors, that is, those violating rules of overall sentence structure, and local errors, that is, those violating rules within a particular constituent of a sentence, and the relationship of these to communication breakdown. The focus was tense continuity across clauses (TC) and subject-verb…

  1. Survey of marine natural product structure revisions: a synergy of spectroscopy and chemical synthesis

    PubMed Central

    Suyama, Takashi L.; Gerwick, William H.; McPhail, Kerry L.

    2011-01-01

    The structural assignment of new natural product molecules supports research in a multitude of disciplines that may lead to new therapeutic agents and or new understanding of disease biology. However, reports of numerous structural revisions, even of recently elucidated natural products, inspired the present survey of techniques used in structural misassignments and subsequent revisions in the context of constitutional or configurational errors. Given the comparatively recent development of marine natural products chemistry, coincident with the development of modern spectroscopy, it is of interest to consider the relative roles of spectroscopy and chemical synthesis in the structure elucidation and revision of those marine natural products which were initially misassigned. Thus, a tabulated review of all marine natural product structural revisions from 2005 to 2010 is organized according to structural motif revised. Misassignments of constitution are more frequent than perhaps anticipated by reliance on HMBC and other advanced NMR experiments, especially considering the full complement of all natural products. However, these techniques also feature prominently in structural revisions, specifically of marine natural products. Nevertheless, as is the case for revision of relative and absolute configuration, total synthesis is a proven partner for marine, as well as terrestrial, natural products structure elucidation. It also becomes apparent that considerable ‘detective work’ remains in structure elucidation, in spite of the spectacular advances in spectroscopic techniques. PMID:21715178

  2. Cognitive analysis as a way to understand students' problem-solving process in BODMAS rule

    NASA Astrophysics Data System (ADS)

    Ung, Ting Su; Kiong, Paul Lau Ngee; Manaf, Badron bin; Hamdan, Anniza Binti; Khium, Chen Chee

    2017-04-01

    Students tend to make many careless mistakes during the process of solving mathematics problems. To facilitate effective learning, educators have to understand which cognitive processes are used by students and how these processes help them to solve problems. This paper aims only to determine the common errors in mathematics made by pre-diploma students who took Intensive Mathematics I (MAT037) in UiTM Sarawak, and then to concentrate on the errors the students made on the topic of the BODMAS rule and the mental processes corresponding to these errors that the students had developed. One class of pre-diploma students taking MAT037 taught by the researchers was selected because they performed poorly in SPM mathematics. It is inevitable that they finished secondary education with many misconceptions in mathematics. The solution scripts for all the tutorials of the participants were collected. This study was predominantly qualitative, and the solution scripts were content analyzed to identify the common errors committed by the participants and to generate possible mental processes behind these errors. Selected students were interviewed by the researchers during the process. The BODMAS rule could be further divided into Numerical Simplification and Powers Simplification. Furthermore, the erroneous processes could be attributed to the categories of Basic Arithmetic Rules, Negative Numbers and Powers.
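    As a concrete illustration (not taken from the paper), the snippet below contrasts a correct BODMAS evaluation with the common erroneous strategy of evaluating strictly from left to right; the expression is invented for the example.

```python
# A typical order-of-operations (BODMAS) error: evaluating strictly left to
# right instead of doing powers, then division/multiplication, then +/-.
correct = 12 - 4 + 8 / 2 ** 2                 # powers first, then division, then +/-
left_to_right = ((((12 - 4) + 8) / 2) ** 2)   # the erroneous left-to-right reading

print(correct)        # 10.0
print(left_to_right)  # 64.0
```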

  3. Combining task analysis and fault tree analysis for accident and incident analysis: a case study from Bulgaria.

    PubMed

    Doytchev, Doytchin E; Szwillus, Gerd

    2009-11-01

    Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.

  4. Analysis of Relationships between the Level of Errors in Leg and Monofin Movement and Stroke Parameters in Monofin Swimming

    PubMed Central

    Rejman, Marek

    2013-01-01

    The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. An individual distribution of stroke parameters that consists of optimally increasing stroke frequency to the maximal level that still enables the stabilization of stroke length leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742

  5. Investigation of empirical damping laws for the space shuttle

    NASA Technical Reports Server (NTRS)

    Bernstein, E. L.

    1973-01-01

    An analysis of dynamic test data from vibration testing of a number of aerospace vehicles was made to develop an empirical structural damping law. A systematic attempt was made to fit dissipated energy/cycle to combinations of all dynamic variables. The best-fit laws for bending, torsion, and longitudinal motion are given, with error bounds. A discussion and estimate are made of error sources. Programs are developed for predicting equivalent linear structural damping coefficients and finding the response of nonlinearly damped structures.

  6. Ethics in the Pediatric Emergency Department: When Mistakes Happen: An Approach to the Process, Evaluation, and Response to Medical Errors.

    PubMed

    Dreisinger, Naomi; Zapolsky, Nathan

    2017-02-01

    The emergency department (ED) is an environment that is conducive to medical errors. The ED is a time-pressured environment where physicians aim to rapidly evaluate and treat patients. Quick thinking and problem-based solutions are often used to assist in evaluation and diagnosis. Error analysis leads to an understanding of the cause of a medical error and is important to prevent future errors. Research suggests mechanisms to prevent medical errors in the pediatric ED, but prevention is not always possible. Transparency about errors is necessary to assure a trusting doctor-patient relationship. Patients want to be informed about all errors, and apologies are hard. Apologizing for a significant medical error that may have caused a complication is even harder. Having a systematic way to go about apologizing makes the process easier, and helps assure that the right information is relayed to the patient and his or her family. This creates an environment of autonomy and shared decision making that is ultimately beneficial to all aspects of patient care.

  7. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements - the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
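    The following is a hedged numerical sketch of the central relationship: for a simple additive linear error model y = a + b*x + e, the common metrics (bias, mean square error, correlation) follow directly from the model parameters. The synthetic data, parameter values and the use of ordinary least squares are illustrative assumptions, not the authors' procedure.

```python
# Bias, MSE and correlation derived from the parameters (a, b, sigma_e) of the
# additive linear error model y = a + b*x + e, checked against direct metrics.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.normal(loc=10.0, scale=2.0, size=n)                    # "truth" (reference)
a_true, b_true, sigma_e = 1.5, 0.8, 1.0
y = a_true + b_true * x + rng.normal(scale=sigma_e, size=n)    # "measurement"

# Fit the linear error model by ordinary least squares.
b_hat, a_hat = np.polyfit(x, y, 1)
sigma_hat = np.std(y - (a_hat + b_hat * x))
mu_x, s_x = x.mean(), x.std()

# Metrics derived from the error-model parameters ...
bias_model = a_hat + (b_hat - 1.0) * mu_x
mse_model = bias_model**2 + (b_hat - 1.0) ** 2 * s_x**2 + sigma_hat**2
rho_model = b_hat * s_x / np.sqrt(b_hat**2 * s_x**2 + sigma_hat**2)

# ... agree with the metrics computed directly from the data.
print(bias_model, np.mean(y - x))
print(mse_model, np.mean((y - x) ** 2))
print(rho_model, np.corrcoef(x, y)[0, 1])
```

    Each printed pair agrees to within sampling noise, illustrating that the metric values are fully determined by (a, b, sigma_e) together with the mean and variance of the reference data.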

  8. Benchmarking observational uncertainties for hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.

    2013-12-01

    There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.

  9. Homogeneous Studies of Transiting Extrasolar Planets: Current Status and Future Plans

    NASA Astrophysics Data System (ADS)

    Taylor, John

    2011-09-01

    We now know of over 500 planets orbiting stars other than our Sun. The jewels in the crown are the transiting planets, for these are the only ones whose masses and radii are measurable. They are fundamental for our understanding of the formation, evolution, structure and atmospheric properties of extrasolar planets. However, their characterization is not straightforward, requiring extremely high-precision photometry and spectroscopy as well as input from theoretical stellar models. I summarize the motivation and current status of a project to measure the physical properties of all known transiting planetary systems using homogeneous techniques (Southworth 2008, 2009, 2010, 2011 in preparation). Careful attention is paid to the treatment of limb darkening, contaminating light, correlated noise, numerical integration, orbital eccentricity and orientation, systematic errors from theoretical stellar models, and empirical constraints. Complete error budgets are calculated for each system and can be used to determine which type of observation would be most useful for improving the parameter measurements. Known correlations between the orbital periods, masses, surface gravities, and equilibrium temperatures of transiting planets can be explored more safely due to the homogeneity of the properties. I give a sneak preview of Homogeneous Studies Paper 4, which includes the properties of thirty transiting planetary systems observed by the CoRoT, Kepler and Deep Impact space missions. Future opportunities are discussed, plus remaining problems with our understanding of transiting planets. I acknowledge funding from the UK STFC in the form of an Advanced Fellowship.

  10. Human error mitigation initiative (HEMI) : summary report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  11. Neuroticism and responsiveness to error feedback: adaptive self-regulation versus affective reactivity.

    PubMed

    Robinson, Michael D; Moeller, Sara K; Fetterman, Adam K

    2010-10-01

    Responsiveness to negative feedback has been seen as functional by those who emphasize the value of reflecting on such feedback in self-regulating problematic behaviors. On the other hand, the very same responsiveness has been viewed as dysfunctional by its link to punishment sensitivity and reactivity. The present 4 studies, involving 203 undergraduate participants, sought to reconcile such discrepant views in the context of the trait of neuroticism. In cognitive tasks, individuals were given error feedback when they made mistakes. It was found that greater tendencies to slow down following error feedback were associated with higher levels of accuracy at low levels of neuroticism but lower levels of accuracy at high levels of neuroticism. Individual differences in neuroticism thus appear crucial in understanding whether behavioral alterations following negative feedback reflect proactive versus reactive mechanisms and processes. Implications for understanding the processing basis of neuroticism and adaptive self-regulation are discussed.

  12. Utilizing semantic networks to database and retrieve generalized stochastic colored Petri nets

    NASA Technical Reports Server (NTRS)

    Farah, Jeffrey J.; Kelley, Robert B.

    1992-01-01

    Previous work has introduced the Planning Coordinator (PCOORD), a coordinator functioning within the hierarchy of the Intelligent Machine Mode. Within the structure of the Planning Coordinator resides the Primitive Structure Database (PSDB), which provides the primitive structures utilized by the Planning Coordinator in establishing error recovery or on-line path plans. This report further explores the Primitive Structure Database and establishes the potential of utilizing semantic networks as a means of efficiently storing and retrieving the Generalized Stochastic Colored Petri Nets from which the error recovery plans are derived.

  13. Decadal-scale sensitivity of Northeast Greenland ice flow to errors in surface mass balance using ISSM

    NASA Astrophysics Data System (ADS)

    Schlegel, N.-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E.

    2013-06-01

    The behavior of the Greenland Ice Sheet, which is considered a major contributor to sea level changes, is best understood on century and longer time scales. However, on decadal time scales, its response is less predictable due to the difficulty of modeling surface climate, as well as incomplete understanding of the dynamic processes responsible for ice flow. Therefore, it is imperative to understand how modeling advancements, such as increased spatial resolution or more comprehensive ice flow equations, might improve projections of ice sheet response to climatic trends. Here we examine how a finely resolved climate forcing influences a high-resolution ice stream model that considers longitudinal stresses. We simulate ice flow using a two-dimensional Shelfy-Stream Approximation implemented within the Ice Sheet System Model (ISSM) and use uncertainty quantification tools embedded within the model to calculate the sensitivity of ice flow within the Northeast Greenland Ice Stream to errors in surface mass balance (SMB) forcing. Our results suggest that the model tends to smooth ice velocities even when forced with extreme errors in SMB. Indeed, errors propagate linearly through the model, resulting in discharge uncertainty of 16% or 1.9 Gt/yr. We find that mass flux is most sensitive to local errors but is also affected by errors hundreds of kilometers away; thus, an accurate SMB map of the entire basin is critical for realistic simulation. Furthermore, sensitivity analyses indicate that SMB forcing needs to be provided at a resolution of at least 40 km.

  14. Using goal- and grip-related information for understanding the correctness of other's actions: an ERP study.

    PubMed

    van Elk, Michiel; Bousardt, Roel; Bekkering, Harold; van Schie, Hein T

    2012-01-01

    Detecting errors in other's actions is of pivotal importance for joint action, competitive behavior and observational learning. Although many studies have focused on the neural mechanisms involved in detecting low-level errors, relatively little is known about error-detection in everyday situations. The present study aimed to identify the functional and neural mechanisms whereby we understand the correctness of other's actions involving well-known objects (e.g. pouring coffee in a cup). Participants observed action sequences in which the correctness of the object grasped and the grip applied to a pair of objects were independently manipulated. Observation of object violations (e.g. grasping the empty cup instead of the coffee pot) resulted in a stronger P3-effect than observation of grip errors (e.g. grasping the coffee pot at the upper part instead of the handle), likely reflecting a reorienting response, directing attention to the relevant location. Following the P3-effect, a parietal slow wave positivity was observed that persisted for grip-errors, likely reflecting the detection of an incorrect hand-object interaction. These findings provide new insight in the functional significance of the neurophysiological markers associated with the observation of incorrect actions and suggest that the P3-effect and the subsequent parietal slow wave positivity may reflect the detection of errors at different levels in the action hierarchy. Thereby this study elucidates the cognitive processes that support the detection of action violations in the selection of objects and grips.

  15. Human factors in surgery: from Three Mile Island to the operating room.

    PubMed

    D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco

    2009-01-01

    Human factors is a term that covers the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of lack of skill or ability, and as the result of thoughtless actions. Moreover, the 'operating theatre' has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.

  16. A framework for software fault tolerance in real-time systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems is presented. The technique requires that the process structure of a system be represented by a synchronization graph which is used by an executive as a specification of the relative times at which they will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  17. On how to avoid input and structural uncertainties corrupt the inference of hydrological parameters using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Hernández, Mario R.; Francés, Félix

    2015-04-01

    One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares or SLS) introduces noise into the estimation of the parameters. The main sources of this noise are input errors and the structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, they yield a calibrated hydrological model which works well, but not for the right reasons. Besides, an unsuitable error model yields a non-reliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. The hydrological model used is a conceptual distributed model called TETIS, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the application of BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and the reliability of the streamflow predictive distribution with respect to the results of an unsuitable error model such as SLS. Finally, the most likely predictions in a validation period show similar performance for both the BJI+GA and SLS error models.
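    To illustrate the idea of jointly inferring hydrological and error-model parameters, here is a minimal random-walk Metropolis sketch on a toy rainfall-runoff relation with an error standard deviation that grows with simulated flow. It is an assumption-laden illustration only and does not represent TETIS, the DREAM-ZS sampler, or the paper's general additive error model.

```python
# Joint Bayesian inference of a toy process parameter and a heteroscedastic
# error model via random-walk Metropolis; all values are synthetic.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "observed flows" from a toy model q = theta * forcing, with error
# standard deviation that grows with the simulated flow (non-stationary variance).
forcing = rng.uniform(1.0, 10.0, size=200)
theta_true, s0_true, s1_true = 2.5, 0.2, 0.1
q_obs = theta_true * forcing + rng.normal(scale=s0_true + s1_true * theta_true * forcing)

def log_post(params):
    theta, s0, s1 = params
    if theta <= 0 or s0 <= 0 or s1 < 0:
        return -np.inf                      # flat priors restricted to valid values
    q_sim = theta * forcing
    sigma = s0 + s1 * q_sim                 # error model: sd depends on simulated flow
    resid = q_obs - q_sim
    return np.sum(-0.5 * (resid / sigma) ** 2 - np.log(sigma))

# Random-walk Metropolis over the joint parameter vector (theta, s0, s1).
chain = np.empty((20_000, 3))
current = np.array([1.0, 1.0, 0.05])
lp_current = log_post(current)
for i in range(len(chain)):
    proposal = current + rng.normal(scale=[0.02, 0.02, 0.005])
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp_current:
        current, lp_current = proposal, lp_prop
    chain[i] = current

print(chain[5000:].mean(axis=0))            # posterior means should be near (2.5, 0.2, 0.1)
```

    With a well-specified error model the sampler recovers both the process parameter and the heteroscedastic error parameters, which is the behaviour the BJI approach relies on.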

  18. A review of uncertainty in in situ measurements and data sets of sea surface temperature

    NASA Astrophysics Data System (ADS)

    Kennedy, John J.

    2014-03-01

    Archives of in situ sea surface temperature (SST) measurements extend back more than 160 years. Quality of the measurements is variable, and the area of the oceans they sample is limited, especially early in the record and during the two world wars. Measurements of SST and the gridded data sets that are based on them are used in many applications so understanding and estimating the uncertainties are vital. The aim of this review is to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets that are derived from them. In doing so, it also aims to identify current gaps in understanding. Uncertainties arise at the level of individual measurements with both systematic and random effects and, although these have been extensively studied, refinement of the error models continues. Recent improvements have been made in the understanding of the pervasive systematic errors that affect the assessment of long-term trends and variability. However, the adjustments applied to minimize these systematic errors are uncertain and these uncertainties are higher before the 1970s and particularly large in the period surrounding the Second World War owing to a lack of reliable metadata. The uncertainties associated with the choice of statistical methods used to create globally complete SST data sets have been explored using different analysis techniques, but they do not incorporate the latest understanding of measurement errors, and they want for a fair benchmark against which their skill can be objectively assessed. These problems can be addressed by the creation of new end-to-end SST analyses and by the recovery and digitization of data and metadata from ship log books and other contemporary literature.

  19. DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases

    PubMed Central

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2015-01-01

    Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827

  20. Evaluation of circularity error in drilling of syntactic foam composites

    NASA Astrophysics Data System (ADS)

    Ashrith H., S.; Doddamani, Mrityunjay; Gaitonde, Vinayak

    2018-04-01

    Syntactic foams are widely used in structural applications of automobiles, aircrafts and underwater vehicles due to their lightweight properties combined with high compression strength and low moisture absorption. Structural application requires drilling of holes for assembly purpose. In this investigation response surface methodology based mathematical models are used to analyze the effects of cutting speed, feed, drill diameter and filler content on circularity error both at entry and exit level in drilling of glass microballoon reinforced epoxy syntactic foam. Experiments are conducted based on full factorial design using solid coated tungsten carbide twist drills. The parametric analysis reveals that circularity error is highly influenced by drill diameter followed by spindle speed at the entry and exit level. Parametric analysis also reveals that increasing filler content decreases circularity error by 13.65 and 11.96% respectively at entry and exit levels. Average circularity error at the entry level is found to be 23.73% higher than at the exit level.

  1. Error Tolerant Plan Recognition: An Empirical Investigation

    DTIC Science & Technology

    2015-05-01

    structure can differ drastically in semantics. For instance, a plan to travel to a grocery store to buy milk might coincidentally be structurally...algorithm for its ability to tolerate input errors, and that storing and leveraging state information in its plan representation substantially...proposed a novel representation for storing and organizing plans in a plan library, based on action-state pairs and abstract states. It counts the

  2. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  3. The spectrum of medical errors: when patients sue

    PubMed Central

    Kels, Barry D; Grant-Kels, Jane M

    2012-01-01

    Inarguably medical errors constitute a serious, dangerous, and expensive problem for the twenty-first-century US health care system. This review examines the incidence, nature, and complexity of alleged medical negligence and medical malpractice. The authors hope this will constitute a road map to medical providers so that they can better understand the present climate and hopefully avoid the “Scylla and Charybdis” of medical errors and medical malpractice. Despite some documented success in reducing medical errors, adverse events and medical errors continue to represent an indelible stain upon the practice, reputation, and success of the US health care industry. In that regard, what may be required to successfully attack the unacceptably high severity and volume of medical errors is a locally directed and organized initiative sponsored by individual health care organizations that is coordinated, supported, and guided by state and federal governmental and nongovernmental agencies. PMID:22924008

  4. Heuristics and Cognitive Error in Medical Imaging.

    PubMed

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  5. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  6. Development of an Ontology to Model Medical Errors, Information Needs, and the Clinical Communication Space

    PubMed Central

    Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.

    2002-01-01

    Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.

  7. Effectiveness of the surgical safety checklist in correcting errors: a literature review applying Reason's Swiss cheese model.

    PubMed

    Collins, Susan J; Newhouse, Robin; Porter, Jody; Talsma, AkkeNeel

    2014-07-01

    Approximately 2,700 patients are harmed by wrong-site surgery each year. The World Health Organization created the surgical safety checklist to reduce the incidence of wrong-site surgery. A project team conducted a narrative review of the literature to determine the effectiveness of the surgical safety checklist in correcting and preventing errors in the OR. Team members used Reason's Swiss cheese model of error to analyze the findings. Analysis of the results indicated the effectiveness of the surgical checklist in reducing the incidence of wrong-site surgeries and other medical errors; however, checklists alone will not prevent all errors. Successful implementation requires perioperative stakeholders to understand the nature of errors, recognize the complex dynamic between systems and individuals, and create a just culture that encourages a shared vision of patient safety. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  8. A theoretical basis for the analysis of multiversion software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.
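    A schematic way to see why coincident errors matter is the Monte Carlo sketch below: an input-dependent failure intensity is what couples otherwise independently developed versions. It is only an illustration of the concept, with made-up numbers, and does not reproduce the paper's analytical framework.

```python
# Schematic Monte Carlo illustration of coincident errors: theta(x) is the
# probability that an independently developed version fails on input x, and
# variation of theta across inputs erodes the benefit of 3-version voting.
import numpy as np

rng = np.random.default_rng(3)
n_inputs = 1_000_000

def failure_rates(theta):
    """Single-version and 3-version-majority failure rates given per-input theta."""
    fails = rng.random((3, theta.size)) < theta     # versions fail independently *given* theta
    single = fails[0].mean()
    majority = (fails.sum(axis=0) >= 2).mean()      # majority voter fails if >= 2 versions fail
    return single, majority

# Case 1: constant intensity (no clustering of failures on particular inputs).
theta_const = np.full(n_inputs, 0.01)
# Case 2: same average intensity, but concentrated on a few "hard" inputs.
theta_clustered = np.where(rng.random(n_inputs) < 0.02, 0.5, 0.0)

print(failure_rates(theta_const))       # majority voting helps a lot
print(failure_rates(theta_clustered))   # much of the benefit disappears
```

    With a constant intensity the 3-version majority voter fails far less often than a single version, but when the same average intensity is concentrated on a few hard inputs, versions tend to fail together and most of the redundancy benefit is lost.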

  9. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  10. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    NASA Astrophysics Data System (ADS)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    To address the mechanism error caused by joint clearance in the kinematic pairs of the planar 2-DOF five-bar mechanism, the method of treating the joint clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the clearance of the moving pair on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space of the joints, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.

  11. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese

    ERIC Educational Resources Information Center

    Saito, Akie; Inoue, Tomoyoshi

    2017-01-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which is separately operating from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the…

  12. Learning mechanisms to limit medication administration errors.

    PubMed

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  13. Separate Medication Preparation Rooms Reduce Interruptions and Medication Errors in the Hospital Setting: A Prospective Observational Study.

    PubMed

    Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja

    2016-12-21

    Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.

  14. Analysis of Errors Made by Students Solving Genetics Problems.

    ERIC Educational Resources Information Center

    Costello, Sandra Judith

    The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…

  15. Review of current GPS methodologies for producing accurate time series and their error sources

    NASA Astrophysics Data System (ADS)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step, mainly with three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications, ranging from surveying small deformations of civil engineering structures (e.g., subsidence of a highway bridge) to the detection of particular geophysical signals.
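
    As a companion to the review's discussion of functional versus stochastic models, the sketch below fits the usual functional model (intercept, rate, and seasonal terms) to a synthetic daily coordinate series by ordinary least squares. The series, noise level, and white-noise assumption are illustrative only; with the colored noise typical of real GPS series, the rate uncertainty would be larger than a white-noise fit suggests.

```python
# Sketch: least-squares fit of the functional model (rate + annual and
# semi-annual seasonal terms) to a synthetic daily GPS coordinate series.
# White noise only, for simplicity; real series contain colored noise, which
# is why the stochastic model choice discussed in the abstract matters.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 8 * 365.25) / 365.25          # time in years, ~8 years of daily data
truth = 3.0 * t + 2.0 * np.sin(2 * np.pi * t)  # 3 mm/yr rate + 2 mm annual signal
obs = truth + rng.normal(0, 1.5, t.size)       # 1.5 mm white noise

# Design matrix: intercept, rate, annual and semi-annual sine/cosine terms.
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
residuals = obs - A @ coef
print(f"estimated rate: {coef[1]:.2f} mm/yr, residual RMS: {residuals.std():.2f} mm")
```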

  16. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  17. Humans make efficient use of natural image statistics when performing spatial interpolation.

    PubMed

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
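
    A toy version of the interpolation task can clarify the comparison between a simple heuristic and an estimator that exploits local statistics. The sketch below uses synthetic smooth patches rather than the calibrated natural images of the study; the patch generator and the learned linear estimator are illustrative assumptions, not the optimal observer described in the paper.

```python
# Sketch: estimate a missing central pixel from its 8 neighbours, comparing the
# local-mean heuristic with a linear estimator whose weights are learned from
# example patches (a crude stand-in for "knowing the local image statistics").
import numpy as np

rng = np.random.default_rng(2)

def sample_patches(n, size=3):
    """Generate smooth synthetic 3x3 patches (low-pass filtered noise)."""
    base = rng.normal(size=(n, size + 2, size + 2))
    smooth = (base[:, :-2, :-2] + base[:, 1:-1, :-2] + base[:, 2:, :-2] +
              base[:, :-2, 1:-1] + base[:, 1:-1, 1:-1] + base[:, 2:, 1:-1] +
              base[:, :-2, 2:] + base[:, 1:-1, 2:] + base[:, 2:, 2:]) / 9.0
    return smooth.reshape(n, size * size)

train, test = sample_patches(5000), sample_patches(1000)
neighbour_idx = [0, 1, 2, 3, 5, 6, 7, 8]   # all pixels except the centre (index 4)

# Learn linear weights mapping the 8 neighbours to the centre pixel.
w, *_ = np.linalg.lstsq(train[:, neighbour_idx], train[:, 4], rcond=None)

mean_err = np.mean((test[:, neighbour_idx].mean(axis=1) - test[:, 4]) ** 2)
lin_err = np.mean((test[:, neighbour_idx] @ w - test[:, 4]) ** 2)
print(f"local-mean MSE: {mean_err:.4f}, learned-weights MSE: {lin_err:.4f}")
```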

  18. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    PubMed

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). The antimicrobials was the second most common drug class associated with errors (19.1%) and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Plan for Quality to Improve Patient Safety at the Point of Care

    PubMed Central

    Ehrmeyer, Sharon S.

    2011-01-01

    The U.S. Institute of Medicine (IOM) much publicized report in “To Err is Human” (2000, National Academy Press) stated that as many as 98 000 hospitalized patients in the U.S. die each year due to preventable medical errors. This revelation about medical error and patient safety focused the public and the medical community's attention on errors in healthcare delivery including laboratory and point-of-care-testing (POCT). Errors introduced anywhere in the POCT process clearly can impact quality and place patient's safety at risk. While POCT performed by or near the patient reduces the potential of some errors, the process presents many challenges to quality with its multiple tests sites, test menus, testing devices and non-laboratory analysts, who often have little understanding of quality testing. Incoherent or no regulations and the rapid availability of test results for immediate clinical intervention can further amplify errors. System planning and management of the entire POCT process are essential to reduce errors and improve quality and patient safety. PMID:21808107

  20. Derivation of error sources for experimentally derived heliostat shapes

    NASA Astrophysics Data System (ADS)

    Cumpston, Jeff; Coventry, Joe

    2017-06-01

    Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.

  1. The Structure of Segmental Errors in the Speech of Deaf Children.

    ERIC Educational Resources Information Center

    Levitt, H.; And Others

    1980-01-01

    A quantitative description of the segmental errors occurring in the speech of deaf children is developed. Journal availability: Elsevier North Holland, Inc., 52 Vanderbilt Avenue, New York, NY 10017. (Author)

  2. Improved pKa Prediction of Substituted Alcohols, Phenols, and Hydroperoxides in Aqueous Medium Using Density Functional Theory and a Cluster-Continuum Solvation Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2017-06-22

    Acid dissociation constants (pKa's) are key physicochemical properties that are needed to understand the structure and reactivity of molecules in solution. Theoretical pKa's have been calculated for a set of 72 organic compounds with -OH and -OOH groups (48 with known experimental pKa's). This test set includes 17 aliphatic alcohols, 25 substituted phenols, and 30 hydroperoxides. Calculations in aqueous medium have been carried out with SMD implicit solvation and three hybrid DFT functionals (B3LYP, ωB97XD, and M06-2X) with two basis sets (6-31+G(d,p) and 6-311++G(d,p)). The effect of explicit water molecules on calculated pKa's was assessed by including up to three water molecules. pKa's calculated with only SMD implicit solvation are found to have average errors greater than 6 pKa units. Including one explicit water reduces the error by about 3 pKa units, but the error is still far from chemical accuracy. With B3LYP/6-311++G(d,p) and three explicit water molecules in SMD solvation, the mean signed error and standard deviation are only -0.02 ± 0.55; a linear fit with zero intercept has a slope of 1.005 and R² = 0.97. Thus, this level of theory can be used to calculate pKa's directly without the need for linear correlations or thermodynamic cycles. Estimated pKa values are reported for 24 hydroperoxides that have not yet been determined experimentally.
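
    For readers unfamiliar with how a computed deprotonation free energy maps onto a pKa, the underlying relation is pKa = ΔG_deprot/(RT ln 10). The sketch below applies it with an illustrative free-energy value; it is not a reproduction of the paper's workflow, which also involves the solvation model and explicit waters.

```python
# Sketch: convert an aqueous deprotonation free energy into a pKa via
# pKa = dG_deprot / (RT ln 10). The free-energy value below is illustrative,
# not taken from the paper.
import math

R = 1.987204259e-3      # gas constant in kcal/(mol*K)
T = 298.15              # temperature in K

def pka_from_dg(dg_kcal_per_mol: float) -> float:
    """pKa corresponding to a deprotonation free energy in kcal/mol."""
    return dg_kcal_per_mol / (R * T * math.log(10))

print(f"{pka_from_dg(13.6):.2f}")   # ~10.0, in the range typical of simple phenols
```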

  3. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  4. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameter and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, the instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
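
    The idea of a complementary data-driven model can be sketched with one of the two techniques named in the abstract, support vector regression: train it on the residuals of an imperfect "physical" prediction and add the predicted error back. Everything below (features, model form, noise levels) is synthetic and illustrative, not the Republican River or Spokane Valley setup.

```python
# Sketch: a complementary data-driven model (DDM) that learns the error of a
# physically-based prediction and corrects it, in the spirit of the study.
# Synthetic data; feature choice and model form are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 600
features = rng.uniform(-1, 1, size=(n, 2))        # e.g. pumping and season proxies
true_head = 10 + 2 * features[:, 0] - np.sin(3 * features[:, 1])
model_head = 10 + 2 * features[:, 0]              # imperfect physical model (misses a term)
obs_head = true_head + rng.normal(0, 0.05, n)     # observed heads with noise

residual = obs_head - model_head                  # structured model error to learn
train, test = slice(0, 400), slice(400, n)

ddm = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
ddm.fit(features[train], residual[train])

corrected = model_head[test] + ddm.predict(features[test])
rmse_raw = np.sqrt(np.mean((model_head[test] - obs_head[test]) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - obs_head[test]) ** 2))
print(f"RMSE before correction: {rmse_raw:.3f}, after DDM correction: {rmse_cor:.3f}")
```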

  5. The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals

    NASA Astrophysics Data System (ADS)

    Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat

    2018-01-01

    Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
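
    Triple collocation itself is compact enough to sketch. Assuming three collocated products with mutually independent, zero-mean errors, the covariance-notation estimator recovers each product's error variance; the soil-moisture-like series below are synthetic and not SMAP data.

```python
# Sketch: covariance-based triple collocation for three collocated products
# with mutually independent errors (synthetic soil-moisture-like series).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
truth = 0.25 + 0.08 * rng.standard_normal(n)       # "true" soil moisture
x = truth + rng.normal(0, 0.02, n)                  # product 1
y = 0.9 * truth + 0.03 + rng.normal(0, 0.04, n)     # product 2, different scaling
z = 1.1 * truth - 0.02 + rng.normal(0, 0.03, n)     # product 3

Q = np.cov(np.vstack([x, y, z]))
err_var_x = Q[0, 0] - Q[0, 1] * Q[0, 2] / Q[1, 2]
err_var_y = Q[1, 1] - Q[0, 1] * Q[1, 2] / Q[0, 2]
err_var_z = Q[2, 2] - Q[0, 2] * Q[1, 2] / Q[0, 1]
print("estimated error std dev:", np.sqrt([err_var_x, err_var_y, err_var_z]))
# Should be close to the injected noise levels [0.02, 0.04, 0.03].
```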

  6. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachments: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  7. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  8. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization

    PubMed Central

    Ambridge, Ben; Bidgood, Amy; Twomey, Katherine E.; Pine, Julian M.; Rowland, Caroline F.; Freudenthal, Daniel

    2015-01-01

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition. PMID:25919003

  10. Preemption versus Entrenchment: Towards a Construction-General Solution to the Problem of the Retreat from Verb Argument Structure Overgeneralization.

    PubMed

    Ambridge, Ben; Bidgood, Amy; Twomey, Katherine E; Pine, Julian M; Rowland, Caroline F; Freudenthal, Daniel

    2014-01-01

    Participants aged 5;2-6;8, 9;2-10;6 and 18;1-22;2 (72 at each age) rated verb argument structure overgeneralization errors (e.g., *Daddy giggled the baby) using a five-point scale. The study was designed to investigate the feasibility of two proposed construction-general solutions to the question of how children retreat from, or avoid, such errors. No support was found for the prediction of the preemption hypothesis that the greater the frequency of the verb in the single most nearly synonymous construction (for this example, the periphrastic causative; e.g., Daddy made the baby giggle), the lower the acceptability of the error. Support was found, however, for the prediction of the entrenchment hypothesis that the greater the overall frequency of the verb, regardless of construction, the lower the acceptability of the error, at least for the two older groups. Thus while entrenchment appears to be a robust solution to the problem of the retreat from error, and one that generalizes across different error types, we did not find evidence that this is the case for preemption. The implication is that the solution to the retreat from error lies not with specialized mechanisms, but rather in a probabilistic process of construction competition.

  11. Making Sense of Dynamic Systems: How Our Understanding of Stocks and Flows Depends on a Global Perspective.

    PubMed

    Fischer, Helen; Gonzalez, Cleotilde

    2016-03-01

    Stocks and flows (SF) are building blocks of dynamic systems: Stocks change through inflows and outflows, such as our bank balance changing with withdrawals and deposits, or atmospheric CO2 with absorptions and emissions. However, people make systematic errors when trying to infer the behavior of dynamic systems, termed SF failure, whose cognitive explanations are yet unknown. We argue that SF failure appears when people focus on specific system elements (local processing), rather than on the system structure and gestalt (global processing). Using a standard SF task (n = 148), SF failure decreased by (a) a global as opposed to local task format; (b) individual global as opposed to local processing styles; and (c) global as opposed to local perceptual priming. These results converge toward local processing as an explanation for SF failure. We discuss theoretical and practical implications on the connections between the scope of attention and understanding of dynamic systems. Copyright © 2015 Cognitive Science Society, Inc.
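
    The stock-flow relation behind the task is simple accumulation, which is why the common heuristic of assuming the stock tracks the inflow fails. A minimal illustration with made-up inflow and outflow series:

```python
# Sketch: the stock-flow relation underlying the task described above. The
# stock at each step is the previous stock plus inflow minus outflow; the
# typical "SF failure" is to assume the stock simply tracks the inflow curve.
inflow  = [8, 9, 10, 9, 7, 5, 4, 4]   # e.g. emissions per period (illustrative)
outflow = [6, 6, 6, 6, 6, 6, 6, 6]    # e.g. absorptions per period

stock = [100]                          # initial stock level
for inf, out in zip(inflow, outflow):
    stock.append(stock[-1] + inf - out)

print(stock)
# The stock keeps rising as long as inflow exceeds outflow, even while the
# inflow itself is already falling -- the behaviour people typically misjudge.
```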

  12. Metabolomics and systems pharmacology: why and how to model the human metabolic network for drug discovery☆

    PubMed Central

    Kell, Douglas B.; Goodacre, Royston

    2014-01-01

    Metabolism represents the ‘sharp end’ of systems biology, because changes in metabolite concentrations are necessarily amplified relative to changes in the transcriptome, proteome and enzyme activities, which can be modulated by drugs. To understand such behaviour, we therefore need (and increasingly have) reliable consensus (community) models of the human metabolic network that include the important transporters. Small molecule ‘drug’ transporters are in fact metabolite transporters, because drugs bear structural similarities to metabolites known from the network reconstructions and from measurements of the metabolome. Recon2 represents the present state-of-the-art human metabolic network reconstruction; it can predict inter alia: (i) the effects of inborn errors of metabolism; (ii) which metabolites are exometabolites, and (iii) how metabolism varies between tissues and cellular compartments. However, even these qualitative network models are not yet complete. As our understanding improves so do we recognise more clearly the need for a systems (poly)pharmacology. PMID:23892182

  13. Spectra as windows into exoplanet atmospheres

    PubMed Central

    Burrows, Adam S.

    2014-01-01

    Understanding a planet’s atmosphere is a necessary condition for understanding not only the planet itself, but also its formation, structure, evolution, and habitability. This requirement puts a premium on obtaining spectra and developing credible interpretative tools with which to retrieve vital planetary information. However, for exoplanets, these twin goals are far from being realized. In this paper, I provide a personal perspective on exoplanet theory and remote sensing via photometry and low-resolution spectroscopy. Although not a review in any sense, this paper highlights the limitations in our knowledge of compositions, thermal profiles, and the effects of stellar irradiation, focusing on, but not restricted to, transiting giant planets. I suggest that the true function of the recent past of exoplanet atmospheric research has been not to constrain planet properties for all time, but to train a new generation of scientists who, by rapid trial and error, are fast establishing a solid future foundation for a robust science of exoplanets. PMID:24613929

  14. New paradigm for understanding in-flight decision making errors: a neurophysiological model leveraging human factors.

    PubMed

    Souvestre, P A; Landrock, C K; Blaber, A P

    2008-08-01

    Human factors centered aviation accident analyses report that skill-based errors are known to be the cause of 80% of all accidents, decision-making-related errors 30%, and perceptual errors 6%. In-flight decision making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have been made to attempt to clarify causation, roles and responsibility, as well as to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances and psychophysiological training strategies. In-flight related statistics have not been shown to change significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, Postural Deficiency Syndrome (PDS), both defined in the 1980s, are respectively neurophysiological and medical diagnostic models that reflect central neural sensory-motor and cognitive control regulatory status. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after an error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of the in-flight adaptive neurophysiological, cognitive and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance will also be addressed.

  15. Factors affecting nursing students' intention to report medication errors: An application of the theory of planned behavior.

    PubMed

    Ben Natan, Merav; Sharon, Ira; Mahajna, Marlen; Mahajna, Sara

    2017-11-01

    Medication errors are common among nursing students. Nonetheless, these errors are often underreported. To examine factors related to nursing students' intention to report medication errors, using the Theory of Planned Behavior, and to examine whether the theory is useful in predicting students' intention to report errors. This study has a descriptive cross-sectional design. Study population was recruited in a university and a large nursing school in central and northern Israel. A convenience sample of 250 nursing students took part in the study. The students completed a self-report questionnaire, based on the Theory of Planned Behavior. The findings indicate that students' intention to report medication errors was high. The Theory of Planned Behavior constructs explained 38% of variance in students' intention to report medication errors. The constructs of behavioral beliefs, subjective norms, and perceived behavioral control were found as affecting this intention, while the most significant factor was behavioral beliefs. The findings also reveal that students' fear of the reaction to disclosure of the error from superiors and colleagues may impede them from reporting the error. Understanding factors related to reporting medication errors is crucial to designing interventions that foster error reporting. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Commission errors of active intentions: the roles of aging, cognitive load, and practice.

    PubMed

    Boywitt, C Dennis; Rummel, Jan; Meiser, Thorsten

    2015-01-01

    Performing an intended action when it needs to be withheld, for example, when temporarily prescribed medication is incompatible with the other medication, is referred to as commission errors of prospective memory (PM). While recent research indicates that older adults are especially prone to commission errors for finished intentions, there is a lack of research on the effects of aging on commission errors for still active intentions. The present research investigates conditions which might contribute to older adults' propensity to perform planned intentions under inappropriate conditions. Specifically, disproportionally higher rates of commission errors for still active intentions were observed in older than in younger adults with both salient (Experiment 1) and non-salient (Experiment 2) target cues. Practicing the PM task in Experiment 2, however, helped execution of the intended action in terms of higher PM performance at faster ongoing-task response times but did not increase the rate of commission errors. The results have important implications for the understanding of older adults' PM commission errors and the processes involved in these errors.

  17. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
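
    The aliasing effect described here can be reproduced with an idealized simulation: a filament beam in a circular pipe produces a known wall-current distribution, and a first-harmonic (difference-over-sum) estimate from N equally spaced detectors picks up aliased higher harmonics as the offset grows. The sketch below assumes this textbook geometry and is not the DARHT-II analysis.

```python
# Sketch: systematic ("aliasing") error of a discrete BPM array. A filament beam
# at offset x0 produces a wall-current density sampled by N equally spaced
# detectors; the first-harmonic estimate degrades as the offset grows, and
# using more detectors reduces the error. Idealized geometry only.
import numpy as np

R = 1.0  # beam-pipe radius (normalized)

def wall_current(theta, x0, y0):
    """Wall current density for a filament at (x0, y0) in a circular pipe."""
    r0 = np.hypot(x0, y0)
    th0 = np.arctan2(y0, x0)
    return (R**2 - r0**2) / (R**2 + r0**2 - 2 * R * r0 * np.cos(theta - th0))

def estimate_x(n_detectors, x0, y0=0.0):
    """First-harmonic (difference-over-sum) position estimate from N detectors."""
    theta = 2 * np.pi * np.arange(n_detectors) / n_detectors
    v = wall_current(theta, x0, y0)
    return R * np.sum(v * np.cos(theta)) / np.sum(v)

for x0 in (0.1, 0.3, 0.5):
    err4 = estimate_x(4, x0) - x0
    err8 = estimate_x(8, x0) - x0
    print(f"x0={x0:.1f}: error with 4 detectors {err4:+.4f}, with 8 detectors {err8:+.4f}")
```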

  18. Measurement of interaction between antiprotons

    DOE PAGES

    Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; ...

    2015-11-04

    One of the primary goals of nuclear physics is to understand the force between nucleons, which is a necessary step for understanding the structure of nuclei and how nuclei interact with each other. Rutherford discovered the atomic nucleus in 1911, and the large body of knowledge about the nuclear force that has since been acquired was derived from studies made on nucleons or nuclei. Although antinuclei up to antihelium-4 have been discovered and their masses measured, little is known directly about the nuclear force between antinucleons. Here, we study antiproton pair correlations among data collected by the STAR experiment at the Relativistic Heavy Ion Collider (RHIC), where gold ions are collided with a centre-of-mass energy of 200 gigaelectronvolts per nucleon pair. Antiprotons are abundantly produced in such collisions, thus making it feasible to study details of the antiproton–antiproton interaction. By applying a technique similar to Hanbury Brown and Twiss intensity interferometry, we show that the force between two antiprotons is attractive. In addition, we report two key parameters that characterize the corresponding strong interaction: the scattering length and the effective range of the interaction. Our measured parameters are consistent within errors with the corresponding values for proton–proton interactions. Our results provide direct information on the interaction between two antiprotons, one of the simplest systems of antinucleons, and so are fundamental to understanding the structure of more-complex antinuclei and their properties.

  19. Uncertainty of Passive Imager Cloud Optical Property Retrievals to Instrument Radiometry and Model Assumptions: Examples from MODIS

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Meyer, Kerry; Amarasinghe, Nandana; Arnold, G. Thomas; Zhang, Zhibo; King, Michael D.

    2013-01-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global-daily 1 km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  20. Uncertainty of passive imager cloud retrievals to instrument radiometry and model assumptions: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Amarasinghe, N.; Arnold, G. T.; Zhang, Z.; Meyer, K.; King, M. D.

    2013-12-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  1. The thinking doctor: clinical decision making in contemporary medicine.

    PubMed

    Trimble, Michael; Hamilton, Paul

    2016-08-01

    Diagnostic errors are responsible for a significant number of adverse events. Logical reasoning and good decision-making skills are key factors in reducing such errors, but little emphasis has traditionally been placed on how these thought processes occur, and how errors could be minimised. In this article, we explore key cognitive ideas that underpin clinical decision making and suggest that by employing some simple strategies, physicians might be better able to understand how they make decisions and how the process might be optimised. © 2016 Royal College of Physicians.

  2. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  3. Software errors and complexity: An empirical investigation

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Perricone, B. T.

    1982-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  4. The finer points of writing and refereeing scientific articles.

    PubMed

    Bain, Barbara J; Littlewood, Tim J; Szydlo, Richard M

    2016-02-01

    Writing scientific papers is a skill required by all haematologists. Many also need to be able to referee papers submitted to journals. These skills are not often formally taught and as a result may not be done well. We have reviewed published evidence of errors in these processes. Such errors may be ethical, scientific or linguistic, or may result from a lack of understanding of the processes. The objective of the review is, by highlighting errors, to help writers and referees to avoid them. © 2016 John Wiley & Sons Ltd.

  5. THERP and HEART integrated methodology for human error assessment

    NASA Astrophysics Data System (ADS)

    Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio

    2015-11-01

    A THERP and HEART integrated methodology is proposed to investigate accident scenarios that involve operator errors during high-dose-rate (HDR) treatments. The new approach has been modified on the basis of the fuzzy set concept with the aim of prioritizing an exhaustive list of erroneous tasks that can lead to patient radiological overexposures. The results allow for the identification of human errors that are necessary to achieve a better understanding of health hazards in the radiotherapy treatment process, so that it can be properly monitored and appropriately managed.

  6. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
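
    A minimal numerical illustration of the flow-down idea: allocate a top-level requirement to subsystems and verify the allocations by root-sum-square roll-up, assuming independent error sources. The requirement and allocation values below are invented for illustration and are not taken from the paper.

```python
# Sketch: a simple hierarchical error budget in which a top-level pointing
# requirement is allocated to subsystems and the allocations are checked by
# root-sum-square (RSS) roll-up. Numbers are illustrative only.
import math

top_level_requirement = 10.0   # e.g. arcsec of pointing error, 1-sigma

# Allocations flowed down to lower-level assemblies (assumed independent errors).
allocations = {
    "attitude knowledge": 5.0,
    "structural/thermal deformation": 6.0,
    "sensor alignment": 4.0,
    "control residual": 3.0,
}

rss = math.sqrt(sum(v**2 for v in allocations.values()))
margin = top_level_requirement - rss
print(f"RSS of allocations: {rss:.2f}, margin against requirement: {margin:.2f}")
# A negative margin means the flow-down must be re-allocated or the design
# (or the model fidelity behind each allocation) improved.
```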

  7. A Case of Error Disclosure: A Communication Privacy Management Analysis

    PubMed Central

    Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.

    2013-01-01

    To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insight into the way clinicians choose to tell patients about a mistake has the potential to address reasons for that resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, an analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices involved in revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health rests squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assumptions about ownership and control over information, unwittingly following conversational scripts that convey misleading messages, and the difficulty of regulating privacy boundaries in the stressful circumstances that accompany error disclosures. As a consequence, the potential contribution to public health is the ability to more clearly see the significance of the disclosure process, which has implications for many public health issues. PMID:25170501

  8. Transferring Error Characteristics of Satellite Rainfall Data from Ground Validation (gauged) into Non-ground Validation (ungauged)

    NASA Astrophysics Data System (ADS)

    Tang, L.; Hossain, F.

    2009-12-01

    Understanding the error characteristics of satellite rainfall data at different spatial/temporal scales is critical, especially as the scheduled Global Precipitation Mission (GPM) plans to provide High Resolution Precipitation Products (HRPPs) at global scales. Satellite rainfall data contain errors which need ground validation (GV) data for characterization, yet satellite rainfall data will be most useful in the regions that lack GV. Therefore, a critical step is to develop a spatial interpolation scheme for transferring the error characteristics of satellite rainfall data from GV regions to non-GV regions. As a prelude to GPM, the TRMM Multi-satellite Precipitation Analysis (TMPA) products 3B41RT and 3B42RT (Huffman et al., 2007) over the US, spanning a record of 6 years, are used as a representative example of satellite rainfall data. Next Generation Radar (NEXRAD) Stage IV rainfall data are used as the reference GV data. Initial work by the authors (Tang et al., 2009, GRL) has shown promise in transferring error from GV to non-GV regions, based on a six-year climatologic average of satellite rainfall data and assuming only 50% GV coverage. However, this transfer of error characteristics needs to be investigated for a range of GV data coverage. In addition, it is also important to investigate whether proxy GV data from an accurate space-borne sensor, such as the TRMM PR (or the GPM DPR), can be leveraged for the transfer of error in sparsely gauged regions. The specific question we ask in this study is, "what is the minimum coverage of GV data required for the error transfer scheme to be implemented with acceptable accuracy at hydrologically relevant scales?" Three geostatistical interpolation methods are compared: ordinary kriging, indicator kriging and disjunctive kriging. Various error metrics are assessed for transfer, such as Probability of Detection for rain and no rain, False Alarm Ratio, Frequency Bias, Critical Success Index, and RMSE. Understanding the proper space-time scales at which these metrics can be reasonably transferred is also explored in this study. Keywords: satellite rainfall, error transfer, spatial interpolation, kriging methods.
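
    Of the three geostatistical methods compared in the study, ordinary kriging is the simplest to sketch. The example below interpolates a synthetic error metric (probability of detection) from gauged cells to an ungauged location under an assumed exponential variogram; the variogram parameters and data are illustrative, not fitted to the TMPA/Stage IV record.

```python
# Sketch: ordinary kriging of a scalar error metric (e.g. probability of
# detection) from gauged "GV" cells to an ungauged location, using an
# assumed exponential variogram. Synthetic values only.
import numpy as np

def exponential_variogram(h, nugget=0.0, sill=1.0, corr_len=50.0):
    """Exponential variogram model gamma(h)."""
    return nugget + sill * (1.0 - np.exp(-h / corr_len))

def ordinary_kriging(coords, values, target, **vario_kw):
    """Predict the value at `target` from sampled coords/values."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma_0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exponential_variogram(d, **vario_kw)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exponential_variogram(np.linalg.norm(coords - target, axis=1), **vario_kw)
    weights = np.linalg.solve(A, b)[:n]   # weights sum to 1 by construction
    return weights @ values

rng = np.random.default_rng(5)
gv_coords = rng.uniform(0, 200, size=(30, 2))            # gauged grid-cell centres (km)
gv_pod = (0.6 + 0.3 * np.sin(gv_coords[:, 0] / 60.0)
          + rng.normal(0, 0.02, 30))                      # synthetic POD field
ungauged = np.array([100.0, 100.0])
print(f"kriged POD at ungauged cell: {ordinary_kriging(gv_coords, gv_pod, ungauged):.3f}")
```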

  9. Fundamental Studies of Crystal Growth of Microporous Materials

    NASA Technical Reports Server (NTRS)

    Dutta, P.; George, M.; Ramachandran, N.; Schoeman, B.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    Microporous materials are framework structures with well-defined porosity, often of molecular dimensions. Zeolites contain aluminum and silicon atoms in their framework and are the most extensively studied amongst all microporous materials. Framework structures with P, Ga, Fe, Co, Zn, B, Ti and a host of other elements have also been made. Typical synthesis of microporous materials involve mixing the framework elements (or compounds, thereof) in a basic solution, followed by aging in some cases and then heating at elevated temperatures. This process is termed hydrothermal synthesis, and involves complex chemical and physical changes. Because of a limited understanding of this process, most synthesis advancements happen by a trial and error approach. There is considerable interest in understanding the synthesis process at a molecular level with the expectation that eventually new framework structures will be built by design. The basic issues in the microporous materials crystallization process include: (1) Nature of the molecular units responsible for the crystal nuclei formation; (2) Nature of the nuclei and nucleation process; (3) Growth process of the nuclei into crystal; (4) Morphological control and size of the resulting crystal; (5) Surface structure of the resulting crystals; (6) Transformation of frameworks into other frameworks or condensed structures. The NASA-funded research described in this report focuses to varying degrees on all of the above issues and has been described in several publications. Following is the presentation of the highlights of our current research program. The report is divided into five sections: (1) Fundamental aspects of the crystal growth process; (2) Morphological and Surface properties of crystals; (3) Crystal dissolution and transformations; (4) Modeling of Crystal Growth; (5) Relevant Microgravity Experiments.

  10. Re-evaluation of low-resolution crystal structures via interactive molecular-dynamics flexible fitting (iMDFF): a case study in complement C4.

    PubMed

    Croll, Tristan Ian; Andersen, Gregers Rom

    2016-09-01

    While the rapid proliferation of high-resolution structures in the Protein Data Bank provides a rich set of templates for starting models, it remains the case that a great many structures both past and present are built at least in part by hand-threading through low-resolution and/or weak electron density. With current model-building tools this task can be challenging, and the de facto standard for acceptable error rates (in the form of atomic clashes and unfavourable backbone and side-chain conformations) in structures based on data with dmax not exceeding 3.5 Å reflects this. When combined with other factors such as model bias, these residual errors can conspire to make more serious errors in the protein fold difficult or impossible to detect. The three recently published 3.6-4.2 Å resolution structures of complement C4 (PDB entries 4fxg, 4fxk and 4xam) rank in the top quartile of structures of comparable resolution both in terms of Rfree and MolProbity score, yet, as shown here, contain register errors in six β-strands. By applying a molecular-dynamics force field that explicitly models interatomic forces and hence excludes most physically impossible conformations, the recently developed interactive molecular-dynamics flexible fitting (iMDFF) approach significantly reduces the complexity of the conformational space to be searched during manual rebuilding. This substantially improves the rate of detection and correction of register errors, and allows user-guided model building in maps with a resolution lower than 3.5 Å to converge to solutions with a stereochemical quality comparable to atomic resolution structures. Here, iMDFF has been used to individually correct and re-refine these three structures to MolProbity scores of <1.7, and strategies for working with such challenging data sets are suggested. Notably, the improved model allowed the resolution for complement C4b to be extended from 4.2 to 3.5 Å as demonstrated by paired refinement.

  11. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    PubMed

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new stratified sampling procedure is proposed in order to establish an accurate estimation of Varroa destructor populations on sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Because the distribution of varroa mites on sticky boards is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are reported on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
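
    A minimal sketch of the idea, assuming a hypothetical simulated board, grid layout and sample size (not the authors' protocol or their 20,000-board simulation): counts on a spatially structured board are estimated either by simple random sampling of cells or by drawing one cell per stratum of a regular grid, and the resulting absolute estimation errors are compared.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sticky board discretized into a 40 x 26 grid of cells, with
# mite counts that are spatially structured (extra mites under some "frames"
# plus a smooth gradient across the board).
board = rng.poisson(lam=2.0, size=(40, 26)).astype(float)
board[:, ::4] += rng.poisson(lam=5.0, size=(40, 7))   # frame-driven stripes
board += np.linspace(0.0, 4.0, 40)[:, None]           # gradient across the board

true_total = board.sum()
n_cells = 80  # cells actually counted per estimate (matches 8 x 10 strata below)

def srs_estimate(board, n, rng):
    """Simple random sampling of n cells, scaled up to the whole board."""
    cells = rng.choice(board.size, size=n, replace=False)
    return board.flat[cells].mean() * board.size

def stratified_estimate(board, rng):
    """One cell drawn per stratum of a regular 8 x 10 grid; strata weighted by size."""
    rows = np.array_split(np.arange(board.shape[0]), 8)
    cols = np.array_split(np.arange(board.shape[1]), 10)
    total = 0.0
    for r in rows:
        for c in cols:
            total += board[rng.choice(r), rng.choice(c)] * (len(r) * len(c))
    return total

srs_err = [abs(srs_estimate(board, n_cells, rng) - true_total) for _ in range(2000)]
str_err = [abs(stratified_estimate(board, rng) - true_total) for _ in range(2000)]
print(f"mean |error|  random: {np.mean(srs_err):.1f}   stratified grid: {np.mean(str_err):.1f}")
```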

  12. A brief review: history to understand fundamentals of electrocardiography

    PubMed Central

    AlGhatrif, Majd; Lindsay, Joseph

    2012-01-01

    The last decade of the 19th century witnessed the rise of a new era in which physicians used technology along with classical history taking and physical examination for the diagnosis of heart disease. The introduction of chest x-rays and the electrocardiograph (electrocardiogram) provided objective information about the structure and function of the heart. In the first half of the 20th century, a number of innovative individuals set in motion a fascinating sequence of discoveries and inventions that led to the 12-lead electrocardiogram as we know it now. Electrocardiography, nowadays, is an essential part of the initial evaluation for patients presenting with cardiac complaints. Because it is a first-line diagnostic tool, health care providers at different levels of training and expertise frequently find it imperative to interpret electrocardiograms. It is likely that an understanding of the electrical basis of electrocardiograms would reduce the likelihood of error. An understanding of the disorders behind electrocardiographic phenomena could reduce the need for memorizing what may seem to be an endless list of patterns. In this article, we will review the important steps in the evolution of the electrocardiogram. As is the case in most human endeavors, an understanding of history enables one to deal effectively with the present. PMID:23882360

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labby, Z.

    Physicists are often expected to have a solid grounding in experimental design and statistical analysis, sometimes filling in when biostatisticians or other experts are not available for consultation. Unfortunately, graduate education on these topics is seldom emphasized and few opportunities for continuing education exist. Clinical physicists incorporate new technology and methods into their practice based on published literature. A poor understanding of experimental design and analysis could result in inappropriate use of new techniques. Clinical physicists also improve current practice through quality initiatives that require sound experimental design and analysis. Academic physicists with a poor understanding of design and analysis may produce ambiguous (or misleading) results. This can result in unnecessary rewrites, publication rejection, and experimental redesign (wasting time, money, and effort). This symposium will provide a practical review of error and uncertainty, common study designs, and statistical tests. Instruction will primarily focus on practical implementation through examples and answer questions such as: where would you typically apply the test/design and where is the test/design typically misapplied (i.e., common pitfalls)? An analysis of error and uncertainty will also be explored using biological studies and associated modeling as a specific use case. Learning Objectives: (1) Understand common experimental testing and clinical trial designs, what questions they can answer, and how to interpret the results; (2) Determine where specific statistical tests are appropriate and identify common pitfalls; (3) Understand how uncertainty and error are addressed in biological testing and associated biological modeling.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernal, José Luis; Cuesta, Antonio J.; Verde, Licia

    We perform an empirical consistency test of General Relativity/dark energy by disentangling expansion history and growth of structure constraints. We replace each late-universe parameter that describes the behavior of dark energy with two meta-parameters: one describing geometrical information in cosmological probes, and the other controlling the growth of structure. If the underlying model (a standard wCDM cosmology with General Relativity) is correct, that is, under the null hypothesis, the two meta-parameters coincide. If they do not, it could indicate a failure of the model or systematics in the data. We present a global analysis using state-of-the-art cosmological data sets which points in the direction that cosmic structures prefer weaker growth than that inferred from background probes. This result could signify inconsistencies of the model, the necessity of extensions to it, or the presence of systematic errors in the data. We examine all these possibilities. The fact that the result is mostly driven by a specific subset of galaxy cluster abundance data points to the need for a better understanding of this probe.

  15. Inverted initial conditions: Exploring the growth of cosmic structure and voids

    DOE PAGES

    Pontzen, Andrew; Roth, Nina; Peiris, Hiranya V.; ...

    2016-05-18

    We introduce and explore “paired” cosmological simulations. A pair consists of an A and B simulation with initial conditions related by the inversion δ_A(x, t_initial) = −δ_B(x, t_initial) (underdensities substituted for overdensities and vice versa). We argue that the technique is valuable for improving our understanding of cosmic structure formation. The A and B fields are by definition equally likely draws from ΛCDM initial conditions, and in the linear regime evolve identically up to the overall sign. As nonlinear evolution takes hold, a region that collapses to form a halo in simulation A will tend to expand to create a void in simulation B. Applications include (i) contrasting the growth of A-halos and B-voids to test excursion-set theories of structure formation, (ii) cross-correlating the density field of the A and B universes as a novel test for perturbation theory, and (iii) canceling error terms by averaging power spectra between the two boxes. Furthermore, generalizations of the method to more elaborate field transformations are suggested.
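
    A toy 1-D illustration of the pairing bookkeeping, under stated simplifying assumptions (white-noise initial conditions and a crude quadratic stand-in for nonlinear growth, not an N-body calculation): the B field is the sign-flipped A field, and averaging the two power spectra cancels the leading cross term, so the averaged spectra should fluctuate less between realizations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, boxes = 256, 200

def power_spectrum(field):
    """Unnormalized 1-D power spectrum |FFT|^2 / n (toy diagnostic only)."""
    return np.abs(np.fft.rfft(field)) ** 2 / field.size

def toy_nonlinear(delta):
    """Crude stand-in for nonlinear growth: a quadratic correction to the field."""
    return delta + 0.5 * (delta ** 2 - delta.var())

single, paired = [], []
for _ in range(boxes):
    delta_a = rng.normal(size=n)     # linear initial conditions (simulation A)
    delta_b = -delta_a               # inverted pair (simulation B)
    p_a = power_spectrum(toy_nonlinear(delta_a))
    p_b = power_spectrum(toy_nonlinear(delta_b))
    single.append(p_a)               # one box per realization
    paired.append(0.5 * (p_a + p_b)) # A/B average per realization

k = 5  # inspect one low-k bin
print("scatter, single boxes :", np.std([p[k] for p in single]))
print("scatter, A/B averaged :", np.std([p[k] for p in paired]))
```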

  16. Software reliability: Application of a reliability model to requirements error analysis

    NASA Technical Reports Server (NTRS)

    Logan, J.

    1980-01-01

    The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.

  17. The contributions of human factors on human error in Malaysia aviation maintenance industries

    NASA Astrophysics Data System (ADS)

    Padil, H.; Said, M. N.; Azizan, A.

    2018-05-01

    Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics combined with human factors can lead to various types of human-related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed and administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study, utilizing AMOS software. Results showed a significant relationship between human factors and human errors in the tested model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety, has provided guidelines for improving human factors performance relating to aviation maintenance activities, and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.

  18. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
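
    A minimal Monte Carlo sketch of the first point, with hypothetical parameters rather than the paper's design: when the independent variable carries measurement error, the OLS slope is attenuated, while a structural (Deming-type) fit using an assumed error-variance ratio recovers the true slope on average.

```python
import numpy as np

rng = np.random.default_rng(2)
true_slope, true_intercept = 2.0, 1.0
n, trials = 100, 2000
sx, sy = 0.8, 0.8            # error std devs in X and Y (ratio assumed known)
lam = (sy / sx) ** 2          # error-variance ratio used by the structural fit

def ols_slope(x, y):
    """Ordinary least-squares slope of y on observed x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def structural_slope(x, y, lam):
    """Deming / structural-analysis slope for error-variance ratio lam."""
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)

ols_est, sa_est = [], []
for _ in range(trials):
    xi = rng.uniform(0, 10, n)                    # true (latent) X values
    x = xi + rng.normal(0, sx, n)                 # observed X with error
    y = true_intercept + true_slope * xi + rng.normal(0, sy, n)
    ols_est.append(ols_slope(x, y))
    sa_est.append(structural_slope(x, y, lam))

print(f"true slope {true_slope}:  OLS mean {np.mean(ols_est):.3f}   structural mean {np.mean(sa_est):.3f}")
```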

  19. Filament winding technique, experiment and simulation analysis on tubular structure

    NASA Astrophysics Data System (ADS)

    Quanjin, Ma; Rejab, M. R. M.; Kaige, Jiang; Idris, M. S.; Harith, M. N.

    2018-04-01

    The filament winding process has emerged as one of the potential composite fabrication processes with lower costs. Filament-wound products include classic axisymmetric parts (pipes, rings, driveshafts, high-pressure vessels and storage tanks) and non-axisymmetric parts (prismatic non-round sections and pipe fittings). Because the 3-axis filament winding machine was designed with an inexpensive control system, a comparison between experiment and simulation on a tubular structure is necessary. The aim of this technical paper is to perform a dry winding experiment using the 3-axis filament winding machine and to simulate the winding process on the tubular structure using CADWIND software with 30°, 45° and 60° winding angles. The main result indicates that the 3-axis filament winding machine can produce tubular structures with good winding pattern performance at different winding angles. The developed 3-axis winding machine still has weaknesses compared with CADWIND simulation results for higher-axis winding machines in terms of winding pattern, turnaround impact, process error, thickness, friction impact, etc. In conclusion, improvements and recommendations for the 3-axis filament winding machine follow from these comparison results, which give an intuitive understanding of its limitations and characteristics.

  20. An automated curation procedure for addressing chemical errors and inconsistencies in public datasets used in QSAR modelling.

    PubMed

    Mansouri, K; Grulke, C M; Richard, A M; Judson, R S; Williams, A J

    2016-11-01

    The increasing availability of large collections of chemical structures and associated experimental data provides an opportunity to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experimental data. Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly available PHYSPROP physicochemical properties and environmental fate datasets. The workflow first assembles structure-identity pairs using up to four provided chemical identifiers, including chemical name, CASRNs, SMILES, and MolBlock. Problems detected included errors and mismatches in chemical structure formats, identifiers and various structure validation issues, including hypervalency and stereochemistry descriptions. Subsequently, a machine learning procedure was applied to evaluate the impact of this curation process. The performance of QSAR models built on only the highest-quality subset of the original dataset was compared with the larger curated and corrected dataset. The latter showed statistically improved predictive performance. The final workflow was used to curate the full list of PHYSPROP datasets, and is being made publicly available for further usage and integration by the scientific community.
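
    A minimal sketch of one curation step of this kind, using RDKit rather than the published KNIME workflow (the record identifiers and SMILES below are hypothetical): structures that fail sanitization (for example, hypervalent atoms) are flagged, and duplicate structures are detected by comparing canonical SMILES.

```python
from rdkit import Chem

records = [                       # hypothetical (identifier, SMILES) pairs
    ("chem-001", "CCO"),
    ("chem-002", "C1=CC=CC=C1"),
    ("chem-003", "F(C)(C)(C)C"),  # hypervalent fluorine: should be rejected
    ("chem-004", "OCC"),          # same structure as chem-001, written differently
]

seen = {}
for ident, smiles in records:
    mol = Chem.MolFromSmiles(smiles)   # returns None if parsing/sanitization fails
    if mol is None:
        print(f"{ident}: invalid or hypervalent structure, excluded")
        continue
    canonical = Chem.MolToSmiles(mol)  # canonical form used as a duplicate key
    if canonical in seen:
        print(f"{ident}: duplicate of {seen[canonical]} ({canonical})")
    else:
        seen[canonical] = ident
        print(f"{ident}: OK  {canonical}")
```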

  1. Prediction of protein tertiary structure from sequences using a very large back-propagation neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X.; Wilcox, G.L.

    1993-12-31

    We have implemented large scale back-propagation neural networks on a 544 node Connection Machine, CM-5, using the C language in MIMD mode. The program running on 512 processors performs backpropagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. The trained network yields predicted structures of some proteins on which it has not been trained given only their sequences. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins from CM-5 time) converge to solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.

  2. Improved volumetric measurement of brain structure with a distortion correction procedure using an ADNI phantom.

    PubMed

    Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi

    2013-06-01

    Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors suggest a commercially available phantom-based distortion correction method that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the authors' method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of the authors' method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices significantly improved (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than a distortion correction method based on a description of spherical harmonics in improving the distortion of root-mean-square errors (p < 0.001 and 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The authors evaluated the authors' method for phantom images in terms of two geometrical values and for human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
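
    A minimal sketch of phantom-based polynomial distortion correction under simplified assumptions (a synthetic 2-D fiducial grid and a toy distortion, not the authors' phantom software or vendor data): a polynomial mapping from detected to known fiducial positions is fitted by least squares and then used to correct the coordinates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical phantom: known fiducial positions on a regular grid (mm).
xt, yt = np.meshgrid(np.linspace(-100, 100, 11), np.linspace(-100, 100, 11))
true_pts = np.column_stack([xt.ravel(), yt.ravel()])

def distort(p):
    """Toy geometric distortion standing in for gradient nonlinearity."""
    x, y = p[:, 0], p[:, 1]
    r2 = (x ** 2 + y ** 2) / 1e4
    return np.column_stack([x * (1 + 0.02 * r2), y * (1 + 0.03 * r2)])

obs_pts = distort(true_pts) + rng.normal(0, 0.05, true_pts.shape)  # detected fiducials

def poly_terms(p, order=3):
    """Design matrix of 2-D polynomial terms x^i * y^j with i + j <= order."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([x ** i * y ** j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

A = poly_terms(obs_pts)
coef, *_ = np.linalg.lstsq(A, true_pts, rcond=None)   # warp: observed -> true
corrected = A @ coef

rms_before = np.sqrt(np.mean(np.sum((obs_pts - true_pts) ** 2, axis=1)))
rms_after = np.sqrt(np.mean(np.sum((corrected - true_pts) ** 2, axis=1)))
print(f"RMS fiducial error: {rms_before:.2f} mm -> {rms_after:.2f} mm")
```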

  3. Research in structures, structural dynamics and materials, 1989

    NASA Technical Reports Server (NTRS)

    Hunter, William F. (Compiler); Noor, Ahmed K. (Compiler)

    1989-01-01

    Topics addressed include: composite plates; buckling predictions; missile launch tube modeling; structural/control systems design; optimization of nonlinear R/C frames; error analysis for semi-analytic displacement; crack acoustic emission; and structural dynamics.

  4. Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation

    NASA Astrophysics Data System (ADS)

    Zhou, XueFei

    2018-04-01

    With the development of computer technology, the applications of machine learning have become more and more extensive, and machine learning provides endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs). The CNN is one of the most common algorithms in image recognition, and understanding its theory and structure is important for every scholar interested in this field. CNNs are mainly used in computer recognition tasks, especially voice and text recognition, among other applications. A CNN utilizes a hierarchical structure with different layers to accelerate computing speed. In addition, the greatest features of CNNs are weight sharing and dimension reduction, which together give CNNs their high effectiveness and efficiency in computing speed and error rate. With the help of other learning algorithms, CNNs can be used in several machine learning scenarios, especially deep learning. After a general introduction to the background and to the CNN itself, this paper focuses on summarizing how Gradient Descent and Backpropagation work and how they contribute to the high performance of CNNs. Some practical applications are also discussed in the following parts. The last section presents the conclusion and some perspectives on future work.
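
    A minimal numpy sketch of the two ingredients named here, gradient descent and backpropagation, on a toy fully connected network (XOR data; the layer sizes and learning rate are hypothetical). In a CNN the same chain-rule updates apply, with the dense weight matrices replaced by shared convolution kernels.

```python
import numpy as np

rng = np.random.default_rng(4)

# XOR training data for a tiny two-layer network with sigmoid activations.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 3))   # should approach [0, 1, 1, 0]
```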

  5. Structure and Management of an Engineering Senior Design Course.

    PubMed

    Tanaka, Martin L; Fischer, Kenneth J

    2016-07-01

    The design of products and processes is an important area in engineering. Students in engineering schools learn fundamental principles in their courses but often lack an opportunity to apply these methods to real-world problems until their senior year. This article describes important elements that should be incorporated into a senior capstone design course. It includes a description of the general principles used in engineering design and a discussion of why students often have difficulty with application and revert to trial and error methods. The structure of a properly designed capstone course is dissected and its individual components are evaluated. Major components include assessing resources, identifying projects, establishing teams, understanding requirements, developing conceptual designs, creating detailed designs, building prototypes, testing performance, and final presentations. In addition to the course design, team management and effective mentoring are critical to success. This article includes suggested guidelines and tips for effective design team leadership, attention to detail, investment of time, and managing project scope. Furthermore, the importance of understanding business culture, displaying professionalism, and considerations of different types of senior projects is discussed. Through a well-designed course and proper mentoring, students will learn to apply their engineering skills and gain basic business knowledge that will prepare them for entry-level positions in industry.

  6. Thermal effects on electronic properties of CO/Pt(111) in water.

    PubMed

    Duan, Sai; Xu, Xin; Luo, Yi; Hermansson, Kersti; Tian, Zhong-Qun

    2013-08-28

    The structure and adsorption energy of carbon monoxide molecules adsorbed on Pt(111) surfaces with various CO coverages in water, as well as the work function of the whole systems at a room temperature of 298 K, were studied by means of a hybrid method that combines classical molecular dynamics and density functional theory. We found that when the coverage of CO is around half a monolayer, i.e., 50%, there is no obvious peak of the oxygen density profile in the first water layer. This result reveals that, in this case, the external force applied to water molecules from the CO/Pt(111) surface almost vanishes as a result of the competitive adsorption between CO and water molecules on the Pt(111) surface. This coverage is also the critical point of the wetting/non-wetting conditions for the CO/Pt(111) surface. The averaged work function and adsorption energy from the current simulations are consistent with those of previous studies, which shows that thermal averaging is required for direct comparisons between theoretical predictions and experimental measurements. Meanwhile, the statistical behaviors of the work function and adsorption energy at room temperature have also been calculated. The standard errors of the calculated work function for the water-CO/Pt(111) interfaces are around 0.6 eV at all CO coverages, while for the calculated adsorption energy the standard error decreases from 1.29 to 0.05 eV as the CO coverage increases from 4% to 100%. Moreover, the critical points for these electronic properties are the same as those for the wetting/non-wetting conditions. These findings provide a better understanding of the interfacial structure under specific adsorption conditions, which has important implications for the structure of electric double layers and therefore offers a useful perspective for the design of electrochemical catalysts.

  7. Investigating mode errors on automated flight decks: illustrating the problem-driven, cumulative, and interdisciplinary nature of human factors research.

    PubMed

    Sarter, Nadine

    2008-06-01

    The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.

  8. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  9. Applied Strategies for Improving Patient Safety: A Comprehensive Process To Improve Care in Rural and Frontier Communities

    ERIC Educational Resources Information Center

    Westfall, John M.; Fernald, Douglas H.; Staton, Elizabeth W.; VanVorst, Rebecca; West, David; Pace, Wilson D.

    2004-01-01

    Medical errors and patient safety have gained increasing attention throughout all areas of medical care. Understanding patient safety in rural settings is crucial for improving care in rural communities. To describe a system to decrease medical errors and improve care in rural and frontier primary care offices. Applied Strategies for Improving…

  10. Individual differences in political ideology are effects of adaptive error management.

    PubMed

    Petersen, Michael Bang; Aarøe, Lene

    2014-06-01

    We apply error management theory to the analysis of individual differences in the negativity bias and political ideology. Using principles from evolutionary psychology, we propose a coherent theoretical framework for understanding (1) why individuals differ in their political ideology and (2) the conditions under which these individual differences influence and fail to influence the political choices people make.

  11. Semantic Memory Is Key to Binding Phonology: Converging Evidence from Immediate Serial Recall in Semantic Dementia and Healthy Participants

    ERIC Educational Resources Information Center

    Hoffman, Paul; Jefferies, Elizabeth; Ehsan, Sheeba; Jones, Roy W.; Lambon Ralph, Matthew A.

    2009-01-01

    Patients with semantic dementia (SD) make numerous phoneme migration errors when recalling lists of words they no longer fully understand, suggesting that word meaning makes a critical contribution to phoneme binding in verbal short-term memory. Healthy individuals make errors that appear similar when recalling lists of nonwords, which also lack…

  12. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  13. Achievement Error Differences of Students with Reading versus Math Disorders

    ERIC Educational Resources Information Center

    Avitia, Maria; DeBiase, Emily; Pagirsky, Matthew; Root, Melissa M.; Howell, Meiko; Pan, Xingyu; Knupp, Tawnya; Liu, Xiaochen

    2017-01-01

    The purpose of this study was to understand and compare the types of errors students with a specific learning disability in reading and/or writing (SLD-R/W) and those with a specific learning disability in math (SLD-M) made in the areas of reading, writing, language, and mathematics. Clinical samples were selected from the norming population of…

  14. Using Edit Distance to Analyse Errors in a Natural Language to Logic Translation Corpus

    ERIC Educational Resources Information Center

    Barker-Plummer, Dave; Dale, Robert; Cox, Richard; Romanczuk, Alex

    2012-01-01

    We have assembled a large corpus of student submissions to an automatic grading system, where the subject matter involves the translation of natural language sentences into propositional logic. Of the 2.3 million translation instances in the corpus, 286,000 (approximately 12%) are categorized as being in error. We want to understand the nature of…
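
    A minimal sketch of the underlying computation, with hypothetical example strings (the corpus itself is not reproduced here): the Levenshtein edit distance counts the insertions, deletions, and substitutions needed to turn a student's translation into the reference formula.

```python
def edit_distance(a, b):
    """Dynamic-programming Levenshtein distance between sequences a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (or match)
        prev = curr
    return prev[-1]

# Comparing a hypothetical student translation against a reference formula:
print(edit_distance("P & (Q | R)", "P & (Q | ~R)"))   # -> 1 (one missing negation)
```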

  15. Beyond the Mask: Analysis of Error Patterns on the KTEA-3 for Students with Giftedness and Learning Disabilities

    ERIC Educational Resources Information Center

    Ottone-Cross, Karen L.; Dulong-Langley, Susan; Root, Melissa M.; Gelbar, Nicholas; Bray, Melissa A.; Luria, Sarah R.; Choi, Dowon; Kaufman, James C.; Courville, Troy; Pan, Xingyu

    2017-01-01

    An understanding of the strengths, weaknesses, and achievement profiles of students with giftedness and learning disabilities (G&LD) is needed to address their asynchronous development. This study examines the subtests and error factors in the Kaufman Test of Educational Achievement--Third Edition (KTEA-3) for strength and weakness patterns of…

  16. On P values and effect modification.

    PubMed

    Mayer, Martin

    2017-12-01

    A crucial element of evidence-based healthcare is the sound understanding and use of statistics. As part of instilling sound statistical knowledge and practice, it seems useful to highlight instances of unsound statistical reasoning or practice, not merely in captious or vitriolic spirit, but rather, to use such error as a springboard for edification by giving tangibility to the concepts at hand and highlighting the importance of avoiding such error. This article aims to provide an instructive overview of two key statistical concepts: effect modification and P values. A recent article published in the Journal of the American College of Cardiology on side effects related to statin therapy offers a notable example of errors in understanding effect modification and P values, and although not so critical as to entirely invalidate the article, the errors still demand considerable scrutiny and correction. In doing so, this article serves as an instructive overview of the statistical concepts of effect modification and P values. Judicious handling of statistics is imperative to avoid muddying their utility. This article contributes to the body of literature aiming to improve the use of statistics, which in turn will help facilitate evidence appraisal, synthesis, translation, and application.
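
    A minimal sketch, on simulated data (hypothetical variable names; statsmodels assumed available), of the distinction at issue: comparing subgroup P values can suggest effect modification where none exists, whereas testing the interaction term addresses the question directly.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 160
g = rng.integers(0, 2, n)                                  # subgroup indicator
x = rng.normal(size=n)                                     # exposure
y = 1.0 + 0.25 * x + rng.normal(scale=1.0, size=n)         # same true effect in both groups
df = pd.DataFrame({"x": x, "g": g, "y": y})

# Unsound reasoning: one subgroup may look "significant" and the other not,
# purely by chance, even though the true effect is identical in both.
for grp, sub in df.groupby("g"):
    fit = smf.ols("y ~ x", data=sub).fit()
    print(f"group {grp}: p(x) = {fit.pvalues['x']:.3f}")

# Sounder approach: test the interaction term directly.
inter = smf.ols("y ~ x * C(g)", data=df).fit()
inter_term = [t for t in inter.pvalues.index if ":" in t][0]
print("interaction p-value:", round(inter.pvalues[inter_term], 3))
```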

  17. Medication errors reported to the National Medication Error Reporting System in Malaysia: a 4-year retrospective review (2009 to 2012).

    PubMed

    Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M

    2016-12-01

    Reporting and analysing the data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken. Descriptive statistics method was applied. A total of 17,357 MEs reported were reviewed. The majority of errors were from public-funded hospitals. Near misses were classified in 86.3 % of the errors. The majority of errors (98.1 %) had no harmful effects on the patients. Prescribing contributed to more than three-quarters of the overall errors (76.1 %). Pharmacists detected and reported the majority of errors (92.1 %). Cases of erroneous dosage or strength of medicine (30.75 %) were the leading type of error, whilst cardiovascular (25.4 %) was the most common category of drug found. MERS provides rich information on the characteristics of reported MEs. Low contribution to reporting from healthcare facilities other than government hospitals and non-pharmacists requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both public and private sectors needs to be formulated and strengthened. Preventive measures to minimise MEs should be directed to improve prescribing competency among the fallible prescribers identified.

  18. Social Security Is Fair to All Generations: Demystifying the Trust Fund, Solvency, and the Promise to Younger Americans.

    PubMed

    Buchanan, Neil H

    The Social Security system has come under attack for having illegitimately transferred wealth from younger generations to the Baby Boom generation. This attack is unfounded, because it fails to understand how the system was altered in order to force the Baby Boomers to finance their own benefits in retirement. Any challenges that Social Security now faces are not caused by the pay-as-you-go structure of the system but by Baby Boomers' other policy errors, especially the emergence of extreme economic inequality since 1980. Attempting to fix the wrong problem all but guarantees a solution that will make matters worse. Generational justice and distributive justice go hand in hand.

  19. Biostatistical analysis of quantitative immunofluorescence microscopy images.

    PubMed

    Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C

    2016-12-01

    Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
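
    A minimal sketch of one way to respect that hierarchical structure, on simulated data with hypothetical variable names (not the authors' workflow): images nested within animals are analysed with a random intercept per animal rather than pooled as if independent, which avoids pseudo-replication.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
animals, images_per_animal = 12, 8
rows = []
for a in range(animals):
    treated = a % 2                        # alternate treatment assignment by animal
    animal_effect = rng.normal(0, 0.5)     # biological variation between animals
    for _ in range(images_per_animal):
        intensity = 10 + 1.0 * treated + animal_effect + rng.normal(0, 0.3)
        rows.append({"animal": a, "treated": treated, "intensity": intensity})
df = pd.DataFrame(rows)

# Random-intercept model: fixed effect of treatment, animal as the grouping factor.
model = smf.mixedlm("intensity ~ treated", df, groups=df["animal"]).fit()
print(model.summary())
```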

  20. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.
