Sample records for mode human error

  1. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  2. Investigating mode errors on automated flight decks: illustrating the problem-driven, cumulative, and interdisciplinary nature of human factors research.

    PubMed

    Sarter, Nadine

    2008-06-01

    The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.

  3. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

    Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used to describe the task. The likelihood of occurrence, detection, and correction of the human error is identified, as is the severity of its effect. From the likelihood and the severity, the risk of potential harm is identified and compared with a risk threshold to determine the appropriateness of corrective measures.
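
    The screening logic this patent describes can be sketched compactly: verbs are mapped to candidate errors, each error is scored, and the risk is compared with a threshold. The Python fragment below is a minimal illustration under that reading, not the inventors' implementation; the verb-to-error library, the scales, and the threshold are invented placeholders.

      # Hypothetical sketch of HF PFMEA risk screening: verbs describing a
      # human activity map to candidate human errors, each error is scored
      # for likelihood and severity, and risk is compared with a threshold.

      # Assumed verb -> potential human errors lookup (illustrative only).
      ERROR_LIBRARY = {
          "connect": ["connect wrong port", "omit connection"],
          "read":    ["misread value", "skip reading"],
          "lift":    ["drop item", "lift wrong item"],
      }

      # Assumed 1-5 ordinal scales (illustrative only).
      LIKELIHOOD = {"connect wrong port": 3, "omit connection": 2,
                    "misread value": 4, "skip reading": 2,
                    "drop item": 2, "lift wrong item": 1}
      SEVERITY   = {"connect wrong port": 5, "omit connection": 4,
                    "misread value": 3, "skip reading": 2,
                    "drop item": 4, "lift wrong item": 3}

      RISK_THRESHOLD = 10  # assumed cutoff for requiring corrective action

      def screen_task(verbs):
          """Return (error, risk, needs_action) for errors implied by verbs."""
          findings = []
          for verb in verbs:
              for error in ERROR_LIBRARY.get(verb, []):
                  risk = LIKELIHOOD[error] * SEVERITY[error]
                  findings.append((error, risk, risk >= RISK_THRESHOLD))
          return findings

      for error, risk, needs_action in screen_task(["connect", "read"]):
          flag = "CORRECTIVE ACTION" if needs_action else "accept"
          print(f"{error:20s} risk={risk:2d} -> {flag}")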

  4. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: describe mission; define system; identify human-machine interfaces; list human actions; identify potential errors; identify factors that affect error; determine likelihood of error; determine potential effects of errors; evaluate risk; generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  5. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
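
    A minimal sketch of a fuzzy rule-based RPN of the kind the abstract describes, assuming triangular membership functions, a tiny Mamdani rule base, and centroid defuzzification; the membership functions and rules below are invented for illustration, not those of the paper.

      import numpy as np

      def trimf(x, a, b, c):
          # Triangular membership; degenerate shoulders (a==b, b==c) allowed.
          left = np.ones_like(x) if a == b else np.clip((x - a) / (b - a), 0, 1)
          right = np.ones_like(x) if b == c else np.clip((c - x) / (c - b), 0, 1)
          return np.minimum(left, right)

      x = np.linspace(0, 10, 201)      # universe for O, D, S scores
      r = np.linspace(0, 1000, 2001)   # universe for the RPN

      LOW, MED, HIGH = trimf(x, 0, 0, 5), trimf(x, 0, 5, 10), trimf(x, 5, 10, 10)
      OUT = {"low": trimf(r, 0, 0, 250),
             "med": trimf(r, 100, 400, 700),
             "high": trimf(r, 500, 1000, 1000)}

      def fuzzy_rpn(o, d, s):
          m = lambda mf, v: float(np.interp(v, x, mf))
          strengths = {
              "low":  min(m(LOW, o), m(LOW, d), m(LOW, s)),
              "med":  min(m(MED, o), m(MED, d), m(MED, s)),
              # Invented rule: high severity alone ranks a mode high.
              "high": max(min(m(HIGH, o), m(HIGH, d), m(HIGH, s)), m(HIGH, s)),
          }
          agg = np.zeros_like(r)
          for name, w in strengths.items():
              agg = np.maximum(agg, np.minimum(w, OUT[name]))    # Mamdani min/max
          return float(np.sum(agg * r) / (np.sum(agg) + 1e-12))  # centroid

      print(f"component failure (O=3, D=4, S=2): RPN ~ {fuzzy_rpn(3, 4, 2):.0f}")
      print(f"human error       (O=5, D=6, S=9): RPN ~ {fuzzy_rpn(5, 6, 9):.0f}")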

  6. A Framework for Modeling Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.

  7. Prediction of human errors by maladaptive changes in event-related brain networks.

    PubMed

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus

    2008-04-22

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.

  8. Prediction of human errors by maladaptive changes in event-related brain networks

    PubMed Central

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123

  9. Combining task analysis and fault tree analysis for accident and incident analysis: a case study from Bulgaria.

    PubMed

    Doytchev, Doytchin E; Szwillus, Gerd

    2009-11-01

    Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence, we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the aforementioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.
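
    The fault-tree half of such an analysis reduces to propagating basic-event probabilities through AND/OR gates. A toy Python sketch follows, with a hypothetical gate structure and invented probabilities loosely themed on the flooding incident; it is not the case study's actual tree.

      # Hypothetical fault tree: the top event "basement flooded" is reached
      # through OR/AND gates over independent basic events.

      # Assumed basic-event probabilities (invented for the sketch).
      P = {
          "drain_valve_left_open": 0.02,   # human error
          "level_alarm_ignored":   0.05,   # human error
          "level_sensor_failed":   0.01,   # component failure
          "sump_pump_failed":      0.03,   # component failure
      }

      def p_and(*ps):  # AND gate over independent events
          out = 1.0
          for p in ps:
              out *= p
          return out

      def p_or(*ps):   # OR gate over independent events: 1 - prod(1 - p)
          out = 1.0
          for p in ps:
              out *= (1.0 - p)
          return 1.0 - out

      # Alarm chain fails if the sensor fails OR the operator ignores it.
      alarm_chain_fails = p_or(P["level_sensor_failed"], P["level_alarm_ignored"])

      # Flooding: valve left open AND (alarm chain fails OR pump fails).
      top_event = p_and(P["drain_valve_left_open"],
                        p_or(alarm_chain_fails, P["sump_pump_failed"]))

      print(f"P(basement flooded) ~ {top_event:.2e}")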

  10. Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States

    DTIC Science & Technology

    2015-11-16

    a degraded visual environment, workload during the landing task begins to approach the limits of a human pilot's capability. It is a similarly... [remainder of excerpt is figure residue: Figure 2, "Approach Trajectory," showing the flight path with ±4 ft, ±8 ft, and ±12 ft landing error bounds; Figure 5, "Open loop system generation," for the heave and yaw axes with the same landing error bounds]

  11. "First, know thyself": cognition and error in medicine.

    PubMed

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take their own mistakes into account, and the extraordinary technological progress observed in recent decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical mode, which tends to be largely automatic and fast-reactive, and a slow or analytical mode, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  12. 49 CFR Appendix C to Part 236 - Safety Assurance Criteria and Processes

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... system (all its elements including hardware and software) must be designed to assure safe operation with... unsafe errors in the software due to human error in the software specification, design, or coding phases... (hardware or software, or both) are used in combination to ensure safety. If a common mode failure exists...

  13. Testing boundary conditions for the conjunction fallacy: effects of response mode, conceptual focus, and problem type.

    PubMed

    Wedell, Douglas H; Moro, Rodrigo

    2008-04-01

    Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.

  14. The application of robotics to microlaryngeal laser surgery.

    PubMed

    Buckmire, Robert A; Wong, Yu-Tung; Deal, Allison M

    2015-06-01

    To evaluate the performance of human subjects, using a prototype robotic micromanipulator controller in a simulated, microlaryngeal operative setting. Observational cross-sectional study. Twenty-two human subjects with varying degrees of laser experience performed CO2 laser surgical tasks within a simulated microlaryngeal operative setting using an industry standard manual micromanipulator (MMM) and a prototype robotic micromanipulator controller (RMC). Accuracy, repeatability, and ablation consistency measures were obtained for each human subject across both conditions and for the preprogrammed RMC device. Using the standard MMM, surgeons with >10 previous laser cases performed better than subjects with fewer cases on measures of error percentage and cumulative error (P = .045 and .03, respectively). No significant differences in performance were observed between subjects using the RMC device. In the programmed (P/A) mode, the RMC performed equivalently or superiorly to experienced human subjects on accuracy and repeatability measures, and nearly an order of magnitude better on measures of ablation consistency. The programmed RMC performed significantly better for repetition error when compared to human subjects with <100 previous laser cases (P = .04). Experienced laser surgeons perform better than novice surgeons on tasks of accuracy and repeatability using the MMM device but roughly equivalently using the novel RMC. Operated in the P/A mode, the RMC performs equivalently or superiorly to experienced laser surgeons using the industry standard MMM for all measured parameters, and delivers an ablation consistency nearly an order of magnitude better than human laser operators. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  15. Independent Oscillatory Patterns Determine Performance Fluctuations in Children with Attention Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Yordanova, Juliana; Albrecht, Bjorn; Uebel, Henrik; Kirov, Roumen; Banaschewski, Tobias; Rothenberger, Aribert; Kolev, Vasil

    2011-01-01

    The maintenance of stable goal-directed behaviour is a hallmark of conscious executive control in humans. Notably, both correct and error human actions may have a subconscious activation-based determination. One possible source of subconscious interference may be the default mode network that, in contrast to attentional network, manifests…

  16. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    PubMed

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

    This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  17. In Vivo Characterization of a Wireless Telemetry Module for a Capsule Endoscopy System Utilizing a Conformal Antenna.

    PubMed

    Faerber, Julia; Cummins, Gerard; Pavuluri, Sumanth Kumar; Record, Paul; Rodriguez, Adrian R Ayastuy; Lay, Holly S; McPhillips, Rachael; Cox, Benjamin F; Connor, Ciaran; Gregson, Rachael; Clutton, Richard Eddie; Khan, Sadeque Reza; Cochran, Sandy; Desmulliez, Marc P Y

    2018-02-01

    This paper describes the design, fabrication, packaging, and performance characterization of a conformal helix antenna created on the outside of a capsule endoscope designed to operate at a carrier frequency of 433 MHz within human tissue. Wireless data transfer was established between the integrated capsule system and an external receiver. The telemetry system was tested within a tissue phantom and in vivo porcine models. Two different types of transmission modes were tested. The first mode, replicating normal operating conditions, used data packets at a steady power level of 0 dBm, while the capsule was being withdrawn at a steady rate from the small intestine. The second mode, replicating the worst-case clinical scenario of capsule retention within the small bowel, sent data with stepwise increasing power levels of -10, 0, 6, and 10 dBm, with the capsule fixed in position. The temperature of the tissue surrounding the external antenna was monitored at all times using thermistors embedded within the capsule shell to observe potential safety issues. The recorded data showed, for both modes of operation, low-error transmission, with a 10⁻³ packet error rate and a 10⁻⁵ bit error rate, and no temperature increase of the tissue according to IEEE standards.
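
    The quoted figures are simple ratios over the transmitted stream. A short Python sketch of how such rates are computed, with invented data standing in for the telemetry logs:

      # BER is the fraction of bit errors over bits sent; PER is the fraction
      # of packets containing at least one uncorrected error. For random bit
      # errors, PER ~ 1 - (1 - BER)**n for n-bit packets.

      def bit_error_rate(sent_bits, received_bits):
          assert len(sent_bits) == len(received_bits)
          errors = sum(a != b for a, b in zip(sent_bits, received_bits))
          return errors / len(sent_bits)

      def packet_error_rate(sent_packets, received_packets):
          bad = sum(s != r for s, r in zip(sent_packets, received_packets))
          return bad / len(sent_packets)

      # Example: 10^5 bits with a single flipped bit -> BER = 10^-5.
      sent = [0] * 100_000
      recv = sent.copy()
      recv[42] ^= 1
      print(bit_error_rate(sent, recv))   # 1e-05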

  18. Columbus safety and reliability

    NASA Astrophysics Data System (ADS)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  19. Application of auditory signals to the operation of an agricultural vehicle: results of pilot testing.

    PubMed

    Karimi, D; Mondor, T A; Mann, D D

    2008-01-01

    The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers were used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.

  20. Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus).

    PubMed

    Lyn, Heidi

    2007-10-01

    Error analysis has been used in humans to detect implicit representations and categories in language use. The present study utilizes the same technique to report on mental representations and categories in symbol use from two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half year old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely to be items that looked like, sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are a result of spontaneous responding rather than specific training and do not solely depend upon the sample mode (e.g. auditory similarity errors are not universally more frequent with an English sample, nor are visual similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos do not make errors based on syntactical confusions (e.g. confusing semantically unrelated nouns), suggesting that they may not separate syntactical and semantic information. These data suggest that apes spontaneously create a complex, hierarchical, web of representations when exposed to a symbol system.

  1. Risk analysis by FMEA as an element of analytical validation.

    PubMed

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs on authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D), and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers: RPN = O × D × S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
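
    The RPN bookkeeping is easy to reproduce. A Python sketch with invented scores; the failure modes below are illustrative placeholders, not the paper's actual worksheet.

      # RPN = O x D x S; modes are ranked, and after corrective action the
      # improvement index is taken here as RPN_before / RPN_after (the paper
      # reports indices up to 5.0). All scores are invented for illustration.

      failure_modes = {
          # name: (occurrence O, detection D, severity S) on 1-10 scales
          "sample presented to probe incorrectly": (6, 7, 5),   # human error
          "wrong reference spectrum selected":     (4, 8, 7),   # human error
          "instrument drift not noticed":          (3, 6, 5),
      }

      rpn = {name: o * d * s for name, (o, d, s) in failure_modes.items()}
      for name, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
          print(f"RPN {score:4d}  {name}")

      # Re-scoring after a (hypothetical) corrective action:
      name = "sample presented to probe incorrectly"
      rpn_after = 42
      print(f"improvement index: {rpn[name] / rpn_after:.1f}")  # 5.0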

  2. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of bilateral teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, telesurgery needs satisfactorily high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With the synchronization errors constrained, high performance, such as high convergence speed, small overshoot, and an arbitrarily small predefined residual synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors converging to zero as time goes to infinity, can be achieved with error-constrained control. It is clear that finite-time convergence is more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed for a teleoperation system with position error constrained in this paper. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are also presented to show the effectiveness of the proposed method.

  3. Application of failure mode and effect analysis in an assisted reproduction technology laboratory.

    PubMed

    Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola

    2016-08-01

    Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The critical failure modes in sample processing were improved by 50% in the RPN by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, have been evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  4. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles.

    PubMed

    Eom, Hwisoo; Lee, Sang Hun

    2015-06-12

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model.
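
    One way to make "compatibility of the machine and interface models" concrete is to step both models through the same event sequences and flag states where they disagree on the current mode. The sketch below does this for two toy finite-state models; the ACC modes, events, and the check itself are simplifications assumed for illustration, not the authors' methodology.

      # Explore the product of a machine model and an interface (user's)
      # model over all event sequences, flagging mode disagreements, i.e.
      # potential mode confusion. Models below are invented toy examples.
      from collections import deque

      EVENTS = ["set", "cancel", "cut_in", "clear"]

      # (mode, event) -> next mode
      MACHINE = {
          ("off",    "set"):    "speed",
          ("speed",  "cut_in"): "follow",   # radar picks up a lead vehicle
          ("follow", "clear"):  "speed",
          ("speed",  "cancel"): "off",
          ("follow", "cancel"): "off",
      }
      # Simplified interface model: the driver never learns about "follow".
      INTERFACE = {
          ("off",   "set"):    "speed",
          ("speed", "cancel"): "off",
      }

      def confusions(machine, interface, start="off", max_depth=6):
          found, seen = [], {(start, start)}
          queue = deque([((start, start), [])])
          while queue:
              (m, i), trace = queue.popleft()
              for e in EVENTS:
                  nm = machine.get((m, e), m)    # unchanged if not handled
                  ni = interface.get((i, e), i)
                  state, ntrace = (nm, ni), trace + [e]
                  if nm != ni:
                      found.append((ntrace, nm, ni))
                  if state not in seen and len(ntrace) < max_depth:
                      seen.add(state)
                      queue.append((state, ntrace))
          return found

      for trace, actual, believed in confusions(MACHINE, INTERFACE)[:3]:
          print(f"after {trace}: machine in '{actual}', driver believes '{believed}'")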

  5. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles

    PubMed Central

    Eom, Hwisoo; Lee, Sang Hun

    2015-01-01

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model. PMID:26076406

  6. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.

  7. Spatiotemporal Filtering Using Principal Component Analysis and Karhunen-Loeve Expansion Approaches for Regional GPS Network Analysis

    NASA Technical Reports Server (NTRS)

    Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.

    2006-01-01

    Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
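
    The PCA step of such filtering is compact to express. A Python sketch on synthetic data, treating the leading principal component of the stacked residuals as the common mode error; the real analysis, including the KLE weighting, is more involved.

      # Stack residual time series into a (days x stations) matrix, take the
      # leading principal component as the common mode error (CME), and
      # subtract its rank-k reconstruction. Synthetic data stand in for
      # actual SCIGN series.
      import numpy as np

      rng = np.random.default_rng(0)
      n_days, n_sta = 500, 30
      cme = 0.003 * rng.standard_normal(n_days)          # common mode, meters
      X = cme[:, None] + 0.001 * rng.standard_normal((n_days, n_sta))

      X0 = X - X.mean(axis=0)                            # remove station means
      U, s, Vt = np.linalg.svd(X0, full_matrices=False)  # PCA via SVD

      k = 1                                              # modes treated as CME
      common_mode = U[:, :k] * s[:k] @ Vt[:k, :]         # rank-k reconstruction
      filtered = X0 - common_mode

      print("RMS before filtering:", X0.std())
      print("RMS after filtering: ", filtered.std())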

  8. Measurements of the toroidal torque balance of error field penetration locked modes

    DOE PAGES

    Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...

    2015-01-05

    Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including interaction with the intrinsic error field. The error field in these low density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields, and a viscous torque in the electron diamagnetic drift direction which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.
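
    Schematically, the torque balance described here can be written as below. This is a LaTeX sketch of our paraphrase of the abstract; the authors' exact expression and notation may differ.

      % Schematic torque balance: the resonant n=1 error-field torque on the
      % locked mode balances a viscous torque, directed along the electron
      % diamagnetic drift, that scales as the square of the island field.
      \begin{equation}
        T_{\mathrm{EM}}(b_{\mathrm{err}},\,\phi) + c\, b_{\mathrm{isl}}^{2} = 0,
      \end{equation}
      % b_err: residual n=1 error field, phi: toroidal phase of the mode,
      % b_isl: perturbed field due to the island, c: empirical coefficient
      % fitted to the measurements.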

  9. Measurement Error Calibration in Mixed-Mode Sample Surveys

    ERIC Educational Resources Information Center

    Buelens, Bart; van den Brakel, Jan A.

    2015-01-01

    Mixed-mode surveys are known to be susceptible to mode-dependent selection and measurement effects, collectively referred to as mode effects. The use of different data collection modes within the same survey may reduce selectivity of the overall response but is characterized by measurement errors differing across modes. Inference in sample surveys…

  10. SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, J; Kim, Y

    Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy are defined for the retinal specialist, radiation oncologist, nurse, and medical physicist. The entire procedure was examined carefully. First, major processes were identified; then details for each major process were followed. Results: Seventy-one total potential modes were identified. The eight major processes (with the number of detailed modes in parentheses) are patient consultation (2), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3). Half of the total modes (36) are related to the physicist, although the physicist is not involved in steps such as the actual procedures of suturing and removing the plaque. Conclusion: Failure modes can arise not only from physicist-related procedures such as treatment planning and source activity calibration, but also from more clinical procedures performed by other medical staff. Improving the accuracy of communication for non-physicist-related clinical procedures could be one approach to preventing human errors, and a more rigorous physics double check would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effect analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.

  11. Design and fabrication of a freeform phase plate for high-order ocular aberration correction

    NASA Astrophysics Data System (ADS)

    Yi, Allen Y.; Raasch, Thomas W.

    2005-11-01

    In recent years it has become possible to measure and in some instances to correct the high-order aberrations of human eyes. We have investigated the correction of wavefront error of human eyes by using phase plates designed to compensate for that error. The wavefront aberrations of the four eyes of two subjects were experimentally determined, and compensating phase plates were machined with an ultraprecision diamond-turning machine equipped with four independent axes. A slow-tool servo freeform trajectory was developed for the machine tool path. The machined phase-correction plates were measured and compared with the original design values to validate the process. The position of the phase-plate relative to the pupil is discussed. The practical utility of this mode of aberration correction was investigated with visual acuity testing. The results are consistent with the potential benefit of aberration correction but also underscore the critical positioning requirements of this mode of aberration correction. This process is described in detail from optical measurements, through machining process design and development, to final results.

  12. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. So, with the aim of improving intensive care unit (ICU) reliability in hospitals, this research tries to identify and analyze ICU process failure modes through a systematic approach to errors. Methods: In this descriptive research, data were gathered qualitatively by observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis was quantitative, based on each failure's Risk Priority Number (RPN) from the Failure Modes and Effects Analysis (FMEA) method used. In addition, some causes of failures were analyzed with the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. Then, at the 90% reliability level (RPN ≥ 100), a total of 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were analyzed by ECM. Conclusions: Applying the modified PFMEA to improve process reliability in two selected ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize, and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions, and even participate in improving the process without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a healthcare perspective. PMID:27157162

  13. Characterization of identification errors and uses in localization of poor modal correlation

    NASA Astrophysics Data System (ADS)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

    While modal identification is a mature subject, very few studies address the characterization of errors associated with the components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion (MAC) is used to pair modes and to localize the sensors at which discrepancies occur. Poor correlation is usually attributed to modeling errors, but identification errors clearly occur as well. In particular, with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result, individual validation of each measurement cannot be performed manually in a reasonable time frame, and a notable fraction of measurements is expected to be fairly noisy, leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, a diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, the product of error and level, is then shown to be relevant for detecting poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error indicator, it avoids keeping erroneous modal components. Applied after removal of poor modal components, it provides spatial maps of poor correlation, which help localize mode shape correlation errors and thus prepare the selection of model changes in updating procedures.
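
    The MAC and the sensor-removal ranking behind MACCo are straightforward to sketch. In the Python fragment below, synthetic mode shapes stand in for test data, and one deliberately corrupted component surfaces as the top removal candidate; the published algorithm and indicators are richer than this.

      # MAC between two mode shapes, and a MACCo-style ranking: sort sensors
      # by how much the test/analysis MAC improves when each is removed.
      import numpy as np

      def mac(a, b):
          """Modal Assurance Criterion between two mode shape vectors."""
          return abs(a @ b) ** 2 / ((a @ a) * (b @ b))

      rng = np.random.default_rng(1)
      phi_fem = rng.standard_normal(12)        # analysis shape, 12 sensors
      phi_test = phi_fem + 0.05 * rng.standard_normal(12)
      phi_test[7] += 2.0                       # one badly identified component

      base = mac(phi_test, phi_fem)
      impact = []
      for k in range(len(phi_test)):
          keep = np.arange(len(phi_test)) != k
          impact.append((mac(phi_test[keep], phi_fem[keep]) - base, k))

      for gain, k in sorted(impact, reverse=True)[:3]:
          print(f"removing sensor {k:2d} raises MAC by {gain:+.3f}")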

  14. Bridging Intuitive and Analytical Thinking: Four Looks at the 2-Glass Puzzle

    ERIC Educational Resources Information Center

    Ejersbo, Lisser Rye; Leron, Uri; Arcavi, Abraham

    2014-01-01

    The observation that the human mind operates in two distinct thinking modes--intuitive and analytical--has occupied psychological and educational researchers for several decades now. Much of this research has focused on the explanatory power of intuitive thinking as a source of errors and misconceptions, but in this article, in contrast, we view…

  15. Dialysis Facility Safety: Processes and Opportunities.

    PubMed

    Garrick, Renee; Morey, Rishikesh

    2015-01-01

    Unintentional human errors are the source of most safety breaches in complex, high-risk environments. The environment of dialysis care is extremely complex. Dialysis patients have unique and changing physiology, and the processes required for their routine care involve numerous open-ended interfaces between providers and an assortment of technologically advanced equipment. Communication errors, both within the dialysis facility and during care transitions, and lapses in compliance with policies and procedures are frequent areas of safety risk. Some events, such as air emboli and needle dislodgments, occur infrequently but are serious risks. Other adverse events include medication errors, patient falls, catheter and access-related infections, access infiltrations, and prolonged bleeding. A robust safety system should evaluate how multiple, sequential errors might align to cause harm. Systems of care can be improved by sharing the results of root cause analyses and "good catches." Failure mode and effects analyses can be used to proactively identify and mitigate areas of highest risk, and methods drawn from cognitive psychology, simulation training, and human factors engineering can be used to advance facility safety. © 2015 Wiley Periodicals, Inc.

  16. Metering error quantification under voltage and current waveform distortion

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran

    2017-09-01

    With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion results in metering errors in smart meters. Because of its negative effects on metering accuracy and fairness, the combined energy metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a method for quantifying metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a method for quantifying metering accuracy error is also proposed. By analyzing the mode error and accuracy error, a comprehensive error analysis method suitable for renewable energy and nonlinear loads is presented. The proposed method has been validated by simulation.
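
    The notion of a metering mode error can be illustrated by metering the same distorted waveforms in two modes. The sketch below, with an invented waveform and idealized meters, compares wideband active power against a fundamental-only measurement; it illustrates the concept only, not the paper's method.

      # Meter a distorted voltage/current pair (a) from wideband samples and
      # (b) from the fundamental DFT bin only, and report the mode error.
      import numpy as np

      f0, fs, T = 50.0, 10_000.0, 1.0               # grid, sampling, window
      t = np.arange(0, T, 1 / fs)

      # Invented distortion: 3rd harmonic in voltage; 3rd and 5th in current.
      v = (230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
           + 0.05 * 230 * np.sqrt(2) * np.sin(2 * np.pi * 3 * f0 * t))
      i = (10 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t - 0.2)
           + 3 * np.sqrt(2) * np.sin(2 * np.pi * 3 * f0 * t)
           + 2 * np.sqrt(2) * np.sin(2 * np.pi * 5 * f0 * t))

      p_wideband = np.mean(v * i)                   # true active power

      def phasor(sig):                              # complex amplitude at f0
          return 2 / len(sig) * np.sum(sig * np.exp(-2j * np.pi * f0 * t))

      p_fundamental = 0.5 * np.real(phasor(v) * np.conj(phasor(i)))

      print(f"wideband:    {p_wideband:9.1f} W")
      print(f"fundamental: {p_fundamental:9.1f} W")
      print(f"mode error:  {100 * (p_fundamental - p_wideband) / p_wideband:+.2f}%")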

  17. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    DOE PAGES

    Haye, R. J. La; Paz-Soldan, C.; Strait, E. J.

    2015-01-23

    DIII-D experiments show that fully penetrated resonant n=1 error field locked modes in Ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n=2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption.

  18. Interface evaluation for soft robotic manipulators

    NASA Astrophysics Data System (ADS)

    Moore, Kristin S.; Rodes, William M.; Csencsits, Matthew A.; Kwoka, Martha J.; Gomer, Joshua A.; Pagano, Christopher C.

    2006-05-01

    The results of two usability experiments evaluating an interface for the operation of OctArm, a biologically inspired robotic arm modeled after an octopus tentacle, are reported. Due to the many degrees-of-freedom (DOF) for the operator to control, such 'continuum' robotic limbs provide unique challenges for human operators because they do not map intuitively. Two modes have been developed to control the arm and reduce the DOF under the explicit direction of the operator. In coupled velocity (CV) mode, a joystick controls changes in arm curvature. In end-effector (EE) mode, a joystick controls the arm by moving the position of an endpoint along a straight line. In Experiment 1, participants used the two modes to grasp objects placed at different locations in a virtual reality modeling language (VRML) environment. Objective measures of performance and subjective preferences were recorded. Results revealed lower grasp times and a subjective preference for the CV mode. Recommendations for improving the interface included providing additional feedback and implementation of an error recovery function. In Experiment 2, only the CV mode was tested with improved training of participants and several changes to the interface. The error recovery function was implemented, allowing participants to reverse through previously attained positions. The mean time to complete the trials in the second usability test was reduced by more than 4 minutes compared with the first usability test, confirming the interface changes improved performance. The results of these tests will be incorporated into future versions of the arm and improve future usability tests.

  19. Quantification of airport community noise impact in terms of noise levels, population density, and human subjective response

    NASA Technical Reports Server (NTRS)

    Deloach, R.

    1981-01-01

    The Fraction Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. Here, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse square law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.
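
    The fractional-impact arithmetic is a weighted head count: people exposed in each noise band times a level-dependent weighting, summed over bands. A Python sketch follows; the weighting curve and population counts are invented placeholders, not ALAMO's actual dose-response tables.

      # NRC fractional-impact style bookkeeping with invented inputs.

      def weight(ldn_db):
          """Assumed dose-response weighting: 0 below 55 dB, growing above."""
          return max(0.0, 0.02 * (ldn_db - 55.0))

      # (day-night average sound level in dB, people exposed) per contour band
      bands = [(55, 120_000), (60, 45_000), (65, 12_000), (70, 2_500)]

      impact = sum(people * weight(level) for level, people in bands)
      print(f"noise impact index: {impact:,.0f} person-equivalents")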

  20. A global perspective of the limits of prediction skill based on the ECMWF ensemble

    NASA Astrophysics Data System (ADS)

    Zagar, Nedjeljka

    2016-04-01

    This talk presents a new model of global forecast error growth applied to the forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts obtained by the decomposition of the wind and geopotential fields in the normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes for every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis in each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in balanced modes in synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with the lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and the slower exponential error growth later on. The asymptotic values of forecast errors are independent of the exponential growth rate. It is found that the errors due to unbalanced dynamics saturate in around 10 days, while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.

  1. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a tracking error observer is produced. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable, and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearing system verify the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
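
    A reaching-law sliding mode loop with a disturbance estimate can be sketched in a few lines. The fragment below uses a first-order plant and a deliberately crude estimator; it illustrates the ingredients named in the abstract, not the paper's observer-based design.

      # Reaching-law SMC on x_dot = u + d with an unknown bounded disturbance
      # d; the plant, the gains, and the estimator are invented for the sketch.
      import numpy as np

      dt, N = 1e-3, 5000
      x, x_ref = 0.0, 1.0
      d_hat = 0.0                      # disturbance estimate
      k_reach, eta, k_obs = 5.0, 0.5, 20.0

      xs = []
      for n in range(N):
          t = n * dt
          d = 0.4 * np.sin(2 * np.pi * t)           # unknown disturbance
          e = x - x_ref
          s = e                                     # sliding surface (1st order)
          # Reaching law s_dot = -k*s - eta*sign(s), compensated by d_hat:
          u = -k_reach * s - eta * np.sign(s) - d_hat
          x_dot = u + d                             # plant dynamics
          x += dt * x_dot
          d_hat += dt * k_obs * (x_dot - u - d_hat) # crude low-pass estimator
                                                    # (assumes x_dot measurable;
                                                    # for the sketch only)
          xs.append(x)

      print(f"tracking error after {N * dt:.1f} s: {xs[-1] - x_ref:+.4f}")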

  2. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation, combined with rotation of the substrate about an axis perpendicular to the substrate, results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in the implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with the drive-mode position.

  3. The Effect of Varying Jaw-elevator Muscle Forces on a Finite Element Model of a Human Cranium.

    PubMed

    Toro-Ibacache, Viviana; O'Higgins, Paul

    2016-07-01

    Finite element analyses simulating masticatory system loading are increasingly undertaken in primates, hominin fossils, and modern humans. Simplifications of models and loadcases are often required given the limits of data and technology. One such area of uncertainty concerns the forces applied to cranial models and the sensitivity of results to variations in these forces. We assessed the effect of varying force magnitudes among the jaw-elevator muscles applied to a finite element model of a human cranium. The model was loaded to simulate incisor and molar bites using different combinations of muscle forces. Symmetric, asymmetric, homogeneous, and heterogeneous muscle activations were simulated by scaling maximal forces. The effects were compared with respect to strain distributions (i.e., modes of deformation) and magnitudes, bite forces, and temporomandibular joint (TMJ) reaction forces. Predicted modes of deformation, strain magnitudes, and bite forces were directly proportional to the total applied muscle force and relatively insensitive to the degree of heterogeneity of muscle activation. However, TMJ reaction forces and mandibular fossa strains decrease and increase on the balancing and working sides according to the degree of asymmetry of loading. These results indicate that when modes, rather than magnitudes, of facial deformation are of interest, errors in applied muscle forces have limited effects. However, the degree of asymmetric loading does impact TMJ reaction forces and mandibular fossa strains. These findings are of particular interest in relation to studies of skeletal and fossil material, where muscle data are not available and estimation of muscle forces from skeletal proxies is prone to error. Anat Rec, 299:828-839, 2016. © 2016 Wiley Periodicals, Inc.

  4. A Formal Methods Approach to the Analysis of Mode Confusion

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.

    2004-01-01

    The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data, with a focus on the most recent history. Pilot error is the most commonly cited cause of fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness, including the Bangalore A320 crash (2/14/90), the Strasbourg A320 crash (1/20/92), the Mulhouse-Habsheim A320 crash (6/26/88), and the Toulouse A330 crash (6/30/94). These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs, and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed. This paper explores how formal models and analyses can be used to help eliminate mode confusion from flight deck designs and at the same time increase our confidence in the safety of the implementation. The paper is based upon interim results from a new project involving NASA Langley and Rockwell Collins in applying formal methods to a realistic business jet Flight Guidance System (FGS).
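
    One property such formal analyses check for is the "indirect" mode transition, a mode change not triggered by an explicit pilot action, which is a known ingredient of mode confusion. The toy transition table and scan below are invented for illustration and are far simpler than the FGS models the paper describes.

        # Scan a (hypothetical) autopilot mode-transition table for indirect
        # transitions, i.e., mode changes whose trigger is not a pilot action.
        PILOT_ACTIONS = {"press_vs", "press_alt_hold", "dial_altitude"}

        transitions = [
            ("VS",       "press_alt_hold",   "ALT_HOLD"),
            ("VS",       "altitude_capture", "ALT_CAP"),   # system-initiated
            ("ALT_CAP",  "altitude_reached", "ALT_HOLD"),  # system-initiated
            ("ALT_HOLD", "press_vs",         "VS"),
        ]

        for src, trigger, dst in transitions:
            if trigger not in PILOT_ACTIONS:
                print(f"indirect transition: {src} -> {dst} on '{trigger}'")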

  5. Behavioural variation in 172 small-scale societies indicates that social learning is the main mode of human adaptation

    PubMed Central

    Mathew, Sarah; Perreault, Charles

    2015-01-01

    The behavioural variation among human societies is vast and unmatched in the animal world. It is unclear whether this variation is due to variation in the ecological environment or to differences in cultural traditions. Underlying this debate is a more fundamental question: is the richness of humans’ behavioural repertoire due to non-cultural mechanisms, such as causal reasoning, inventiveness, reaction norms, trial-and-error learning and evoked culture, or is it due to the population-level dynamics of cultural transmission? Here, we measure the relative contribution of environment and cultural history in explaining the behavioural variation of 172 Native American tribes at the time of European contact. We find that the effect of cultural history is typically larger than that of environment. Behaviours also persist over millennia within cultural lineages. This indicates that human behaviour is not predominantly determined by single-generation adaptive responses, contra theories that emphasize non-cultural mechanisms as determinants of human behaviour. Rather, the main mode of human adaptation is social learning mechanisms that operate over multiple generations. PMID:26085589

  6. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
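
    For a concrete instance of this family, the smallest binomial code, commonly quoted in this setting, encodes one logical qubit in superpositions of Fock states weighted by binomial coefficients (shown here as a LaTeX fragment):

        % Smallest binomial code, protecting against a single boson-loss error.
        \[
          |\bar{0}\rangle = \frac{|0\rangle + |4\rangle}{\sqrt{2}}, \qquad
          |\bar{1}\rangle = |2\rangle .
        \]

    Both code words have even boson-number parity and the same mean boson number, so a single loss event flips the parity and can be detected by the generalized number-parity measurement mentioned above without disturbing the encoded information.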

  7. Team play with a powerful and independent agent: a full-mission simulation study.

    PubMed

    Sarter, N B; Woods, D D

    2000-01-01

    One major problem with pilot-automation interaction on modern flight decks is a lack of mode awareness; that is, a lack of knowledge and understanding of the current and future status and behavior of the automation. A lack of mode awareness is not simply a pilot problem; rather, it is a symptom of a coordination breakdown between humans and machines. Recent changes in automation design can therefore be expected to have an impact on the nature of problems related to mode awareness. To examine how new automation properties might affect pilot-automation coordination, we performed a full-mission simulation study on one of the most advanced automated aircraft, the Airbus A-320. The results of this work indicate that mode errors and "automation surprises" still occur on these advanced aircraft. However, there appear to be more opportunities for delayed or missing interventions with undesirable system activities, possibly because of higher system autonomy and coupling.

  8. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    The use of automated systems in airliners has increased fuel efficiency, added extra capabilities, and enhanced safety and reliability, as well as improved passenger comfort, since their introduction in the late 1980s. However, the original automation benefits, including reductions in flight crew workload, human error, and training requirements, were not achieved as expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased-complexity-and-reliance-on-automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review, followed by the construction of a framework comprising high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on the flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand, and training requirements, along with their interactions. Besides flight crew deficiencies, automation system failures and anomalies of avionic systems are also incorporated. The resultant model helps simulate the emergence of automation-related issues in modern airliners from a top-down, generalized approach, and it serves as a platform to evaluate NASA-developed technologies.
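
    To make the BBN idea concrete, the fragment below builds a miniature two-cause network with the open-source pgmpy library rather than the Hugin software used in the paper; the node names, structure, and probability tables are invented for illustration.

        # Miniature Bayesian Belief Network: two causal factors feeding an
        # automation-related flight anomaly. All numbers are illustrative.
        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        model = BayesianNetwork([("SkillDegradation", "Anomaly"),
                                 ("ModeConfusion", "Anomaly")])

        model.add_cpds(
            TabularCPD("SkillDegradation", 2, [[0.8], [0.2]]),
            TabularCPD("ModeConfusion", 2, [[0.9], [0.1]]),
            TabularCPD("Anomaly", 2,
                       [[0.99, 0.70, 0.80, 0.30],   # P(no anomaly | parents)
                        [0.01, 0.30, 0.20, 0.70]],  # P(anomaly | parents)
                       evidence=["SkillDegradation", "ModeConfusion"],
                       evidence_card=[2, 2]))

        # Posterior probability of an anomaly given observed mode confusion.
        print(VariableElimination(model).query(["Anomaly"],
                                               evidence={"ModeConfusion": 1}))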

  9. Modeling of the Mode S tracking system in support of aircraft safety research

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1982-01-01

    This report collects, documents, and models data relating the expected accuracies of tracking variables to be obtained from the FAA's Mode S Secondary Surveillance Radar system. The data include measured range and azimuth to the tracked aircraft plus the encoded altitude transmitted via the Mode S data link. A brief summary is made of the Mode S system status and its potential applications for aircraft safety improvement including accident analysis. FAA flight test results are presented demonstrating Mode S range and azimuth accuracy and error characteristics and comparing Mode S to the current ATCRBS radar tracking system. Data are also presented that describe the expected accuracy and error characteristics of encoded altitude. These data are used to formulate mathematical error models of the Mode S variables and encoded altitude. A brief analytical assessment is made of the real-time tracking accuracy available from using Mode S and how it could be improved with down-linked velocity.
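
    As a rough illustration of how such radar error models combine, the snippet below converts assumed 1-sigma range and azimuth errors into a horizontal position error for a single track point; the numbers are placeholders, not the FAA flight-test values.

        # Cross-range error grows linearly with range; combine the orthogonal
        # range and cross-range components into a total 1-sigma position error.
        import math

        range_m = 60_000.0                  # slant range to the aircraft
        sigma_range_m = 30.0                # assumed 1-sigma range error
        sigma_az_rad = math.radians(0.06)   # assumed 1-sigma azimuth error

        sigma_cross_m = range_m * sigma_az_rad
        sigma_pos_m = math.hypot(sigma_range_m, sigma_cross_m)
        print(f"cross-range {sigma_cross_m:.0f} m, total {sigma_pos_m:.0f} m (1-sigma)")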

  10. The SACADA database for human reliability and human performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y. James Chang; Dennis Bley; Lawrence Criscione

    2014-05-01

    Lack of appropriate and sufficient human performance data has been identified as a key factor affecting human reliability analysis (HRA) quality, especially in the estimation of human error probability (HEP). The Scenario Authoring, Characterization, and Debriefing Application (SACADA) database was developed by the U.S. Nuclear Regulatory Commission (NRC) to address this data need. An agreement between NRC and the South Texas Project Nuclear Operating Company (STPNOC) was established to support the SACADA development, with the aim of making the SACADA tool suitable for implementation in nuclear power plants' operator training programs to collect operator performance information. The collected data would support the STPNOC's operator training program and be shared with the NRC for improving HRA quality. This paper discusses the SACADA data taxonomy, the theoretical foundation, the prospective data to be generated from the SACADA raw data to inform human reliability and human performance, and the considerations on the use of simulator data for HRA. Each SACADA data point consists of two information segments: context and performance results. Context is a characterization of the performance challenges to task success. The performance results are the results of performing the task. The data taxonomy uses a macrocognitive functions model for the framework. At a high level, information is classified according to the macrocognitive functions of detecting the plant abnormality, understanding the abnormality, deciding the response plan, executing the response plan, and team-related aspects (i.e., communication, teamwork, and supervision). The data are expected to be useful for analyzing the relations between context, error modes, and error causes in human performance.

  11. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in the standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two new operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances; however, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  12. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending (MMB) test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. The test uses a unidirectional composite specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created while still following the standard. A method of limiting the error incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.

  13. SU-F-T-243: Major Risks in Radiotherapy. A Review Based On Risk Analysis Literature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López-Tarjuelo, J; Guasp-Tortajada, M; Iglesias-Montenegro, N

    Purpose: We present a literature review of risk analyses in radiotherapy to highlight the most frequently reported risks and to facilitate the spread of this valuable information, so that professionals can be aware of these major threats before performing their own studies. Methods: We considered studies with at least an estimate of the probability of occurrence of an adverse event (O) and its associated severity (S). They cover external beam radiotherapy, brachytherapy, intraoperative radiotherapy, and stereotactic techniques. We selected only the works containing a detailed ranked series of elements or failure modes and focused on the first fully reported quartile as much as possible. Afterward, we sorted the risk elements according to a regular radiotherapy procedure, so that the resulting groups were cited in several works and could be ranked in this way. Results: 29 references published between 2007 and February 2016 were studied. The publication trend has been generally rising. The most frequently employed analysis has been failure mode and effects analysis (FMEA). Among the references, we selected 20 works listing 258 ranked risk elements. They were sorted into 31 groups appearing in at least two different works. 11 groups appeared in at least 5 references, and 5 groups did so in 7 or more papers. These last groups of risks were choosing another set of images or another plan for planning or treating, errors related to contours, errors in patient positioning for treatment, human mistakes when programming treatments, and planning errors. Conclusion: There is a sufficient amount and variety of references for identifying which failure modes or elements should be addressed in a radiotherapy department before attempting a specific analysis. FMEA prevailed, but other studies such as "risk matrix" or "occurrence × severity" analyses can also guide professionals' efforts. Risks associated with human actions rank very high; therefore, such actions should be automated or at least peer-reviewed.

  14. Exponential error reduction in pretransfusion testing with automation.

    PubMed

    South, Susan F; Casina, Tony S; Li, Lily

    2012-08-01

    Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
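
    The risk arithmetic behind the RPN figures quoted above is simple to reproduce. The sketch below scores a few invented pretransfusion process steps on the standard occurrence (O), severity (S), and detectability (D) scales and ranks them by RPN = O × S × D; the steps and scores are illustrative, not the study's data.

        # Rank (hypothetical) failure modes by risk priority number,
        # RPN = occurrence x severity x detectability.
        failure_modes = {
            "tube mislabeled at manual draw":       (4, 9, 6),
            "manual transcription of G&S result":   (5, 8, 7),
            "analyzer reads barcode automatically": (2, 8, 2),
        }

        ranked = sorted(failure_modes.items(),
                        key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2],
                        reverse=True)
        for step, (o, s, d) in ranked:
            print(f"RPN {o * s * d:4d}  {step}")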

  15. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.

  16. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis.

    PubMed

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-06-01

    Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.

  17. Behavioural variation in 172 small-scale societies indicates that social learning is the main mode of human adaptation.

    PubMed

    Mathew, Sarah; Perreault, Charles

    2015-07-07

    The behavioural variation among human societies is vast and unmatched in the animal world. It is unclear whether this variation is due to variation in the ecological environment or to differences in cultural traditions. Underlying this debate is a more fundamental question: is the richness of humans' behavioural repertoire due to non-cultural mechanisms, such as causal reasoning, inventiveness, reaction norms, trial-and-error learning and evoked culture, or is it due to the population-level dynamics of cultural transmission? Here, we measure the relative contribution of environment and cultural history in explaining the behavioural variation of 172 Native American tribes at the time of European contact. We find that the effect of cultural history is typically larger than that of environment. Behaviours also persist over millennia within cultural lineages. This indicates that human behaviour is not predominantly determined by single-generation adaptive responses, contra theories that emphasize non-cultural mechanisms as determinants of human behaviour. Rather, the main mode of human adaptation is social learning mechanisms that operate over multiple generations. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump

    NASA Astrophysics Data System (ADS)

    Gontcharov, G. A.

    2017-08-01

    Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars, or by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance R in kpc: 0.18R^2 mas. Allowance for this error significantly reduces the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.

  19. Mitigating leakage errors due to cavity modes in a superconducting quantum computer

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.

    2018-07-01

    A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
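
    A quick order-of-magnitude check of this mitigation idea can be made with the ideal rectangular-cavity resonance formula, f = (c/2)·sqrt((l/a)² + (m/b)² + (n/d)²). The dimensions below are invented stand-ins for a qubit package, not those of the enclosure in the paper.

        # Lowest box-mode frequency before and after pins partition the
        # enclosure into smaller effective cavities (ideal, lossless walls).
        import math

        C = 299_792_458.0  # speed of light in vacuum, m/s

        def f_mode(a, b, d, l, m, n):
            """Resonance (Hz) of an ideal rectangular cavity, sides in meters."""
            return (C / 2.0) * math.sqrt((l / a) ** 2 + (m / b) ** 2 + (n / d) ** 2)

        full_box = f_mode(0.030, 0.030, 0.003, 1, 1, 0)  # 30 x 30 x 3 mm box
        pinned = f_mode(0.010, 0.010, 0.003, 1, 1, 0)    # ~1/3 pitch after pinning
        print(f"lowest mode: {full_box / 1e9:.1f} GHz -> {pinned / 1e9:.1f} GHz")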

  20. Modal Correction Method For Dynamically Induced Errors In Wind-Tunnel Model Attitude Measurements

    NASA Technical Reports Server (NTRS)

    Buehrle, R. D.; Young, C. P., Jr.

    1995-01-01

    This paper describes a method for correcting the dynamically induced bias errors in wind tunnel model attitude measurements using measured modal properties of the model system. At NASA Langley Research Center, the predominant instrumentation used to measure model attitude is a servo-accelerometer device that senses the model attitude with respect to the local vertical. Under smooth wind tunnel operating conditions, this inertial device can measure the model attitude with an accuracy of 0.01 degree. During wind tunnel tests in which the model is responding at high dynamic amplitudes, the inertial device also senses the centrifugal acceleration associated with model vibration. This centrifugal acceleration results in a bias error in the model attitude measurement. A study of the response of a cantilevered model system to a simulated dynamic environment shows that significant bias error in the model attitude measurement can occur and that it is vibration mode and amplitude dependent. For each vibration mode contributing to the bias error, the error is estimated from the measured modal properties and the tangential accelerations at the model attitude device. Linear superposition is used to combine the bias estimates for individual modes to determine the overall bias error as a function of time. The modal correction model predicts the bias error to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment.

  1. Human Factors Process Task Analysis Liquid Oxygen Pump Acceptance Test Procedure for the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.

    2002-01-01

    A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.

  2. Effects of model error on control of large flexible space antenna with comparisons of decoupled and linear quadratic regulator control procedures

    NASA Technical Reports Server (NTRS)

    Hamer, H. A.; Johnson, K. G.

    1986-01-01

    An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.

  3. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies in the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. The method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists of initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, with a velocity vector orientation that satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted the AI method. In terms of the spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than those for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast sometimes yields very good scores and sometimes very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on the Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  4. The role of the emergency medical dispatch centre (EMDC) and prehospital emergency care safety: results from an incident report (IR) system.

    PubMed

    Mortaro, Alberto; Pascu, Diana; Zerman, Tamara; Vallaperta, Enrico; Schönsberg, Alberto; Tardivo, Stefano; Pancheri, Serena; Romano, Gabriele; Moretti, Francesca

    2015-07-01

    The role of the emergency medical dispatch centre (EMDC) is essential to ensure coordinated and safe prehospital care. The aim of this study was to implement an incident report (IR) system in prehospital emergency care management with a view to detecting errors occurring in this setting and guiding the implementation of safety improvement initiatives. An ad hoc IR form for the prehospital setting was developed and implemented within the EMDC of Verona. The form included six phases (from the emergency call to hospital admission) with the relevant list of potential error modes (30 items). This descriptive observational study considered the results from 268 consecutive days between February and November 2010. During the study period, 161 error modes were detected. The majority of these errors occurred in the resource allocation and timing phase (34.2%) and in the dispatch phase (31.0%). Most of the errors were due to human factors (77.6%), and almost half of them were classified as either moderate (27.9%) or severe (19.9%). These results guided the implementation of specific corrective actions, such as the adoption of a more efficient Medical Priority Dispatch System and the development of educational initiatives targeted at both EMDC staff and the population. Despite the intrinsic limits of IR methodology, results suggest how the implementation of an IR system dedicated to the emergency prehospital setting can act as a major driver for the development of a "learning organization" and improve both efficacy and safety of first aid care.

  5. Cross-mode bioelectrical impedance analysis in a standing position for estimating fat-free mass validated against dual-energy x-ray absorptiometry.

    PubMed

    Huang, Ai-Chun; Chen, Yu-Yawn; Chuang, Chih-Lin; Chiang, Li-Ming; Lu, Hsueh-Kuan; Lin, Hung-Chi; Chen, Kuen-Tsann; Hsiao, An-Chi; Hsieh, Kuen-Chang

    2015-11-01

    Bioelectrical impedance analysis (BIA) is commonly used to assess body composition. Cross-mode (left hand to right foot, Z(CR)) BIA presumably uses the longest current path in the human body, which may generate better results when estimating fat-free mass (FFM). We compared the cross-mode with the hand-to-foot mode (right hand to right foot, Z(HF)) using dual-energy x-ray absorptiometry (DXA) as the reference. We hypothesized that when comparing anthropometric parameters using stepwise regression analysis, the impedance value from the cross-mode analysis would have better prediction accuracy than that from the hand-to-foot mode analysis. We studied 264 men and 232 women (mean ages, 32.19 ± 14.95 and 34.51 ± 14.96 years, respectively; mean body mass indexes, 24.54 ± 3.74 and 23.44 ± 4.61 kg/m2, respectively). The DXA-measured FFMs in men and women were 58.85 ± 8.15 and 40.48 ± 5.64 kg, respectively. Multiple stepwise linear regression analyses were performed to construct sex-specific FFM equations. The correlations of FFM measured by DXA vs. FFM from hand-to-foot mode and estimated FFM by cross-mode were 0.85 and 0.86 in women, with standard errors of estimate of 2.96 and 2.92 kg, respectively. In men, they were 0.91 and 0.91, with standard errors of the estimates of 3.34 and 3.48 kg, respectively. Bland-Altman plots showed limits of agreement of -6.78 to 6.78 kg for FFM from hand-to-foot mode and -7.06 to 7.06 kg for estimated FFM by cross-mode for men, and -5.91 to 5.91 and -5.84 to 5.84 kg, respectively, for women. Paired t tests showed no significant differences between the 2 modes (P > .05). Hence, cross-mode BIA appears to represent a reasonable and practical application for assessing FFM in Chinese populations. Copyright © 2015 Elsevier Inc. All rights reserved.
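
    In the same spirit as the regressions described above, the sketch below fits a simple FFM prediction equation from the impedance index (height²/Z) and body weight, and reports the standard error of the estimate; the synthetic data stand in for the DXA-measured values, and the coefficients are not those of the paper.

        # Fit FFM ~ impedance index + weight by ordinary least squares and
        # report the standard error of the estimate (SEE). Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        height_cm = rng.normal(170, 8, n)
        weight_kg = rng.normal(70, 12, n)
        z_cross = rng.normal(500, 60, n)       # cross-mode impedance, ohms
        index = height_cm ** 2 / z_cross       # impedance index predictor
        ffm_dxa = 0.6 * index + 0.2 * weight_kg + 5 + rng.normal(0, 2.5, n)

        X = np.column_stack([index, weight_kg, np.ones(n)])
        coef, *_ = np.linalg.lstsq(X, ffm_dxa, rcond=None)
        resid = ffm_dxa - X @ coef
        see = np.sqrt(np.sum(resid ** 2) / (n - X.shape[1]))
        print(f"FFM = {coef[0]:.3f}*Ht^2/Z + {coef[1]:.3f}*Wt + {coef[2]:.2f}  (SEE = {see:.2f} kg)")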

  6. Locked-mode avoidance and recovery without momentum input

    NASA Astrophysics Data System (ADS)

    Delgado-Aparicio, L.; Rice, J. E.; Wolfe, S.; Cziegler, I.; Gao, C.; Granetz, R.; Wukitch, S.; Terry, J.; Greenwald, M.; Sugiyama, L.; Hubbard, A.; Hughes, J.; Marmar, E.; Phillips, P.; Rowan, W.

    2015-11-01

    Error-field-induced locked modes (LMs) have been studied in Alcator C-Mod at ITER-Bϕ, without NBI fueling and momentum input. Delay of the mode onset and locked-mode recovery have been successfully obtained without external momentum input using Ion Cyclotron Resonance Heating (ICRH). The use of external heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW, which demonstrates the existence of a power threshold to "unlock" the mode; in the presence of an error field, the L-mode discharge can transition into H-mode only when PICRH > 2 MW and at high densities, also avoiding the density pump-out. The effects of ion heating on unlocking the core plasma may be due to ICRH-induced flows in the plasma boundary, or to modifications of plasma profiles that change the underlying turbulence. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT, DE-FG03-96ER-54373 at the University of Texas at Austin, and DE-AC02-09CH11466 at PPPL.

  7. Error field detection in DIII-D by magnetic steering of locked modes

    DOE PAGES

    Shiraki, Daisuke; La Haye, Robert J.; Logan, Nikolas C.; ...

    2014-02-20

    Optimal correction coil currents for the n = 1 intrinsic error field of the DIII-D tokamak are inferred by applying a rotating external magnetic perturbation to steer the phase of a saturated locked mode with poloidal/toroidal mode number m/n = 2/1. The error field is detected non-disruptively in a single discharge, based on the toroidal torque balance of the resonant surface, which is assumed to be dominated by the balance of resonant electromagnetic torques. This is equivalent to the island being locked at all times to the resonant 2/1 component of the total of the applied and intrinsic error fields, such that the deviation of the locked mode phase from the applied field phase depends on the existing error field. The optimal set of correction coil currents is determined to be those currents which best cancel the torque from the error field, based on fitting of the torque balance model. The toroidal electromagnetic torques are calculated from experimental data using a simplified approach incorporating realistic DIII-D geometry, and including the effect of the plasma response on island torque balance based on the ideal plasma response to external fields. This method of error field detection is demonstrated in DIII-D discharges, and the results are compared with those based on the onset of low-density locked modes in ohmic plasmas. Furthermore, this magnetic steering technique presents an efficient approach to error field detection and is a promising method for ITER, particularly during initial operation when the lack of auxiliary heating systems makes established techniques based on rotation or plasma amplification unsuitable.

  8. Safety Analysis of FMS/CTAS Interactions During Aircraft Arrivals

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.

    1998-01-01

    This grant funded research on human-computer interaction design and analysis techniques, using future ATC environments as a testbed. The basic approach was to model the nominal behavior of both the automated and human procedures and then to apply safety analysis techniques to these models. Our previous modeling language, RSML, had been used to specify the system requirements for TCAS II for the FAA. Using the lessons learned from this experience, we designed a new modeling language that (among other things) incorporates features to assist in designing less error-prone human-computer interactions and interfaces and in detecting potential HCI problems, such as mode confusion. The new language, SpecTRM-RL, uses "intent" abstractions, based on Rasmussen's abstraction hierarchy, and includes both informal (English and graphical) specifications and formal, executable models for specifying various aspects of the system. One of the goals for our language was to highlight the system modes and mode changes to assist in identifying the potential for mode confusion. Three published papers resulted from this research. The first builds on the work of Degani on mode confusion to identify aspects of the system design that could lead to potential hazards. We defined and modeled modes differently than Degani and also defined design criteria for SpecTRM-RL models. Our design criteria include the Degani criteria but extend them to include more potential problems. In a second paper, Leveson and Palmer showed how the criteria for indirect mode transitions could be applied to a mode confusion problem found in several ASRS reports for the MD-88. In addition, we defined a visual task modeling language that can be used by system designers to model human-computer interaction. The visual models can be translated into SpecTRM-RL models, and then the SpecTRM-RL suite of analysis tools can be used to perform formal and informal safety analyses on the task model in isolation or integrated with the rest of the modeled system. We had hoped to be able to apply these modeling languages and analysis tools to a TAP air/ground trajectory negotiation scenario, but the development of the tools took more time than we anticipated.

  9. Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.

    PubMed

    Patel, Santosh; Loveridge, Robert

    2015-12-01

    Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug error.

  10. The importance of matched poloidal spectra to error field correction in DIII-D

    DOE PAGES

    Paz-Soldan, Carlos; Lanctot, Matthew J.; Logan, Nikolas C.; ...

    2014-07-09

    Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable "dominant" mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios the plasma is found to be respectively 7x and 20x less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Furthermore, despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.

  11. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  12. Human Factors Risk Analyses of a Doffing Protocol for Ebola-Level Personal Protective Equipment: Mapping Errors to Contamination.

    PubMed

    Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T

    2018-03-05

    Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
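
    The fault-tree arithmetic behind predicted contamination rates like those above is straightforward when the error events are treated as independent inputs to an OR gate: P(top) = 1 − Π(1 − p_i). The step names and probabilities below are invented for illustration, not values from the study.

        # Top-event probability for an OR gate over independent basic events.
        doffing_error_probs = {
            "PAPR hood contacts scrubs":   0.06,
            "glove touches inner garment": 0.03,
            "skipped hand hygiene":        0.02,
        }

        p_no_contamination = 1.0
        for step, p in doffing_error_probs.items():
            p_no_contamination *= (1.0 - p)

        print(f"predicted contamination rate: {1.0 - p_no_contamination:.1%}")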

  13. The Effect of Image Apodization on Global Mode Parameters and Rotational Inversions

    NASA Astrophysics Data System (ADS)

    Larson, Tim; Schou, Jesper

    2016-10-01

    It has long been known that certain systematic errors in the global mode analysis of data from both MDI and HMI depend on how the input images were apodized. Recently it has come to light, while investigating a six-month periodicity in f-mode frequencies, that mode coverage is highest when B0 is maximal. Recalling that the leakage matrix is calculated in the approximation that B0 = 0, it comes as a surprise that more modes are fitted when the leakage matrix is most incorrect. It is now believed that the six-month oscillation is primarily related to what portion of the solar surface is visible. Other systematic errors that depend on the part of the disk used include high-latitude anomalies in the rotation rate and a prominent feature in the normalized residuals of odd a-coefficients. Although the most likely cause of all these errors is errors in the leakage matrix, extensive recalculation of the leaks has not made any difference. Thus we conjecture that another effect may be at play, such as errors in the noise model or one that has to do with the alignment of the apodization with the spherical harmonics. In this poster we explore how differently shaped apodizations affect the results of inversions for internal rotation, for both maximal and minimal absolute values of B0.

  14. Error field assessment from driven rotation of stable external kinks at EXTRAP-T2R reversed field pinch

    NASA Astrophysics Data System (ADS)

    Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.

    2013-04-01

    A new non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the EXTRAP-T2R reversed field pinch. Stable and marginally stable external kink modes of toroidal mode number n = 10 and n = 8, respectively, were generated, and their rotation sustained, by means of rotating magnetic perturbations of the same n. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the kink modes were observed to rotate non-uniformly and be modulated in amplitude. This behaviour was used to precisely infer the amplitude and approximately estimate the toroidal phase of the EF. A subsequent scan permitted optimization of the toroidal phase. The technique was tested against deliberately applied as well as intrinsic EFs of n = 8 and 10. Corrections equal and opposite to the estimated error fields were applied. The efficacy of the error compensation was indicated by the increased discharge duration and more uniform mode rotation in response to a uniformly rotating perturbation. The results are in good agreement with theory, and the extension to lower n, to tearing modes and to tokamaks, including ITER, is discussed.

  15. Polarization-insensitive PAM-4-carrying free-space orbital angular momentum (OAM) communications.

    PubMed

    Liu, Jun; Wang, Jian

    2016-02-22

    We present a simple configuration incorporating a single polarization-sensitive phase-only liquid crystal spatial light modulator (SLM) to facilitate polarization-insensitive free-space optical communications employing orbital angular momentum (OAM) modes. We experimentally demonstrate several polarization-insensitive optical communication subsystems by propagating a single OAM mode, multicasting 4 and 10 OAM modes, and multiplexing 8 OAM modes, respectively. Free-space polarization-insensitive optical communication links using OAM modes that carry a four-level pulse-amplitude modulation (PAM-4) signal are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties are less than 1 dB in both polarization-insensitive N-fold OAM mode multicasting and multiple OAM mode multiplexing at a bit-error rate (BER) of 2e-3 (the enhanced forward-error correction (EFEC) threshold).

  16. Follow on Researches for X-56A Aircraft at NASA Dryden Flight Research Center (Progress Report)

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2012-01-01

    Modern aircraft make extensive use of composite materials to reduce weight. Aircraft aeroservoelastic models are typically characterized by significant levels of model parameter uncertainty due to the composite manufacturing process. Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the X-56A aircraft is the flight demonstration of active flutter suppression, and therefore in this study the identification of the primary and secondary modes is based on the flutter analysis of the X-56A aircraft. It should be noted that for all three Mach number cases, the rigid body modes and mode numbers seven and nine contribute 89.1-92.4% of the first flutter mode. Modal participation of the rigid body modes and mode numbers seven and nine in the second flutter mode is 94.6-96.4%. The rigid body modes and the first two anti-symmetric modes, the eighth and tenth modes, contribute 93.2-94.6% of the third flutter mode. Therefore, the rigid body modes and the first four flexible modes of the X-56A aircraft are the primary modes during the model tuning procedure. The goal of this study is to obtain a ground vibration test-validated structural dynamics finite element model of the X-56A aircraft. The structural dynamics finite element model of the X-56A aircraft is improved using the parallelized big-bang big-crunch algorithm together with a hybrid optimization technique.

  17. Prevention of medication errors: detection and audit.

    PubMed

    Montesi, Germana; Lechi, Alessandro

    2009-06-01

    1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, together with direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performance of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.

  18. [Failure modes and effects analysis in the prescription, validation and dispensing process].

    PubMed

    Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T

    2012-01-01

    To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated, and failure modes scoring ≥ 8 were chosen; those with a Severity Index of 4 were selected independently of the Hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram that describes the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure and the strategy to achieve it. The failure modes chosen were: prescription on the nurse's form, progress or treatment order (paper); prescription to an incorrect patient; transcription error by nursing staff or pharmacist; and error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur, and the causes. It has allowed us to analyse the effects on the safety of the process, and to establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.
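
    The selection rule described above (Hazard Score = severity × probability; keep scores ≥ 8 plus any failure mode with Severity Index 4) is easy to express in code. A small sketch with invented failure modes and ratings, not the study's actual scores:

        # Hazard Score screen for an FMEA worksheet. All ratings are
        # hypothetical examples on a 1-4 scale.
        failure_modes = [
            {"mode": "prescription on nurse's form", "severity": 3, "probability": 3},
            {"mode": "prescription to incorrect patient", "severity": 4, "probability": 1},
            {"mode": "transcription error", "severity": 2, "probability": 4},
            {"mode": "error preparing the trolley", "severity": 2, "probability": 2},
        ]

        for fm in failure_modes:
            fm["hazard_score"] = fm["severity"] * fm["probability"]

        # Keep Hazard Score >= 8, plus severity-4 modes regardless of score.
        selected = [fm for fm in failure_modes
                    if fm["hazard_score"] >= 8 or fm["severity"] == 4]
        for fm in selected:
            print(f'{fm["mode"]}: Hazard Score {fm["hazard_score"]}')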

  19. Referral Coordination in the Next TRICARE Contract Environment: A Case Study Applying Failure Mode Effects Analysis

    DTIC Science & Technology

    2004-06-13

    antiquity. Plutarch is credited for saying in Morals--Against Colotes the Epicurean, "For to err in opinion, though it be not the part of wise men, it is at... least human" (Plutarch, AD 110). Of the 5 definitions for error given in Merriam-Webster's Collegiate Dictionary, the third one listed "an act that... Identifying and managing inappropriate hospital utilization: A policy synthesis. Health Services Research, 22(5), 710-57. Plutarch. (AD 110). Worldofquotes

  20. A fuzzy logic-based model for noise control at industrial workplaces.

    PubMed

    Aluclu, I; Dalgic, A; Toprak, Z F

    2008-05-01

    Ergonomics is a broad science encompassing the wide variety of working conditions that can affect worker comfort and health, including factors such as lighting, noise, temperature, vibration, workstation design, tool design, machine design, etc. This paper describes the noise-human response and a fuzzy logic model developed through comprehensive field studies on noise measurements (including atmospheric parameters) and control measures. The model has two subsystems constructed on noise reduction quantity in dB. The first subsystem of the fuzzy model, based on 549 linguistic rules, comprises the acoustical features of all materials used in any workplace. In total, 984 patterns were used: 503 patterns for model development and the remaining 481 patterns for testing the model. The second subsystem deals with atmospheric parameter interactions with noise and has 52 linguistic rules. Similarly, 94 field patterns were obtained; 68 patterns were used for the training stage of the model and the remaining 26 patterns for testing it. These rules were determined by taking into consideration formal standards, the experience of specialists, and the measurement patterns. The results of the model were compared with various statistics (correlation coefficients, max-min, standard deviation, average and coefficient of skewness) and error modes (root mean square error and relative error). The correlation coefficients were significantly high, the error modes were quite low, and the other statistics were very close to the data, indicating the validity of the model. The model can therefore be used for noise control in any workplace and is helpful to the designer in the planning stage of a workplace.

  1. 4 × 20 Gbit/s mode division multiplexing over free space using vector modes and a q-plate mode (de)multiplexer

    NASA Astrophysics Data System (ADS)

    Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.

    2015-05-01

    Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer, four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550 nm), comprising an aggregate 80 Gbit/s, were transmitted ~1 m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties < 3.41 dB.

  2. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    NASA Astrophysics Data System (ADS)

    La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.

    2015-02-01

    DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in ohmic plasmas with safety factor q95 ≳ 3 grow to a similarly large, disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption. Error field correction (EFC) is performed on DIII-D (in ITER-relevant shape and safety factor q95 ≳ 3) with either the n = 1 C-coil (no handedness) or the n = 1 I-coil (with ‘dominantly’ resonant field pitch). Despite EFC, which allows significantly lower plasma density (a ‘figure of merit’) before penetration occurs, the resulting saturated islands have similarly large size; they differ only in the phase of the locked mode after typically being pulled (by up to 30° toroidally) in the electron diamagnetic drift direction as they grow to saturation. Island amplification and phase shift are explained by a second change-of-state in which the classical tearing index changes from stable to marginal by the presence of the island, which changes the current density profile. The eventual island size is thus governed by the inherent stability and saturation mechanism rather than the driving error field.

  3. Effects of Heavy Ion Exposure on Nanocrystal Nonvolatile Memory

    NASA Technical Reports Server (NTRS)

    Oldham, Timothy R.; Suhail, Mohammed; Kuhn, Peter; Prinz, Erwin; Kim, Hak; LaBel, Kenneth A.

    2004-01-01

    We have irradiated engineering samples of Freescale 4M nonvolatile memories with heavy ions. They use silicon nanocrystals as the storage element, rather than the more common floating gate. The irradiations were performed using the Texas A&M University cyclotron Single Event Effects Test Facility. The chips were tested in the static mode, and in the dynamic read mode, dynamic write (program) mode, and dynamic erase mode. All the errors observed appeared to be due to single, isolated bits, even in the program and erase modes. These errors appeared to be related to the micro-dose mechanism. All the errors corresponded to the loss of electrons from a programmed cell. The underlying physical mechanisms will be discussed in more detail later. There were no errors that could be attributed to malfunctions of the control circuits. At the highest LET used in the test (85 MeV·cm²/mg), however, there appeared to be a failure due to gate rupture. Failure analysis is being conducted to confirm this conclusion. There was no unambiguous evidence of latchup under any test conditions. Generally, the results on the nanocrystal technology compare favorably with results on currently available commercial floating gate technology, indicating that the technology is promising for future space applications, both civilian and military.

  4. Performance of GPS-devices for environmental exposure assessment.

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Huss, Anke; Vermeulen, Roel

    2013-01-01

    Integration of individual time-location patterns with spatially resolved exposure maps enables a more accurate estimation of personal exposures to environmental pollutants than using estimates at fixed locations. Current global positioning system (GPS) devices can be used to track an individual's location. However, information on GPS-performance in environmental exposure assessment is largely missing. We therefore performed two studies. First, a commute study, in which the commutes of 12 individuals were tracked twice, testing GPS-performance for five transport modes and two wearing modes. Second, an urban-tracking study, in which one individual was tracked repeatedly through different areas, focused on the effect of building obstruction on GPS-performance. The median error from the true path for walking was 3.7 m, biking 2.9 m, train 4.8 m, bus 4.9 m, and car 3.3 m. Errors were larger in a high-rise commercial area (median error=7.1 m) compared with a low-rise residential area (median error=2.2 m). Thus, GPS-performance largely depends on the transport mode and the degree of urban build-up. Although ~85% of all errors were <10 m, almost 1% of the errors were >50 m. Modern GPS-devices are useful tools for environmental exposure assessment, but large GPS-errors might affect estimates of exposures with high spatial variability.

  5. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
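
    The energy-rate split described in this abstract can be illustrated with a schematic calculation. The sketch below uses the common total-energy-control normalization Ė/(mgV) ≈ γ + V̇/g; it illustrates the principle only, not the patented control laws, and all numbers are invented.

        import math

        G = 9.81  # m/s^2

        def energy_rate_errors(gamma_cmd, accel_cmd, gamma, accel):
            """Schematic TECS-style split (illustrative assumptions).

            Normalized specific-energy rate: E_dot/(m*g*V) ~ gamma + V_dot/g.
            The total-energy-rate error drives thrust; the distribution error
            (potential-rate vs. kinetic-rate mismatch) drives the elevator.
            """
            total_error = (gamma_cmd + accel_cmd / G) - (gamma + accel / G)
            dist_error = (gamma_cmd - accel_cmd / G) - (gamma - accel / G)
            return total_error, dist_error

        # Commanded: 2 deg climb at constant speed; actual: level flight, slowing.
        t_err, d_err = energy_rate_errors(math.radians(2.0), 0.0, 0.0, -0.3)
        print(f"thrust-channel error {t_err:+.4f}, elevator-channel error {d_err:+.4f}")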

  6. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    PubMed

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.

  7. Human factors of the high technology cockpit

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1990-01-01

    The rapid advance of cockpit automation in the last decade has outstripped the ability of the human factors profession to understand the changes in human functions required. High technology cockpits require less physical (observable) workload, but are highly demanding of cognitive functions such as planning, alternative selection, and monitoring. Furthermore, automation creates opportunity for new and more serious forms of human error, and many pilots are concerned about the possibility of complacency affecting their performance. On the positive side, the equipment works as advertised with high reliability, offering highly efficient, computer-based flight. These findings from the cockpit studies probably apply equally to other industries, such as nuclear power production, other modes of transportation, medicine, and manufacturing, all of which traditionally have looked to aviation for technological leadership. The challenge to the human factors profession is to aid designers, operators, and training departments in exploiting the positive side of automation, while seeking solutions to the negative side. Viewgraphs are given.

  8. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning

    PubMed Central

    McGregor, Heather R.; Mohatarem, Ayman

    2017-01-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634

  9. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.

    PubMed

    Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L

    2017-07-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
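
    The dissociation between the two loss functions can be reproduced numerically: for a skewed perturbation, an error-based learner that zeroes the mean error and a reinforcement-based learner that maximizes hit rate aim at different points. A sketch with an arbitrary gamma-distributed shift (the distribution and its parameters are invented, not the experiment's):

        import numpy as np

        rng = np.random.default_rng(0)

        # Skewed lateral-shift distribution with separated mean and mode,
        # standing in for the cursor perturbation described above.
        shifts = rng.gamma(shape=2.0, scale=0.5, size=100_000)  # cm, rightward

        mean_shift = shifts.mean()
        # Crude mode estimate from a histogram.
        counts, edges = np.histogram(shifts, bins=200)
        mode_shift = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])

        # Error-based learner cancels the mean error; reinforcement-based
        # learner maximizes hit probability, i.e. compensates the mode.
        print(f"aim point (error-based):         {-mean_shift:.2f} cm")
        print(f"aim point (reinforcement-based): {-mode_shift:.2f} cm")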

  10. Solar Array Disturbances to Spacecraft Pointing During the Lunar Reconnaissance Orbiter (LRO) Mission

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip

    2010-01-01

    The Lunar Reconnaissance Orbiter (LRO), the first spacecraft to support NASA's return to the Moon, launched on June 18, 2009 from the Cape Canaveral Air Force Station aboard an Atlas V launch vehicle. It was initially inserted into a direct trans-lunar trajectory to the Moon. After a five day transit to the Moon, LRO was inserted into lunar orbit and successfully lowered to a low altitude elliptical polar orbit for spacecraft commissioning. Successful commissioning was completed in October 2009 when LRO was placed in its near circular mission orbit with an approximate altitude of 50 km. LRO will spend at least one year orbiting the Moon, collecting lunar environment science and mapping data, utilizing a suite of seven instruments to enable future human exploration. The objective is to provide key science data necessary to facilitate human return to the Moon as well as identification of opportunities for future science missions. LRO's instrument suite will provide high-resolution imaging data with sub-meter accuracy, highly accurate lunar cartographic maps, and mineralogy mapping, amongst other science data of interest. LRO employs a 3-axis stabilized attitude control system (ACS) whose primary control mode, the "Observing Mode", provides lunar nadir, off-nadir, and inertial fine pointing for the science data collection and instrument calibration. This controller combines the capability of fine pointing with on-demand large angle full-sky attitude reorientation. It provides simplicity of spacecraft operation as well as additional flexibility for science data collection. A conventional suite of ACS components is employed in the Observing Mode to meet the pointing and control objectives. Actuation is provided by a set of four reaction wheels (RWs) developed in-house at NASA Goddard Space Flight Center (GSFC). Attitude feedback is provided by a six-state Kalman filter which utilizes two SELEX Galileo Star Trackers for attitude updates, and a single Honeywell Miniature Inertial Measurement Unit (MIMU) to provide body rates for attitude propagation. Rate is computed by differentiating the accumulated angle provided by the MIMU. The Observing Mode controller is required to maintain fine pointing while a large fully-articulated solar array (SA) maintains its panel normal to the solar incidence. This paper describes the disturbances to the attitude control resulting from the SA articulation. Observing Mode performance in the presence of this disturbance was assessed while the spacecraft was in an initial elliptical low altitude orbit during the commissioning phase, which started about two weeks after launch and lasted for 90 days. LRO demonstrated excellent pointing performance during Observing Mode nadir and inertial attitude target operations during this phase. Transient LRO attitude errors observed during commissioning resulted primarily from three sources: Diviner instrument calibrations, RW zero crossings, and SA articulation. Even during times of considerable disturbance from SA articulation, the attitude errors were maintained below the statistical attitude error requirement level of 15 arc-sec (3 sigma).

  11. Estimating alarm thresholds and the number of components in mixture distributions

    NASA Astrophysics Data System (ADS)

    Burr, Tom; Hamada, Michael S.

    2012-09-01

    Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and in solution monitoring (SM) for nuclear safeguards. SM data is increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background for mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals defined as tank-to-tank transfer differences, and wait-mode residuals defined as changes during non-transfer modes. A previous paper investigated impacts on transfer-mode and wait-mode residuals of event marking errors which arise when the estimated start and/or stop times of tank events such as transfers are somewhat different from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
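
    As one illustration of characterizing such residuals, a Gaussian mixture can be fit and the number of components chosen by an information criterion. The sketch below uses scikit-learn on synthetic transfer-mode residuals; the component means, spreads, and sample sizes are invented, not values from any facility.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)

        # Synthetic transfer-mode residuals: most from well-calibrated
        # transfers, a minority shifted by a systematic event-marking error.
        residuals = np.concatenate([
            rng.normal(0.0, 1.0, 900),    # nominal transfers
            rng.normal(4.0, 1.5, 100),    # mis-marked transfers
        ]).reshape(-1, 1)

        # Choose the number of components by BIC, one common approach.
        models = {k: GaussianMixture(n_components=k, random_state=0).fit(residuals)
                  for k in (1, 2, 3)}
        best_k = min(models, key=lambda k: models[k].bic(residuals))
        print("components chosen by BIC:", best_k)
        print("component means:", models[best_k].means_.ravel())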

  12. Orbit determination of highly elliptical Earth orbiters using improved Doppler data-processing modes

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.

    1995-01-01

    A navigation error covariance analysis of four highly elliptical Earth orbits is described, with apogee heights ranging from 20,000 to 76,800 km and perigee heights ranging from 1,000 to 5,000 km. This analysis differs from earlier studies in that improved navigation data-processing modes were used to reduce the radio metric data. For this study, X-band (8.4-GHz) Doppler data were assumed to be acquired from two Deep Space Network radio antennas, and reconstructed orbit errors were propagated over a single day. Doppler measurements were formulated as total-count phase measurements and compared to the traditional formulation of differenced-count frequency measurements. In addition, an enhanced data-filtering strategy was used, which treated the principal ground system calibration errors affecting the data as filter parameters. Results suggest that a 40- to 60-percent accuracy improvement may be achievable over traditional data-processing modes in reconstructed orbit errors, with a substantial reduction in reconstructed velocity errors at perigee. Historically, this has been a regime in which stringent navigation requirements have been difficult to meet by conventional methods.

  13. Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.

    PubMed

    Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne

    2015-05-01

    To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), one observed critical use error in the FN group resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.

  14. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar

    PubMed Central

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-01-01

    Time-varying vocal folds vibration information is of crucial importance in speech processing, and the traditional devices used to acquire speech signals are easily smeared by high background noise and voice interference. In this paper, we present a non-acoustic way to capture the human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration only reaches several millimeters in amplitude, a high operating frequency and 4 × 4 array antennas are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages, and the low relative errors show a high consistency between the radar-detected auditory time-varying vocal folds vibration and the acoustic fundamental frequency, except that the auditory radar significantly improves the frequency-resolving power. PMID:27483261

  15. Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar.

    PubMed

    Hong, Hong; Zhao, Heng; Peng, Zhengyu; Li, Hui; Gu, Chen; Li, Changzhi; Zhu, Xiaohua

    2016-07-28

    Time-varying vocal folds vibration information is of crucial importance in speech processing, and the traditional devices used to acquire speech signals are easily smeared by high background noise and voice interference. In this paper, we present a non-acoustic way to capture the human vocal folds vibration using a 24-GHz portable auditory radar. Since the vocal folds vibration only reaches several millimeters in amplitude, a high operating frequency and 4 × 4 array antennas are applied to achieve high sensitivity. A Variational Mode Decomposition (VMD) based algorithm is proposed to first decompose the radar-detected auditory signal into a sequence of intrinsic modes and then extract the time-varying vocal folds vibration frequency from the corresponding mode. Feasibility demonstration, evaluation, and comparison are conducted with tonal and non-tonal languages, and the low relative errors show a high consistency between the radar-detected auditory time-varying vocal folds vibration and the acoustic fundamental frequency, except that the auditory radar significantly improves the frequency-resolving power.
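
    Without reimplementing VMD, the core idea, tracking a time-varying vibration frequency from the radar displacement signal, can be sketched with short-window FFT peak picking on a synthetic chirp. This is a simplified stand-in for the paper's VMD-based algorithm; the sampling rate and signal are assumed.

        import numpy as np

        rng = np.random.default_rng(7)

        fs = 1000.0                       # Hz, assumed sampling rate
        t = np.arange(0, 2.0, 1.0 / fs)
        f0 = 120.0 + 30.0 * t             # chirping "vocal fold" frequency
        # Phase is the integral of instantaneous frequency.
        signal = np.sin(2 * np.pi * np.cumsum(f0) / fs) \
                 + 0.2 * rng.standard_normal(t.size)

        # Short-window FFT peak picking as a crude frequency tracker.
        win = 256
        for start in range(0, t.size - win, win):
            seg = signal[start:start + win] * np.hanning(win)
            spectrum = np.abs(np.fft.rfft(seg))
            freqs = np.fft.rfftfreq(win, 1.0 / fs)
            print(f"t={start / fs:4.2f} s  f~{freqs[spectrum.argmax()]:6.1f} Hz")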

  16. Locomotion mode identification for lower limbs using neuromuscular and joint kinematic signals.

    PubMed

    Afzal, Taimoor; White, Gannon; Wright, Andrew B; Iqbal, Kamran

    2014-01-01

    Recent development in lower limb prosthetics has seen the emergence of powered prostheses that can operate in different locomotion modes. However, these devices cannot transition seamlessly between modes such as level walking, stair ascent and descent, and upslope and downslope walking. They require some form of user input that defines the human intent. The purpose of this study was to develop a locomotion mode detection system, evaluate its performance for different sensor configurations, and study the effect of locomotion mode detection with and without electromyography (EMG) signals while using kinematic data from the hip joint of the non-dominant/impaired limb and an accelerometer. Data were collected from four able-bodied subjects who completed two circuits that contained standing, level walking, ramp ascent and descent, and stair ascent and descent. Using only the kinematic data from the hip joint and accelerometer data, the system was able to identify the transitions, stance, and swing phases with performance similar to using only EMG and accelerometer data. However, a significant improvement in classification error was observed when EMG, kinematic, and accelerometer data were used together to identify the locomotion modes. The higher recognition rates when using the kinematic data along with EMG show that joint kinematics could be beneficial in intent recognition systems for locomotion modes.
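
    A minimal sketch of the classification stage: windowed features from EMG, hip kinematics, and an accelerometer feed a standard classifier whose cross-validated accuracy is the recognition rate. The features, classifier choice (LDA), and class separations below are synthetic assumptions, not the study's pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)

        # Toy feature matrix standing in for windowed EMG + hip-kinematic +
        # accelerometer features; labels are locomotion modes.
        n_per_mode, n_features = 200, 12
        modes = ["level", "ramp_up", "ramp_down", "stair_up", "stair_down"]
        X = np.vstack([rng.normal(loc=i, scale=1.5, size=(n_per_mode, n_features))
                       for i in range(len(modes))])
        y = np.repeat(modes, n_per_mode)

        clf = LinearDiscriminantAnalysis()
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"mode recognition accuracy: {scores.mean():.2%}")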

  17. Independent oscillatory patterns determine performance fluctuations in children with attention deficit/hyperactivity disorder.

    PubMed

    Yordanova, Juliana; Albrecht, Björn; Uebel, Henrik; Kirov, Roumen; Banaschewski, Tobias; Rothenberger, Aribert; Kolev, Vasil

    2011-06-01

    The maintenance of stable goal-directed behaviour is a hallmark of conscious executive control in humans. Notably, both correct and erroneous human actions may have a subconscious activation-based determination. One possible source of subconscious interference may be the default mode network which, in contrast to the attentional network, manifests intrinsic oscillations at very low (<0.1 Hz) frequencies. In the present study, we analyse the time dynamics of performance accuracy to search for multisecond periodic fluctuations of error occurrence. Attentional lapses in attention deficit/hyperactivity disorder are proposed to originate from interference from intrinsically oscillating networks. Identifying periodic error fluctuations with a frequency < 0.1 Hz in patients with attention deficit/hyperactivity disorder would provide behavioural evidence for such interference. Performance was monitored during a visual flanker task in 92 children (7- to 16-year-olds), 47 with attention deficit/hyperactivity disorder, combined type, and 45 healthy controls. Using an original approach, the time distribution of error occurrence was analysed in the frequency and time-frequency domains in order to detect rhythmic periodicity. Major results demonstrate that in both patients and controls, error behaviour was characterized by multisecond rhythmic fluctuations with a period of ∼12 s, appearing with a delay after transition to task. Only in attention deficit/hyperactivity disorder was there an additional 'pathological' oscillation of error generation, which determined periodic drops of performance accuracy every 20-30 s. Thus, in patients, periodic error fluctuations were modulated by two independent oscillatory patterns. The findings demonstrate that: (i) attentive behaviour of children is determined by multisecond regularities; and (ii) a unique additional periodicity guides performance fluctuations in patients. These observations may re-conceptualize the understanding of attentive behaviour beyond executive top-down control and may reveal new origins of psychopathological behaviours in attention deficit/hyperactivity disorder.
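
    The frequency-domain detection of multisecond error rhythms can be illustrated directly: build a binary error time series with an embedded slow modulation and locate the spectral peak. The ~25 s period and error rates below are invented for illustration, not the study's values.

        import numpy as np

        rng = np.random.default_rng(3)

        # Binary error series at 1 Hz with a slow rhythmic modulation of
        # the error probability (illustrative parameters).
        dt, duration = 1.0, 600.0                # 10-minute task
        t = np.arange(0, duration, dt)
        p_error = 0.05 * (1 + 0.8 * np.sin(2 * np.pi * t / 25.0))
        errors = (rng.random(t.size) < p_error).astype(float)

        # Power spectrum of the demeaned error series; the peak frequency
        # reveals the embedded periodicity.
        spectrum = np.abs(np.fft.rfft(errors - errors.mean())) ** 2
        freqs = np.fft.rfftfreq(t.size, dt)
        peak = freqs[1:][spectrum[1:].argmax()]  # skip the DC bin
        print(f"dominant error periodicity: {1.0 / peak:.1f} s")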

  18. Design Consideration and Performance of Networked Narrowband Waveforms for Tactical Communications

    DTIC Science & Technology

    2010-09-01

    Bit error rate performance was evaluated for the four proposed CPM modes, with perfect acquisition parameters, for both coherent and noncoherent detection using an iterative receiver. [Recoverable figure/table captions: Figure 1, bit error rate performance of various CPM modes with coherent (crosses) and noncoherent (diamonds) detection; Table 2, summary of the mode parameters.]

  19. Synergistic Allocation of Flight Expertise on the Flight Deck (SAFEdeck): A Design Concept to Combat Mode Confusion, Complacency, and Skill Loss in the Flight Deck

    NASA Technical Reports Server (NTRS)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2016-01-01

    This paper presents a new design and function allocation philosophy between pilots and automation that seeks to support the human in mitigating innate weaknesses (e.g., memory, vigilance) while enhancing their strengths (e.g., adaptability, resourcefulness). In this new allocation strategy, called Synergistic Allocation of Flight Expertise in the Flight Deck (SAFEdeck), the automation and the human provide complementary support and backup for each other. Automation is designed to be compliant with the practices of Crew Resource Management. The human takes a more active role in the normal operation of the aircraft without adversely increasing workload over the current automation paradigm. This designed involvement encourages the pilot to be engaged and ready to respond to unexpected situations. As such, the human may be less prone to error than under the current automation paradigm.

  20. Quantitative transmission Raman spectroscopy of pharmaceutical tablets and capsules.

    PubMed

    Johansson, Jonas; Sparén, Anders; Svensson, Olof; Folestad, Staffan; Claybourn, Mike

    2007-11-01

    Quantitative analysis of pharmaceutical formulations using the new approach of transmission Raman spectroscopy has been investigated. For comparison, measurements were also made in conventional backscatter mode. The experimental setup consisted of a Raman probe-based spectrometer with 785 nm excitation for measurements in backscatter mode. In transmission mode the same system was used to detect the Raman scattered light, while an external diode laser of the same type was used as the excitation source. Quantitative partial least squares models were developed for both measurement modes. The results for tablets show that the prediction error for an independent test set was lower for the transmission measurements, with a relative root mean square error of about 2.2% as compared with 2.9% for the backscatter mode. Furthermore, the models were simpler in the transmission case, for which only a single partial least squares (PLS) component was required to explain the variation. The main reason for the improvement using the transmission mode is a more representative sampling of the tablets compared with the backscatter mode. Capsules containing mixtures of pharmaceutical powders were also assessed, by transmission only. The quantitative results for the capsules' contents were good, with a prediction error of 3.6% w/w for an independent test set. The advantage of transmission Raman over backscatter Raman spectroscopy has been demonstrated for quantitative analysis of pharmaceutical formulations, and the prospects for reliable, lean calibrations for pharmaceutical analysis are discussed.
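
    A hedged sketch of the quantification step: fit a partial least squares model to spectra and report the relative prediction error on a held-out set, here with a single latent variable as in the transmission-mode result. The synthetic spectra below (one concentration-weighted band plus noise) stand in for real Raman data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)

        # Synthetic calibration set: spectra = concentration-weighted API
        # band + noise. All parameters are invented for illustration.
        n_samples, n_points = 120, 500
        conc = rng.uniform(80, 120, n_samples)        # % of label claim
        api_band = np.exp(-0.5 * ((np.arange(n_points) - 250) / 15) ** 2)
        X = conc[:, None] * api_band + rng.normal(0, 1.0, (n_samples, n_points))

        X_tr, X_te, y_tr, y_te = train_test_split(X, conc, random_state=0)
        pls = PLSRegression(n_components=1).fit(X_tr, y_tr)  # one PLS component
        y_hat = pls.predict(X_te).ravel()
        rmsep = np.sqrt(np.mean((y_hat - y_te) ** 2))
        print(f"relative RMSEP: {rmsep / y_te.mean():.1%}")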

  1. Implementing a mixed-mode design for collecting administrative records: striking a balance between quality and burden

    EIA Publications

    2012-01-01

    RECS relies on actual records from energy suppliers to produce robust survey estimates of household energy consumption and expenditures. During the RECS Energy Supplier Survey (ESS), energy billing records are collected from the companies that supply electricity, natural gas, fuel oil/kerosene, and propane (LPG) to the interviewed households. As Federal agencies expand the use of administrative records to enhance, replace, or evaluate survey data, EIA has explored more flexible, reliable and efficient techniques to collect energy billing records. The ESS has historically been a mail-administered survey, but EIA introduced web data collection with the 2009 RECS ESS. In that survey, energy suppliers self-selected their reporting mode from several options: standardized paper form, on-line fillable form or spreadsheet, or, failing all else, a nonstandard format of their choosing. In this paper, EIA describes where reporting mode appears to influence data quality. We detail the reporting modes, the embedded and post-hoc quality control and consistency checks that were performed, the extent of detectable errors, and the methods used for correcting data errors. We explore by mode the levels of unit and item nonresponse, the number of errors, and the corrections made to the data. In summary, we find notable differences in data quality between modes and analyze where the benefits of offering these new modes outweigh the "costs".

  2. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  3. Situation assessment in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Mcmanus, John W.; Chappell, Alan R.; Arbuckle, P. Douglas

    1992-01-01

    Paladin is a real-time tactical decision generator for air combat engagements. Paladin uses specialized knowledge-based systems and other Artificial Intelligence (AI) programming techniques to address the modern air combat environment and agile aircraft in a clear and concise manner. Paladin is designed to provide insight into both the tactical benefits and the costs of enhanced agility. The system was developed using the Lisp programming language on a specialized AI workstation. Paladin utilizes a set of air combat rules, an active throttle controller, and a situation assessment module that have been implemented as a set of highly specialized knowledge-based systems. The situation assessment module was developed to determine the tactical mode of operation (aggressive, defensive, neutral, evasive, or disengagement) used by Paladin at each decision point in the air combat engagement. Paladin uses the situation assessment module and the situationally dependent modes of operation to more accurately represent the complex decision-making process of human pilots. This allows Paladin to adapt its tactics to the current situation and improves system performance. Discussed here are the details of Paladin's situation assessment and modes of operation. The results of simulation testing, showing the error introduced into the situation assessment module due to estimation errors in positional and geometric data for the opponent aircraft, are presented. Implementation issues for real-time performance are discussed and several solutions are presented, including Paladin's use of an inference engine designed for real-time execution.

  4. Incidence of patient safety events and process-related human failures during intra-hospital transportation of patients: retrospective exploration from the institutional incident reporting system.

    PubMed

    Yang, Shu-Hui; Jerng, Jih-Shuin; Chen, Li-Chin; Li, Yu-Tsu; Huang, Hsiao-Fang; Wu, Chao-Ling; Chan, Jing-Yuan; Huang, Szu-Fen; Liang, Huey-Wen; Sun, Jui-Sheng

    2017-11-03

    Intra-hospital transportation (IHT) might compromise patient safety because of different care settings and higher demands on human operation. Reports regarding the incidence of IHT-related patient safety events and human failures remain limited. To perform a retrospective analysis of IHT-related events, human failures and unsafe acts. A hospital-wide process for IHT and the database from the incident reporting system of a medical centre in Taiwan. All eligible IHT-related patient safety events from January 2010 to December 2015 were included. Incidence rate of IHT-related patient safety events, human failure modes, and types of unsafe acts. There were 206 patient safety events in 2 009 013 IHT sessions (102.5 per 1 000 000 sessions). Most events (n=148, 71.8%) did not involve patient harm, and process events (n=146, 70.9%) were most common. Events at the location of arrival (n=101, 49.0%) were most frequent; this location accounted for 61.0% and 44.2% of events with patient harm and those without harm, respectively (p<0.001). Of the events with human failures (n=186), the most common related process step was the preparation of the transportation team (n=91, 48.9%). Contributing unsafe acts included perceptual errors (n=14, 7.5%), decision errors (n=56, 30.1%), skill-based errors (n=48, 25.8%), and non-compliance (n=68, 36.6%). Multivariate analysis showed that human failure found in the arrival and hand-off sub-process (OR 4.84, p<0.001) was associated with increased patient harm, whereas the presence of omission (OR 0.12, p<0.001) was associated with less patient harm. This study shows a need to reduce human failures to prevent patient harm during intra-hospital transportation. We suggest that the transportation team pay specific attention to the sub-process at the location of arrival and prevent errors other than omissions. Long-term monitoring of IHT-related events is also warranted. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Method and system for reducing errors in vehicle weighing systems

    DOEpatents

    Hively, Lee M.; Abercrombie, Robert K.

    2010-08-24

    A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) for reading in weighing data for a vehicle (25), and produces a dataset representing the total weight of the vehicle via programming (40-53) executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing, (b) determining the oscillatory mode at which there is a minimum error in the weighing data, (c) processing the weighing data to remove that dynamical oscillation from the weighing data, and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% of the vehicle weight.
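
    Steps (a)-(d) amount to identifying the dominant oscillatory mode in the weighing record, subtracting it, and repeating until the residual error falls below 0.1%. A sketch on synthetic weigh-in-motion data (all signal parameters invented, and a simple FFT-based mode removal standing in for the patented method):

        import numpy as np

        rng = np.random.default_rng(5)

        fs = 100.0                                        # Hz
        t = np.arange(0, 3.0, 1.0 / fs)
        true_weight = 25_000.0                            # kg
        # Static weight + vehicle-motion oscillation + sensor noise.
        data = true_weight + 400 * np.sin(2 * np.pi * 2.0 * t + 0.4) \
                           + 30 * rng.standard_normal(t.size)

        for _ in range(3):
            resid = data - data.mean()
            spec = np.fft.rfft(resid)
            k = np.abs(spec[1:]).argmax() + 1             # dominant non-DC mode
            mode = np.zeros_like(spec)
            mode[k] = spec[k]
            data = data - np.fft.irfft(mode, n=t.size)    # remove that oscillation
            if data.std() / data.mean() < 1e-3:           # <0.1% target reached
                break

        print(f"estimated weight: {data.mean():.0f} kg "
              f"(error {abs(data.mean() - true_weight) / true_weight:.3%})")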

  6. Physics and Control of Locked Modes in the DIII-D Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volpe, Francesco

    This Final Technical Report summarizes an investigation, carried out under the auspices of the DOE Early Career Award, of the physics and control of non-rotating magnetic islands (“locked modes”) in tokamak plasmas. Locked modes are one of the main causes of disruptions in present tokamaks, and could be an even bigger concern in ITER, due to its relatively high beta (favoring the formation of Neoclassical Tearing Mode islands) and low rotation (favoring locking). For these reasons, this research had the goal of studying and learning how to control locked modes in the DIII-D National Fusion Facility under ITER-relevant conditions of high pressure and low rotation. Major results included: the first full suppression of locked modes and avoidance of the associated disruptions; the demonstration of error field detection from the interaction between locked modes, applied rotating fields and intrinsic errors; and the analysis of a vast database of disruptive locked modes, which led to criteria for disruption prediction and avoidance.

  7. Dynamical error bounds for continuum discretisation via Gauss quadrature rules—A Lieb-Robinson bound approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, M. P.

    2016-02-15

    Instances of discrete quantum systems coupled to a continuum of oscillators are ubiquitous in physics. Often the continua are approximated by a discrete set of modes. We derive error bounds on expectation values of system observables that have been time evolved under such discretised Hamiltonians. These bounds take on the form of a function of time and the number of discrete modes, where the discrete modes are chosen according to Gauss quadrature rules. The derivation makes use of tools from the field of Lieb-Robinson bounds and the theory of orthonormal polynomials.
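
    The discretisation itself is straightforward to sketch: Gauss quadrature nodes become the discrete mode frequencies and the weights set the system-mode couplings. The Ohmic spectral density and Gauss-Legendre rule below are illustrative choices for the sketch, not necessarily those analysed in the paper.

        import numpy as np

        def discretise(spectral_density, w_min, w_max, n_modes):
            """Map a continuum J(omega) to n discrete modes via quadrature."""
            x, w = np.polynomial.legendre.leggauss(n_modes)  # nodes on [-1, 1]
            omega = 0.5 * (w_max - w_min) * x + 0.5 * (w_max + w_min)
            weights = 0.5 * (w_max - w_min) * w
            # Couplings chosen so the discrete sum reproduces the integral.
            couplings = np.sqrt(weights * spectral_density(omega))
            return omega, couplings

        ohmic = lambda w: w * np.exp(-w)   # illustrative J(omega)
        freqs, g = discretise(ohmic, 0.0, 10.0, 8)
        print(np.c_[freqs, g])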

  8. Feedback stabilization system for pulsed single longitudinal mode tunable lasers

    DOEpatents

    Esherick, Peter; Raymond, Thomas D.

    1991-10-01

    A feedback stabilization system for pulsed single longitudinal mode tunable lasers having an excited laser medium contained within an adjustable length cavity and producing a laser beam through the use of an internal dispersive element, including detection of angular deviation in the output laser beam resulting from detuning between the cavity mode frequency and the passband of the internal dispersive element, and generating an error signal based thereon. The error signal can be integrated and amplified and then applied as a correcting signal to a piezoelectric transducer mounted on a mirror of the laser cavity for controlling the cavity length.
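
    A toy version of the loop described in the abstract: the angular-deviation error signal is integrated, amplified, and applied to the piezo, pulling the cavity back onto the dispersive element's passband. The gain and the one-parameter plant model below are invented for illustration only.

        # Integrate-and-apply feedback sketch (hypothetical gains/plant).
        def run_loop(steps=200, k_i=0.15, detuning=1.0):
            pzt, integ = 0.0, 0.0
            for _ in range(steps):
                error = detuning - pzt    # angular-deviation error signal
                integ += error            # integrate the error
                pzt = k_i * integ         # amplified signal drives the PZT
            return pzt, detuning - pzt

        pzt, resid = run_loop()
        print(f"PZT correction {pzt:.3f}, residual error {resid:.1e}")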

  9. Effects of dynamic aeroelasticity on handling qualities and pilot rating

    NASA Technical Reports Server (NTRS)

    Swaim, R. L.; Yen, W.-Y.

    1978-01-01

    Pilot performance parameters, such as pilot ratings, tracking errors, and pilot comments, were recorded and analyzed for a longitudinal pitch tracking task on a large, flexible aircraft. The tracking task was programmed on a fixed-base simulator with a CRT attitude director display of pitch angle command, pitch angle, and pitch angle error. Parametric variations in the undamped natural frequencies of the two lowest frequency symmetric elastic modes were made to induce varying degrees of rigid body and elastic mode interaction. The results indicate that such mode interaction can drastically affect the handling qualities and pilot ratings of the task.

  10. Channel estimation in few mode fiber mode division multiplexing transmission system

    NASA Astrophysics Data System (ADS)

    Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming

    2018-03-01

    It is abundantly clear that obtaining the channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, in most of the existing literature CSI is assumed to be perfectly known at the receiver; so far, few papers have discussed the effects on MDM system performance of imperfect CSI caused by channel estimation. Motivated by this, in this paper the channel estimation in few-mode fiber (FMF) mode division multiplexing (MDM) systems is investigated, in which two classical channel estimation methods, i.e., the least squares (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and the BER performance of an MDM system affected by mode-dependent gain or loss (MDL) have been studied for different channel estimation errors. Simulation results show that the capacity and BER performance of the MDM system are further deteriorated by channel estimation, and that a channel estimation error variance of 1e-3 is acceptable in an MDM system with 0-6 dB MDL values.
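
    The two estimators compared in the paper have standard closed forms for a pilot-trained linear model Y = HP + N: LS uses the pilot pseudo-inverse, while MMSE regularises it with the noise level. The sketch below applies both to a random 4-mode channel; the dimensions, SNR, and unit-variance channel prior are assumptions made for the example.

        import numpy as np

        rng = np.random.default_rng(6)

        n_modes, n_pilots, snr_db = 4, 16, 15
        H = (rng.standard_normal((n_modes, n_modes))
             + 1j * rng.standard_normal((n_modes, n_modes))) / np.sqrt(2)
        P = (rng.standard_normal((n_modes, n_pilots))
             + 1j * rng.standard_normal((n_modes, n_pilots))) / np.sqrt(2)
        sigma2 = 10 ** (-snr_db / 10)
        N = np.sqrt(sigma2 / 2) * (rng.standard_normal((n_modes, n_pilots))
                                   + 1j * rng.standard_normal((n_modes, n_pilots)))
        Y = H @ P + N   # received pilots

        # LS: pilot pseudo-inverse. MMSE: regularised by the noise level
        # (assuming unit-variance i.i.d. channel entries).
        H_ls = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)
        H_mmse = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T
                                                + sigma2 * np.eye(n_modes))

        for name, H_hat in (("LS", H_ls), ("MMSE", H_mmse)):
            err = np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2
            print(f"{name}: normalised estimation error = {err:.4f}")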

  11. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  12. Error field penetration and locking to the backward propagating wave

    DOE PAGES

    Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.

    2015-12-30

    In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.

  13. A nucleotide-analogue-induced gain of function corrects the error-prone nature of human DNA polymerase iota.

    PubMed

    Ketkar, Amit; Zafar, Maroof K; Banerjee, Surajit; Marquez, Victor E; Egli, Martin; Eoff, Robert L

    2012-06-27

    Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2'-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2'-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle, which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base-stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase.

  14. A nucleotide analogue induced gain of function corrects the error-prone nature of human DNA polymerase iota

    PubMed Central

    Ketkar, Amit; Zafar, Maroof K.; Banerjee, Surajit; Marquez, Victor E.; Egli, Martin; Eoff, Robert L

    2012-01-01

    Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2′-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2′-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle (χ), which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase. PMID:22632140

  15. An experimental system for the study of active vibration control - Development and modeling

    NASA Astrophysics Data System (ADS)

    Batta, George R.; Chen, Anning

    A modular rotational vibration system designed to facilitate the study of active control of vibrating systems is discussed. The model error associated with four common types of identification problems has been studied. The general multiplicative uncertainty shape for a vibration system is small at low frequencies and large at high frequencies. The frequency-domain error function has sharp peaks near the frequency of each mode. The inability to identify a high-frequency mode causes an increase of uncertainty at all frequencies; missing a low-frequency mode causes much larger uncertainties at all frequencies than missing a high-frequency mode. Hysteresis causes a small increase of uncertainty at low frequencies, but its overall effect is relatively small.

  16. A strategy for reducing gross errors in the generalized Born models of implicit solvation

    PubMed Central

    Onufriev, Alexey V.; Sigalov, Grigori

    2011-01-01

    The “canonical” generalized Born (GB) formula [W. C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2kBT relative to the numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of the two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium-size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
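
    For reference, the canonical GB expression at issue in this record (from the Still et al. paper cited above) is usually written as follows; the proposed alternative Green function replaces f_GB with a form that also depends on the gradients of the effective Born radii:

```latex
\Delta G_{el} \approx -\frac{1}{2}\left(\frac{1}{\epsilon_{in}}-\frac{1}{\epsilon_{out}}\right)
\sum_{i,j}\frac{q_i q_j}{f^{GB}_{ij}},
\qquad
f^{GB}_{ij}=\sqrt{r_{ij}^{2}+R_i R_j\exp\!\left(-\frac{r_{ij}^{2}}{4R_iR_j}\right)}
```

    Here the q_i are atomic charges, r_ij interatomic distances, R_i the effective Born radii, and ε_in, ε_out the solute and solvent dielectric constants.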

  17. Experimental Analysis of Dampened Breathing Mode Oscillation on Hall Thruster Performance

    DTIC Science & Technology

    2013-03-01

    [Only table-of-contents and heading fragments of this DTIC record survive. Recoverable content: a section analyzing the effect of discharge RMS on breathing mode amplitude, which notes that the large error in the data prevents many conclusions from being drawn.]

  18. Measuring a Fiber-Optic Delay Line Using a Mode-Locked Laser

    NASA Technical Reports Server (NTRS)

    Tu, Meirong; McKee, Michael R.; Pak, Kyung S.; Yu, Nan

    2010-01-01

    The figure schematically depicts a laboratory setup for determining the optical length of a fiber-optic delay line at a precision greater than that obtainable by use of optical time-domain reflectometry or of mechanical measurement of length during the delay-line-winding process. In this setup, the delay line becomes part of the resonant optical cavity that governs the frequency of oscillation of a mode-locked laser. The length can then be determined from frequency-domain measurements, as described below. The laboratory setup is basically an all-fiber ring laser in which the delay line constitutes part of the ring. Another part of the ring - the laser gain medium - is an erbium-doped fiber amplifier pumped by a diode laser at a wavelength of 980 nm. The loop also includes an optical isolator, two polarization controllers, and a polarizing beam splitter. The optical isolator enforces unidirectional lasing. The polarization beam splitter allows light in only one polarization mode to pass through the ring; light in the orthogonal polarization mode is rejected from the ring and utilized as a diagnostic output, which is fed to an optical spectrum analyzer and a photodetector. The photodetector output is fed to a radio-frequency spectrum analyzer and an oscilloscope. The fiber ring laser can generate continuous-wave radiation in non-mode-locked operation or ultrashort optical pulses in mode-locked operation. The mode-locked operation exhibited by this ring is said to be passive in the sense that no electro-optical modulator or other active optical component is used to achieve it. Passive mode locking is achieved by exploiting optical nonlinearity of passive components in such a manner as to obtain ultrashort optical pulses. In this setup, the particular nonlinear optical property exploited to achieve passive mode locking is nonlinear polarization rotation. This or any ring laser can support oscillation in multiple modes as long as sufficient gain is present to overcome losses in the ring. When mode locking is achieved, oscillation occurs in all the modes having the same phase and same polarization. The frequency interval between modes, often denoted the free spectral range (FSR), is given by c/nL, where c is the speed of light in vacuum, n is the effective index of refraction of the fiber, and L is the total length of the optical path around the ring. Therefore, the length of the fiber-optic delay line, as part of the length around the ring, can be calculated from the FSRs measured with and without the delay line incorporated into the ring. For this purpose, the FSR measurements are made by use of the optical and radio-frequency spectrum analyzers. In experimentation on a 10-km-long fiber-optic delay line, it was found that this setup made it possible to measure the length to within a fractional error of about 3×10^-6, corresponding to a length error of 3 cm. In contrast, measurements by optical time-domain reflectometry and mechanical measurement were found to be much less precise: for optical time-domain reflectometry, the fractional error was found to be no less than 10^-4 (corresponding to a length error of 1 m), and for mechanical measurement, the fractional error was found to be about 10^-2 (corresponding to a length error of 100 m).
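
    The length recovery described above follows directly from FSR = c/(nL). A minimal sketch with invented numbers (the index n and both FSR values are assumptions for illustration):

```python
# Hedged sketch: recover the delay-line length from the free spectral ranges
# measured with and without the line in the ring, using FSR = c / (n L).
# The index n and both FSR values below are invented for illustration.
c = 2.99792458e8            # speed of light in vacuum, m/s
n = 1.468                   # assumed effective refractive index of the fiber

fsr_ring_only = 9.99e6      # Hz, hypothetical FSR of the bare ring (~20 m)
fsr_with_line = 20.4e3      # Hz, hypothetical FSR with the ~10 km line inserted

L_ring = c / (n * fsr_ring_only)
L_total = c / (n * fsr_with_line)
print(f"delay-line length: {L_total - L_ring:.2f} m")
```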

  19. An a priori solar radiation pressure model for the QZSS Michibiki satellite

    NASA Astrophysics Data System (ADS)

    Zhao, Qile; Chen, Guo; Guo, Jing; Liu, Jingnan; Liu, Xianglin

    2018-02-01

    It has been noted that the satellite laser ranging (SLR) residuals of the Quasi-Zenith Satellite System (QZSS) Michibiki satellite orbits show very marked dependence on the elevation angle of the Sun above the orbital plane (i.e., the β angle). It is well recognized that this systematic error is caused by mismodeling of the solar radiation pressure (SRP). Although the error can be reduced by the updated ECOM SRP model, the orbit error is still very large when the satellite switches to orbit-normal (ON) orientation. In this study, an a priori SRP model was established for the QZSS Michibiki satellite to enhance the ECOM model. This model is expressed in ECOM's D, Y, and B axes (DYB) using seven parameters for the yaw-steering (YS) mode, and three additional parameters are used to compensate for the remaining modeling deficiencies, particularly the perturbations in the Y axis, based on a redefined DYB for the ON mode. With the proposed a priori model, QZSS Michibiki's precise orbits over 21 months were determined. SLR validation indicated that the systematic β-angle-dependent error was reduced when the satellite was in the YS mode, and a root mean square (RMS) better than 8 cm was achieved. More importantly, the orbit quality was also improved significantly when the satellite was in the ON mode. Relative to the ECOM and adjustable box-wing models, the proposed SRP model showed the best performance in the ON mode: the RMS of the SLR residuals was better than 15 cm, a twofold improvement over ECOM without the a priori model, though still two times worse than in the YS mode.

  20. Carrier recovery methods for a dual-mode modem: A design approach

    NASA Technical Reports Server (NTRS)

    Richards, C. W.; Wilson, S. G.

    1984-01-01

    A dual-mode modem with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of the modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that suffer little degradation due to phase error for both QPSK and 16-QASK while being insensitive to the amplitude characteristic of the 16-QASK modulation scheme. A computer analysis of the degradation in symbol error rate (SER) for QPSK and 16-QASK due to phase error is presented. Results show that an energy increase of roughly 4 dB is needed to maintain a SER of 1×10^-5 for QPSK with 20 deg of phase error and for 16-QASK with 7 deg of phase error.

  1. Finite-time control for nonlinear spacecraft attitude based on terminal sliding mode technique.

    PubMed

    Song, Zhankui; Li, Hongxing; Sun, Kaibiao

    2014-01-01

    In this paper, a fast terminal sliding mode control (FTSMC) scheme with double closed loops is proposed for spacecraft attitude control. The FTSMC laws are included in both an inner control loop and an outer control loop. Firstly, a fast terminal sliding surface (FTSS) is constructed, which drives the inner-loop and outer-loop tracking errors on the FTSS to converge to zero in finite time. Secondly, the FTSMC strategy is designed using Lyapunov's method to ensure the occurrence of the sliding motion in finite time, which preserves a fast transient response and improves the tracking accuracy. It is proved that FTSMC guarantees the convergence of the tracking error both in the approaching phase and on the sliding surface. Finally, simulation results demonstrate the effectiveness of the proposed control scheme. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
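
    The abstract does not reproduce the sliding surface itself. A representative fast terminal sliding surface has the generic form below (notation mine, not necessarily the paper's construction):

```latex
s = \dot{e} + \alpha e + \beta e^{q/p},
\qquad \alpha,\beta > 0,\quad p > q > 0 \ \text{(odd integers)}
```

    The linear term dominates far from the origin (fast transient response) while the fractional-power term dominates near it, which is what yields finite-time rather than merely asymptotic convergence of the tracking error.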

  2. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    NASA Astrophysics Data System (ADS)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

    In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial-and-error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand, and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.

  3. Analysis of mutational spectra by denaturant capillary electrophoresis

    PubMed Central

    Ekstrøm, Per O.; Khrapko, Konstantin; Li-Sucholeiki, Xiao-Cheng; Hunter, Ian W.; Thilly, William G.

    2009-01-01

    The numbers and kinds of point mutants within DNA from cells, tissues, and human populations may be discovered for nearly any 75–250 bp DNA sequence. High-fidelity DNA amplification incorporating a thermally stable DNA “clamp” is followed by separation using denaturing capillary electrophoresis (DCE). DCE allows for peak collection and verification sequencing. DCE in a cycling-temperature mode (e.g., ±5°C; CyDCE) permits high resolution of mutant sequences using computer-defined analytes without preliminary optimization experiments. DNA sequencers have been modified to permit higher-throughput CyDCE, and a massively parallel, ~25,000-capillary system has been designed for pangenomic scans in large human populations. DCE has been used to define quantitative point mutational spectra to study a wide variety of genetic phenomena: errors of DNA polymerases, mutations induced in human cells by chemicals and irradiation, testing of human gene-common disease associations, and the discovery of the origins of point mutations in human development and carcinogenesis. PMID:18600220

  4. New class of photonic quantum error correction codes

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

    We present a new class of quantum error correction codes for applications in quantum memories, communication, and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in the time step for continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic "cat codes" but have several advantages, including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.

  5. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes an adaptive estimation and correction approach for disturbance acceleration in an attitude reference system (ARS), intended to improve attitude estimation precision under vehicle movement conditions. The proposed approach depends on a Kalman filter in which the attitude error, the gyroscope zero-offset error, and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model between different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested in three disturbance acceleration modes (non-acceleration, vibration-acceleration, and sustained-acceleration) by digital simulation, and the proposed approach was also tested in a kinematic vehicle experiment. The simulations and kinematic vehicle experiments show that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with a complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions. PMID:27754469

  6. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction.

    PubMed

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-10-16

    This paper describes an adaptive estimation and correction approach for disturbance acceleration in an attitude reference system (ARS), intended to improve attitude estimation precision under vehicle movement conditions. The proposed approach depends on a Kalman filter in which the attitude error, the gyroscope zero-offset error, and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model between different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested in three disturbance acceleration modes (non-acceleration, vibration-acceleration, and sustained-acceleration) by digital simulation, and the proposed approach was also tested in a kinematic vehicle experiment. The simulations and kinematic vehicle experiments show that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with a complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions.
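
    The mode-dependent switching described in both versions of this record can be sketched as a first-order Gauss-Markov disturbance-acceleration state whose decay coefficient changes with the sensed acceleration. Everything below (thresholds, time constants, function names) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

G = 9.80665  # standard gravity, m/s^2

def decay_coefficient(accel_meas, dt, tau_quiet=0.1, tau_active=5.0, thresh=0.3):
    """Pick the first-order Gauss-Markov decay for the disturbance-acceleration
    states based on a crude mode test. Thresholds and time constants are
    illustrative assumptions, not values from the paper."""
    disturbance = abs(np.linalg.norm(accel_meas) - G)
    tau = tau_quiet if disturbance < thresh else tau_active
    return np.exp(-dt / tau)

# In the Kalman prediction step the disturbance-acceleration states would then
# propagate as a_d <- phi * a_d, with process noise sized to match phi.
phi = decay_coefficient(np.array([0.1, 0.2, 9.9]), dt=0.01)
print(f"decay coefficient: {phi:.4f}")
```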

  7. [Improvement of team competence in the operating room : Training programs from aviation].

    PubMed

    Schmidt, C E; Hardt, F; Möller, J; Malchow, B; Schmidt, K; Bauer, M

    2010-08-01

    Growing attention has been drawn to patient safety during recent months due to media reports of clinical errors. To date only clinical incident reporting systems have been implemented in acute care hospitals as instruments of risk management. However, these systems only have a limited impact on human factors which account for the majority of all errors in medicine. Crew resource management (CRM) starts here. For the commissioning of a new hospital in Minden, training programs were installed in order to maintain patient safety in a new complex environment. The training was planned in three parts: All relevant processes were defined as standard operating procedures (SOP), visualized and then simulated in the new building. In addition, staff members (trainers) in leading positions were trained in CRM in order to train the complete staff. The training programs were analyzed by questionnaires. Selection of topics, relevance for practice and mode of presentation were rated as very good by 73% of the participants. The staff members ranked the topics communication in crisis situations, individual errors and compensating measures as most important followed by case studies and teamwork. Employees improved in compliance to the SOP, team competence and communication. In high technology environments with escalating workloads and interdisciplinary organization, staff members are confronted with increasing demands in knowledge and skills. To reduce errors under such working conditions relevant processes should be standardized and trained for the emergency situation. Human performance can be supported by well-trained interpersonal skills which are evolved in CRM training. In combination these training programs make a significant contribution to maintaining patient safety.

  8. Error Field Assessment from Driven Mode Rotation: Results from Extrap-T2R Reversed-Field-Pinch and Perspectives for ITER

    NASA Astrophysics Data System (ADS)

    Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.

    2012-10-01

    A new ITER-relevant non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the Extrap-T2R reversed field pinch. Resistive Wall Modes (RWMs) were generated and their rotation sustained by rotating magnetic perturbations. In particular, stable modes of toroidal mode number n=8 and 10 and unstable modes of n=1 were used in this experiment. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the RWMs were observed to rotate non-uniformly and be modulated in amplitude (in the case of unstable modes, the observed oscillation was superimposed to the mode growth). This behavior was used to infer the amplitude and toroidal phase of n=1, 8 and 10 EFs. The method was first tested against known, deliberately applied EFs, and then against actual intrinsic EFs. Applying equal and opposite corrections resulted in longer discharges and more uniform mode rotation, indicating good EF compensation. The results agree with a simple theoretical model. Extensions to tearing modes, to the non-uniform plasma response to rotating perturbations, and to tokamaks, including ITER, will be discussed.

  9. Determining the refractive index and thickness of thin films from prism coupler measurements

    NASA Technical Reports Server (NTRS)

    Kirsch, S. T.

    1981-01-01

    A simple method of determining thin-film parameters from mode indices measured using a prism coupler is described. The problem is reduced to two least-squares straight-line fits through the measured mode indices vs. effective mode number. The slope and y-intercept of the line are simply related to the thickness and refractive index of the film, respectively. The approach takes into account the correlation between, as well as the uncertainty in, the individual measurements from all sources of error, to give precise error tolerances on the best-fit values. Due to the precision of the tolerances, anisotropic films can be identified and characterized.
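
    The core operation is a straight-line fit with honest parameter uncertainties. A minimal sketch follows; the numerical values are invented, and the transformation that makes the mode-index data linear in mode number depends on the waveguide dispersion relation, which is not reproduced here:

```python
import numpy as np

# Hedged sketch: a straight-line least-squares fit with parameter covariance,
# the basic operation behind the two fits described above. The y-values are
# invented placeholders, not measured prism-coupler data.
m = np.arange(5)                                        # effective mode number
y = np.array([2.1801, 2.1632, 2.1350, 2.0955, 2.0448])  # transformed mode data

coeffs, cov = np.polyfit(m, y, 1, cov=True)
slope, intercept = coeffs
slope_err, intercept_err = np.sqrt(np.diag(cov))
print(f"slope     = {slope:.5f} +/- {slope_err:.5f}")
print(f"intercept = {intercept:.5f} +/- {intercept_err:.5f}")
```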

  10. Perceptually tuned low-bit-rate video codec for ATM networks

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien

    1996-02-01

    In order to maintain high visual quality in transmitting low-bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only the coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells, which may be lost due to channel error or network congestion. Simulation results show that the visual quality of the reconstructed CIF sequence is acceptable when the base-layer channel is allocated 2 × 64 kbps and the cells of the second layer are all lost.

  11. The VLT Interferometer and its AMBER Instrument: Simulations of Interferometric Imaging in the Wide-Field Mode

    NASA Astrophysics Data System (ADS)

    Blöcker, T.; Hofmann, K.-H.; Przygodda, F.; Weigelt, G.

    We present computer simulations of interferometric imaging with the VLT interferometer and the AMBER instrument. These simulations include the astrophysical modelling of a stellar object by radiative transfer calculations; the simulation of light propagation from the object to the detector (through atmosphere, telescopes, and the AMBER instrument); the simulation of photon noise and detector read-out noise; and, finally, the data processing of the interferograms. The results show the dependence of the visibility error bars on the following observational parameters: different seeing during the observation of object and reference star (Fried parameters r0,object and r0,ref. ranging between 0.9 m and 1.2 m), different residual tip-tilt errors (δtt,object and δtt,ref. ranging between 0.1% and 20% of the Airy disk diameter), and object brightness (Kobject = 3.5 mag to 13 mag, Kref. = 3.5 mag). As an example, we focus on stars in late stages of stellar evolution and study one of the key objects of this class, the dusty supergiant IRC +10 420, which is rapidly evolving on human timescales. We show computer simulations of VLT interferometry of IRC +10 420 with two ATs (wide-field mode, i.e. without fiber-optic spatial filters) and discuss whether the visibility accuracy is sufficient to distinguish between different theoretical model predictions.

  12. Reducing non-collision injuries in special transportation services by enhanced safety culture.

    PubMed

    Wretstrand, Anders; Petzäll, Jan; Bylund, Per-Olof; Falkmer, Torbjörn

    2010-04-01

    Previous research has pointed out that non-collision injuries occur among wheelchair users in Special Transportation Services (STS - a demand-responsive transport mode). The organization of such modes is also quite complex, involving both stakeholders and key personnel at different levels. Our objective was therefore to qualitatively explore the state of safety, as perceived and discussed within a workplace context. Focus groups were held with drivers of both taxi companies and bus companies. The results indicated that passengers run the risk of being injured without being involved in a vehicle collision. The pertinent organizational and corporate culture did not prioritize safety. The drivers identified some relatively clear-cut safety threats, primarily before and after a ride, at vehicle standstill. The driver's work place seemed to be surrounded with a reactive instead of proactive structure. We conclude that not only vehicle and wheelchair technical safety must be considered in STS, but also system safety. Instead of viewing drivers' error as a cause, it should be seen as a symptom of systems failure. Human error is connected to aspects of tools, tasks, and operating environment. Enhanced understanding and influence of these connections within STS and accessible public transport systems will promote safety for wheelchair users. Copyright 2009 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. Performance of a Space-Based Wavelet Compressor for Plasma Count Data on the MMS Fast Plasma Investigation

    NASA Technical Reports Server (NTRS)

    Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.

    2017-01-01

    Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.
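
    The DWT stage of such a compressor can be illustrated with PyWavelets; the 9/7 biorthogonal wavelet corresponds to 'bior4.4' in that library. The thresholding step below merely stands in for the bit-plane encoder (BPE), and the data are synthetic, so this is a sketch of where compression loss enters rather than the flight algorithm:

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative stand-in for a plasma-count sky map (synthetic, not MMS/FPI data).
rng = np.random.default_rng(1)
skymap = rng.poisson(lam=5.0, size=(32, 64)).astype(float)

# 2-D DWT, three levels; 'bior4.4' is PyWavelets' 9/7 biorthogonal wavelet.
coeffs = pywt.wavedec2(skymap, 'bior4.4', level=3)

# Crude lossy step: zero small detail coefficients. A real BPE encodes bit
# planes instead; this only illustrates where compression loss enters.
thresh = 1.0
coeffs = [coeffs[0]] + [
    tuple(np.where(np.abs(d) < thresh, 0.0, d) for d in level)
    for level in coeffs[1:]
]

recon = pywt.waverec2(coeffs, 'bior4.4')[:32, :64]
print(f"max abs reconstruction error: {np.max(np.abs(recon - skymap)):.3f}")
```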

  14. Multisite Parent-Centered Risk Assessment to Reduce Pediatric Oral Chemotherapy Errors

    PubMed Central

    Walsh, Kathleen E.; Mazor, Kathleen M.; Roblin, Douglas; Biggins, Colleen; Wagner, Joann L.; Houlahan, Kathleen; Li, Justin W.; Keuker, Christopher; Wasilewski-Masker, Karen; Donovan, Jennifer; Kanaan, Abir; Weingart, Saul N.

    2013-01-01

    Purpose: Observational studies describe high rates of errors in home oral chemotherapy use in children. In hospitals, proactive risk assessment methods help front-line health care workers develop error prevention strategies. Our objective was to engage parents of children with cancer in a multisite study using proactive risk assessment methods to identify how errors occur at home and propose risk reduction strategies. Methods: We recruited parents from three outpatient pediatric oncology clinics in the northeast and southeast United States to participate in failure mode and effects analyses (FMEA). An FMEA is a systematic, team-based, proactive risk assessment approach for understanding the ways a process can fail and developing prevention strategies. Steps included diagramming the process, brainstorming and prioritizing failure modes (places where things go wrong), and proposing risk reduction strategies. We focused on home oral chemotherapy administration after a change in dose, because prior studies identified this area as high risk. Results: Parent teams consisted of four parents at two of the sites and 10 at the third. Parents developed a 13-step process map, with two to 19 failure modes per step. The highest-priority failure modes included miscommunication when receiving instructions from the clinician (caused by conflicting instructions or parent lapses) and unsafe chemotherapy handling at home. Recommended risk reduction strategies included novel uses of technology to improve parent access to information, clinicians, and other parents while at home. Conclusion: Parents of pediatric oncology patients readily participated in a proactive risk assessment method, identifying processes that pose a risk for medication errors involving home oral chemotherapy. PMID:23633976

  15. Stitching-error reduction in gratings by shot-shifted electron-beam lithography

    NASA Technical Reports Server (NTRS)

    Dougherty, D. J.; Muller, R. E.; Maker, P. D.; Forouhar, S.

    2001-01-01

    Calculations of the grating spatial-frequency spectrum and the filtering properties of multiple-pass electron-beam writing demonstrate a tradeoff between stitching-error suppression and minimum pitch separation. High-resolution measurements of optical-diffraction patterns show a 25-dB reduction in stitching-error side modes.

  16. Generation of Higher Order Modes in a Rectangular Duct

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Cabell, Randolph H.; Brown, Donald E.

    2004-01-01

    Advanced noise control methodologies to reduce sound emission from aircraft engines take advantage of the modal structure of the noise in the duct. This noise is caused by the interaction of rotor wakes with downstream obstructions such as exit guide vanes. Mode synthesis has been accomplished in circular ducts and current active noise control work has made use of this capability to cancel fan noise. The goal of the current effort is to examine the fundamental process of higher order mode propagation through an acoustically treated, curved duct. The duct cross-section is rectangular to permit greater flexibility in representation of a range of duct curvatures. The work presented is the development of a feedforward control system to generate a user-specified modal pattern in the duct. The multiple-error, filtered-x LMS algorithm is used to determine the magnitude and phase of signal input to the loudspeakers to produce a desired modal pattern at a set of error microphones. Implementation issues, including loudspeaker placement and error microphone placement, are discussed. Preliminary results from a 9-3/8 inch by 21 inch duct, using 12 loudspeakers and 24 microphones, are presented. These results demonstrate the ability of the control system to generate a user-specified mode while suppressing undesired modes.

  17. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that: humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors; and adaptation to conditions is a potent influence on human behavior in discretionary situations.

  18. Probabilistic Causal Analysis for System Safety Risk Assessments in Commercial Air Transport

    NASA Technical Reports Server (NTRS)

    Luxhoj, James T.

    2003-01-01

    Aviation is one of the critical modes of our national transportation system. As such, it is essential that new technologies be continually developed to ensure that a safe mode of transportation becomes even safer in the future. The NASA Aviation Safety Program (AvSP) is managing the development of new technologies and interventions aimed at reducing the fatal aviation accident rate by a factor of 5 by year 2007 and by a factor of 10 by year 2022. A portfolio assessment is currently being conducted to determine the projected impact that the new technologies and/or interventions may have on reducing aviation safety system risk. This paper reports on advanced risk analytics that combine the use of a human error taxonomy, probabilistic Bayesian Belief Networks, and case-based scenarios to assess a relative risk intensity metric. A sample case is used for illustrative purposes.

  19. Impedance Control of the Rehabilitation Robot Based on Sliding Mode Control

    NASA Astrophysics Data System (ADS)

    Zhou, Jiawang; Zhou, Zude; Ai, Qingsong

    As an auxiliary treatment, the 6-DOF parallel robot plays an important role in lower limb rehabilitation. In order to improve the efficiency and flexibility of lower limb rehabilitation training, this paper studies an impedance controller based on position control. A nonsingular terminal sliding mode control is developed to ensure trajectory tracking precision, and in contrast to a traditional PID control strategy in the inner position loop, the system is more stable. The stability of the system is proved by a Lyapunov function, guaranteeing the convergence of the control errors. Simulation results validate the effectiveness of the target impedance model and show that the parallel robot can adjust the gait trajectory online according to the human-machine interaction force to meet the gait requirements of patients, and that changing the impedance parameters can meet the demands of different stages of rehabilitation training.
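
    A target impedance model of the standard form (generic notation, not necessarily the paper's) relates the measured human-machine interaction force to the online adjustment of the gait trajectory:

```latex
M_d(\ddot{x}-\ddot{x}_d) + B_d(\dot{x}-\dot{x}_d) + K_d(x-x_d) = F_{ext}
```

    Here x_d is the reference gait trajectory, F_ext the interaction force, and the desired inertia M_d, damping B_d, and stiffness K_d are the impedance parameters that, per the abstract, can be changed to suit different stages of rehabilitation training.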

  20. An Organizational Learning Framework for Patient Safety.

    PubMed

    Edwards, Marc T

    Despite concerted effort to improve quality and safety, high reliability remains a distant goal. Although this likely reflects the challenge of organizational change, persistent controversy over basic issues suggests that weaknesses in conceptual models may contribute. The essence of operational improvement is organizational learning. This article presents a framework for identifying leverage points for improvement based on organizational learning theory and applies it to an analysis of current practice and controversy. Organizations learn from others, from defects, from measurement, and from mindfulness. These learning modes correspond with contemporary themes of collaboration, no blame for human error, accountability for performance, and managing the unexpected. The collaborative model has dominated improvement efforts. Greater attention to the underdeveloped modes of organizational learning may foster more rapid progress in patient safety by increasing organizational capabilities, strengthening a culture of safety, and fixing more of the process problems that contribute to patient harm.

  1. Use of failure mode effect analysis (FMEA) to improve medication management process.

    PubMed

    Jain, Khushboo

    2017-03-13

    Purpose Medication management is a complex process, at high risk of error with life-threatening consequences. The focus should be on devising strategies to avoid errors and make the process self-reliable by ensuring prevention of errors and/or error detection at subsequent stages. The purpose of this paper is to use failure mode effect analysis (FMEA), a systematic proactive tool, to identify the likelihood of and the causes for the process failing at various steps, and to prioritise them to devise risk reduction strategies to improve patient safety. Design/methodology/approach The study was designed as an observational analytical study of the medication management process in the inpatient area of a multi-speciality hospital in Gurgaon, Haryana, India. A team was formed to study the complex process of medication management in the hospital, and the FMEA tool was used. Corrective actions were developed based on the prioritised failure modes, then implemented and monitored. Findings The percentage distribution of medication errors as observed by the team was found to be highest for transcription errors (37 per cent) followed by administration errors (29 per cent), indicating the need to identify the causes and effects of their occurrence. In all, 11 failure modes were identified, of which the major five were prioritised based on the risk priority number (RPN). The process was repeated after corrective actions were taken, which resulted in about a 40 per cent (average) and up to a 60 per cent reduction in the RPN of the prioritised failure modes. Research limitations/implications FMEA is a time-consuming process and requires a multidisciplinary team with a good understanding of the process being analysed. FMEA only helps in identifying the possibilities of a process to fail; it does not eliminate them, and additional efforts are required to develop action plans and implement them. Frank discussion and agreement among the team members are required not only for successfully conducting FMEA but also for implementing the corrective actions. Practical implications FMEA is an effective proactive risk-assessment tool and a continuous process which can be carried out in phases. The corrective actions taken resulted in a reduction in RPN, subject to further evaluation and use by others depending on the facility type. Originality/value The application of the tool helped the hospital identify failures in the medication management process, thereby prioritising and correcting them, leading to improvement.
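
    The prioritisation step turns on the risk priority number. A minimal sketch follows, using the common convention RPN = severity × occurrence × detection on 1-10 scales; the failure modes and scores are invented for illustration, not taken from the study:

```python
# Hedged sketch of FMEA prioritisation by risk priority number (RPN).
# Failure modes and scores below are invented for illustration.
failure_modes = [
    ("prescription transcribed incorrectly", 8, 6, 5),
    ("wrong dose administered",              9, 4, 4),
    ("drug dispensed to wrong patient",      9, 2, 3),
    ("allergy history not checked",          7, 3, 6),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s * o * d:4d}  {name}")
```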

  2. How important is mode-coupling in global surface wave tomography?

    NASA Astrophysics Data System (ADS)

    Mikesell, Dylan; Nolet, Guust; Voronin, Sergey; Ritsema, Jeroen; Van Heijst, Hendrik-Jan

    2016-04-01

    To investigate the influence of mode coupling for fundamental mode Rayleigh waves with periods between 64 and 174 s, we analysed 3,505,902 phase measurements obtained along minor arc trajectories as well as 2,163,474 phases along major arcs. This is a selection of five frequency bands from the data set of Van Heijst and Woodhouse, extended with more recent earthquakes, that served to define upper mantle S velocity in model S40RTS. Since accurate estimation of the misfits (as represented by χ2) is essential, we used the method of Voronin et al. (GJI 199:276, 2014) to obtain objective estimates of the standard errors in this data set. We adapted Voronin's method slightly to prevent systematic errors along clusters of raypaths from being accommodated by source corrections; this was done by simultaneously analysing multiple clusters of raypaths originating from the same group of earthquakes but traveling in different directions. For the minor arc data, phase errors at the one-sigma level range from 0.26 rad at a period of 174 s to 0.89 rad at 64 s; for the major arcs, these errors are roughly twice as high (0.40 and 2.09 rad, respectively). In the subsequent inversion we removed any outliers that could not be fitted at the 3-sigma level in an almost undamped inversion. Using these error estimates and the theory of finite-frequency tomography to include the effects of scattering, we solved for models with χ2 = N (the number of data), both including and excluding the effect of mode coupling between Love and Rayleigh waves. We shall present some dramatic differences between the two models, notably near ocean-continent boundaries (e.g. California), where mode conversions are likely to be largest. A sharpening of other features, such as cratons and high-velocity blobs in the oceanic domain, is also observed when mode coupling is taken into account. An investigation of the influence of coupling on azimuthal anisotropy is still under way at the time of writing of this abstract, but the results will be included in the presentation.

  3. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  4. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  5. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
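
    The HEART quantification referred to in these three records follows a standard pattern: a generic task unreliability is multiplied by an assessed factor for each applicable error-producing condition (EPC). The sketch below uses invented values, not the NASA mishap data:

```python
# Hedged sketch of a HEART-style calculation. The assessed HEP is the generic
# task unreliability multiplied, for each applicable error-producing condition
# (EPC), by ((max_effect - 1) * assessed_proportion_of_affect + 1).
# All numbers below are invented.
generic_task_unreliability = 0.003   # e.g., a routine, well-practised task

epcs = [
    # (description,           max effect, assessed proportion of affect)
    ("time shortage",         11.0,       0.4),
    ("operator inexperience",  3.0,       0.5),
]

hep = generic_task_unreliability
for name, max_effect, proportion in epcs:
    hep *= (max_effect - 1.0) * proportion + 1.0

print(f"assessed HEP: {hep:.4f}")   # 0.003 * 5.0 * 2.0 = 0.03
```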

  6. Quality Issues in Propulsion

    NASA Technical Reports Server (NTRS)

    McCarty, John P.; Lyles, Garry M.

    1997-01-01

    Propulsion system quality is defined in this paper as high reliability, that is, a high probability of within-tolerance performance or operation. Since failures are out-of-tolerance performance, the probability of failures and their occurrence is the difference between high- and low-quality systems. Failures can be described at three levels: the system failure (the detectable end of a failure), the failure mode (the failure process), and the failure cause (the start). Failure causes can be evaluated and classified by type. The results of typing flight-history failures show that most failures are in unrecognized modes and result from human error or noise, i.e., failures are when engineers learn how things really work. Although the study is based on US launch vehicles, a sampling of failures from other countries indicates the finding has broad application. The parameters of the design of a propulsion system are not single-valued but have dispersions associated with the manufacturing of parts. Many tests are needed to find failures if the dispersions are large relative to tolerances, which could contribute to the large number of failures in unrecognized modes.

  7. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  8. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  9. Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

    DOE PAGES

    Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan

    2015-07-29

    Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method was based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical for human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. The data collection framework and process is then described, and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. These challenges reflect the data needs specific to IDHEAS; more importantly, they also represent general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.

  10. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
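
    A minimal Monte Carlo sketch in the same spirit follows; it uses plain sampling only (the variance reduction techniques that make MC-HARP efficient are not shown), and all parameters and the system structure are invented for illustration:

```python
import numpy as np

# Hedged sketch: plain Monte Carlo reliability of a 2-out-of-3 redundant system
# with Weibull (non-constant hazard) component lifetimes plus a common-mode
# failure that fails all channels at once. Parameters are invented.
rng = np.random.default_rng(42)
n_trials, mission_time = 100_000, 1000.0   # trials, hours
shape, scale = 1.5, 5000.0                 # Weibull shape/scale
common_mode_rate = 1e-5                    # per hour, exponential

t_comp = scale * rng.weibull(shape, size=(n_trials, 3))   # channel lifetimes
t_cm = rng.exponential(1.0 / common_mode_rate, size=n_trials)

t_2oo3 = np.sort(t_comp, axis=1)[:, 1]     # second channel failure ends the system
t_sys = np.minimum(t_2oo3, t_cm)           # or the common-mode event, if earlier

print(f"estimated mission reliability: {np.mean(t_sys > mission_time):.5f}")
```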

  11. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' response to error-likely situations was examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  12. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high-resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.

  13. Human error and human factors engineering in health care.

    PubMed

    Welch, D L

    1997-01-01

    Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940s. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.

  14. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    NASA Astrophysics Data System (ADS)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of two sub-models: (1) a PM10 concentration forecasting model; and (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. An extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. To demonstrate the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin, China, are adopted for the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates its superior performance.
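
    A schematic of the decompose-forecast-recombine-then-correct idea described above, with a crude moving-average split standing in for FEEMD/VMD and a tiny random-hidden-layer regressor standing in for the CS-optimized ELM; everything is fitted in-sample for brevity, and all names and parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def decompose(x, win=7):
            """Stand-in for FEEMD/VMD: split into a smooth trend and a residual."""
            trend = np.convolve(x, np.ones(win) / win, mode="same")
            return [trend, x - trend]

        def lagged(x, p=5):
            """Build (lag-vector, target) pairs; inputs scaled for the tanh layer."""
            X = np.stack([x[i:len(x) - p + i] for i in range(p)], axis=1)
            return (X - X.mean()) / (X.std() + 1e-9), x[p:]

        def fit_elm(X, y, hidden=32, ridge=1e-3):
            """Minimal ELM: random hidden layer, ridge-regressed output weights."""
            W = rng.normal(size=(X.shape[1], hidden))
            H = np.tanh(X @ W)
            beta = np.linalg.solve(H.T @ H + ridge * np.eye(hidden), H.T @ y)
            return lambda Xn: np.tanh(Xn @ W) @ beta

        pm10 = rng.gamma(4.0, 20.0, size=500)      # synthetic stand-in series
        forecast = np.zeros(len(pm10) - 5)
        for comp in decompose(pm10):               # forecast each component, then sum
            X, y = lagged(comp)
            forecast += fit_elm(X, y)(X)

        err = pm10[5:] - forecast                  # error-correction stage:
        Xe, ye = lagged(err)                       # model the residual series too
        forecast[5:] += fit_elm(Xe, ye)(Xe)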

  15. Liquid chromatography-tandem mass spectrometry method of loxoprofen in human plasma.

    PubMed

    Lee, Hye Won; Ji, Hye Young; Sohn, Dong Hwan; Kim, Se-Mi; Lee, Yong Bok; Lee, Hye Suk

    2009-07-01

    A rapid, sensitive and selective liquid chromatography-electrospray ionization mass spectrometric method for the determination of loxoprofen in human plasma was developed. Loxoprofen and ketoprofen (internal standard) were extracted from 20 microL of human plasma sample using ethyl acetate at acidic pH and analyzed on an Atlantis dC(18) column with the mobile phase of methanol:water (75:25, v/v). The analytes were quantified in the selected reaction monitoring mode. The standard curve was linear over the concentration range of 0.1-20 microg/mL with a lower limit of quantification of 0.1 microg/mL. The coefficient of variation and relative error for intra- and inter-assay at four quality control levels were 2.8-5.2 and 4.8-7.0%, respectively. The recoveries of loxoprofen and ketoprofen were 69.7 and 67.6%, respectively. The matrix effects for loxoprofen and ketoprofen were practically absent. This method was successfully applied to the pharmacokinetic study of loxoprofen in humans. (c) 2009 John Wiley & Sons, Ltd.
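
    For reference, the intra-/inter-assay precision (CV) and accuracy (relative error) figures quoted above are computed per quality-control level essentially as follows; the replicate values here are made up, not the study's data:

        import numpy as np

        nominal = 0.3                                   # microg/mL, one assumed QC level
        replicates = np.array([0.31, 0.29, 0.32, 0.30, 0.28])   # made-up measurements

        cv = 100 * replicates.std(ddof=1) / replicates.mean()   # precision, %
        re = 100 * (replicates.mean() - nominal) / nominal      # accuracy, %
        print(f"CV = {cv:.1f}%, relative error = {re:.1f}%")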

  16. Application of a Reduced Order Kalman Filter to Initialize a Coupled Atmosphere-Ocean Model: Impact on the Prediction of El Nino

    NASA Technical Reports Server (NTRS)

    Ballabrera-Poy, J.; Busalacchi, A.; Murtugudde, R.

    2000-01-01

    A reduced order Kalman Filter, based on a simplification of the Singular Evolutive Extended Kalman (SEEK) filter equations, is used to assimilate observed fields of the surface wind stress, sea surface temperature and sea level into the nonlinear coupled ocean-atmosphere model of Zebiak and Cane. The SEEK filter projects the Kalman Filter equations onto a subspace defined by the eigenvalue decomposition of the error forecast matrix, allowing its application to high dimensional systems. The Zebiak and Cane model couples a linear reduced gravity ocean model with a single vertical mode atmospheric model of Zebiak. The compatibility between the simplified physics of the model and each observed variable is studied separately and together. The results show the ability of the model to represent the simultaneous value of the wind stress, SST and sea level, when the fields are limited to the latitude band 10 deg S - 10 deg N. In this first application of the Kalman Filter to a coupled ocean-atmosphere prediction model, the sea level fields are assimilated in terms of the Kelvin and Rossby modes of the thermocline depth anomaly. An estimation of the error of these modes is derived from the projection of an estimation of the sea level error onto such modes. This method gives an error of 12 m for the Kelvin amplitude and 6 m for the Rossby component of the thermocline depth. The ability of the method to reconstruct the state of the equatorial Pacific and predict its time evolution is demonstrated. The method is shown to be quite robust for predictions up to six months, and able to predict the onset of the 1997 warm event fifteen months before its occurrence.
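
    A minimal numerical sketch of the reduced-order idea behind the SEEK filter: approximate the forecast error covariance by its leading eigenmodes and compute the gain within that subspace. The toy dimensions and random operators below are illustrative, not the Zebiak-Cane configuration:

        import numpy as np

        rng = np.random.default_rng(1)
        n, r, m = 200, 10, 40               # state size, retained modes, observations

        A = rng.normal(size=(n, n))
        P_f = A @ A.T / n                   # toy forecast error covariance
        w, V = np.linalg.eigh(P_f)
        L, U = V[:, -r:], np.diag(w[-r:])   # leading eigenmodes and their variances

        H = rng.normal(size=(m, n))         # observation operator
        R = np.eye(m)                       # observation error covariance
        HL = H @ L
        K = L @ U @ HL.T @ np.linalg.inv(HL @ U @ HL.T + R)   # reduced-rank gain

        x_f = rng.normal(size=n)            # forecast state
        y = H @ x_f + rng.normal(size=m)    # synthetic observations
        x_a = x_f + K @ (y - H @ x_f)       # analysis increment lies in span(L)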

  17. Application of a Reduced Order Kalman Filter to Initialize a Coupled Atmosphere-Ocean Model: Impact on the Prediction of El Nino

    NASA Technical Reports Server (NTRS)

    Ballabrera-Poy, Joaquim; Busalacchi, Antonio J.; Murtugudde, Ragu

    2000-01-01

    A reduced order Kalman Filter, based on a simplification of the Singular Evolutive Extended Kalman (SEEK) filter equations, is used to assimilate observed fields of the surface wind stress, sea surface temperature and sea level into the nonlinear coupled ocean-atmosphere model of Zebiak and Cane. The SEEK filter projects the Kalman Filter equations onto a subspace defined by the eigenvalue decomposition of the error forecast matrix, allowing its application to high dimensional systems. The Zebiak and Cane model couples a linear reduced gravity ocean model with a single vertical mode atmospheric model of Zebiak. The compatibility between the simplified physics of the model and each observed variable is studied separately and together. The results show the ability of the model to represent the simultaneous value of the wind stress, SST and sea level, when the fields are limited to the latitude band 10 deg S - 10 deg N. In this first application of the Kalman Filter to a coupled ocean-atmosphere prediction model, the sea level fields are assimilated in terms of the Kelvin and Rossby modes of the thermocline depth anomaly. An estimation of the error of these modes is derived from the projection of an estimation of the sea level error onto such modes. This method gives an error of 12 m for the Kelvin amplitude and 6 m for the Rossby component of the thermocline depth. The ability of the method to reconstruct the state of the equatorial Pacific and predict its time evolution is demonstrated. The method is shown to be quite robust for predictions up to six months, and able to predict the onset of the 1997 warm event fifteen months before its occurrence.

  18. Using failure mode and effects analysis to plan implementation of smart i.v. pump technology.

    PubMed

    Wetterneck, Tosha B; Skibinski, Kathleen A; Roberts, Tanita L; Kleppin, Susan M; Schroeder, Mark E; Enloe, Myra; Rough, Steven S; Hundt, Ann Schoofs; Carayon, Pascale

    2006-08-15

    Failure mode and effects analysis (FMEA) was used to evaluate a smart i.v. pump as it was implemented into a redesigned medication-use process. A multidisciplinary team conducted an FMEA to guide the implementation of a smart i.v. pump designed to prevent pump programming errors. The smart i.v. pump was equipped with a dose-error reduction system that included a predefined drug library in which dosage limits were set for each medication. Monitoring for potential failures and errors occurred for three months after implementation. Specific measures were used to determine the success of the actions implemented as a result of the FMEA. The FMEA process at the hospital identified key failure modes in the medication process with the use of the old and new pumps, and actions were taken to avoid errors and adverse events. I.V. pump software and hardware design changes were also recommended. Thirteen of the 18 failure modes reported in practice after pump implementation had been identified by the team. A beneficial outcome of the FMEA was the development of a multidisciplinary team that provided the infrastructure for safe technology implementation and effective event investigation after implementation. With the continual updating of i.v. pump software and hardware after implementation, FMEA can be an important starting place for safe technology choice and implementation and can produce site experts to follow technology and process changes over time. FMEA was useful in identifying potential problems in the medication-use process with the implementation of new smart i.v. pumps. Monitoring for system failures and errors after implementation remains necessary.

  19. 25-Gbit/s burst-mode optical receiver using high-speed avalanche photodiode for 100-Gbit/s optical packet switching.

    PubMed

    Nada, Masahiro; Nakamura, Makoto; Matsuzaki, Hideaki

    2014-01-13

    25-Gbit/s error-free operation of an optical receiver is successfully demonstrated against burst-mode optical input signals without preambles. The receiver, with a high-sensitivity avalanche photodiode and burst-mode transimpedance amplifier, exhibits sufficient receiver sensitivity and an extremely quick response suitable for burst-mode operation in 100-Gbit/s optical packet switching.

  20. On the Error State Selection for Stationary SINS Alignment and Calibration Kalman Filters—Part II: Observability/Estimability Analysis

    PubMed Central

    Silva, Felipe O.; Hemerly, Elder M.; Leite Filho, Waldemar C.

    2017-01-01

    This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions. PMID:28241494

  1. Use of FMEA analysis to reduce risk of errors in prescribing and administering drugs in paediatric wards: a quality improvement report

    PubMed Central

    Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio

    2012-01-01

    Objective Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Design and setting Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and to plan changes in practices. Primary outcome To identify higher-priority potential failure modes as defined by RPNs and to plan changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. Results In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities into the revised process of administering drugs reduced the high-risk failure modes by 60%. Conclusions FMEA is an effective proactive risk-assessment tool, useful for aiding multidisciplinary groups in understanding a care process, identifying errors that may occur, prioritising remedial interventions and possibly enhancing the safety of drug delivery in children. PMID:23253870
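
    For reference, the risk priority number used above is conventionally the product of occurrence, severity, and detectability scores. A minimal sketch with invented failure modes on 1-10 scales follows; only the >48 threshold comes from the study:

        # Each failure mode scored 1-10 on occurrence (O), severity (S),
        # and detectability (D); RPN = O * S * D. All entries are illustrative.
        failure_modes = {
            "dose miscalculation":           (4, 8, 5),
            "wrong dilution of an IV drug":  (3, 7, 4),
            "illegible prescription":        (2, 5, 3),
        }

        rpn = {name: o * s * d for name, (o, s, d) in failure_modes.items()}
        for name in sorted(rpn, key=rpn.get, reverse=True):
            flag = "  <- prioritize" if rpn[name] > 48 else ""
            print(f"{name}: RPN = {rpn[name]}{flag}")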

  2. Use of FMEA analysis to reduce risk of errors in prescribing and administering drugs in paediatric wards: a quality improvement report.

    PubMed

    Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio

    2012-01-01

    Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and to plan changes in practices, with the aim of identifying higher-priority potential failure modes as defined by RPNs and planning changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities into the revised process of administering drugs reduced the high-risk failure modes by 60%. FMEA is an effective proactive risk-assessment tool, useful for aiding multidisciplinary groups in understanding a care process, identifying errors that may occur, prioritising remedial interventions and possibly enhancing the safety of drug delivery in children.

  3. Action-Effect Associations in Voluntary and Cued Task-Switching.

    PubMed

    Sommer, Angelika; Lukas, Sarah

    2017-01-01

    The action-control literature claims that humans control their actions in two ways. In the stimulus-based approach, actions are triggered by external stimuli. In the ideomotor approach, actions are elicited endogenously and controlled by the intended goal. In the current study, our purpose was to investigate whether these two action control modes affect task-switching differently. We combined a classical task-switching paradigm with action-effect learning. Both experiments consisted of two experimental phases: an acquisition phase, in which associations between task, response and subsequent action effects were learned, and a test phase, in which the effects of these associations were tested on task performance by presenting the former action effects as preceding effects, prior to the task (called practiced effects). Subjects either chose freely between tasks (ideomotor action control mode) or were cued as to which task to perform (sensorimotor action control mode). We aimed to replicate the consistency effect (i.e., the task is chosen according to the practiced task-effect association) and the non-reversal advantage (i.e., better task performance when the practiced effect matches the previously learned task-effect association). Our results suggest that participants acquired stable action-effect associations independently of the learning mode. The consistency effect (Experiment 1) could be shown, independent of the learning mode, but only at the response level. The non-reversal advantage (Experiment 2) was only evident in the error rates and only for participants who had practiced in the ideomotor action control mode.

  4. Action-Effect Associations in Voluntary and Cued Task-Switching

    PubMed Central

    Sommer, Angelika; Lukas, Sarah

    2018-01-01

    The action-control literature claims that humans control their actions in two ways. In the stimulus-based approach, actions are triggered by external stimuli. In the ideomotor approach, actions are elicited endogenously and controlled by the intended goal. In the current study, our purpose was to investigate whether these two action control modes affect task-switching differently. We combined a classical task-switching paradigm with action-effect learning. Both experiments consisted of two experimental phases: an acquisition phase, in which associations between task, response and subsequent action effects were learned, and a test phase, in which the effects of these associations were tested on task performance by presenting the former action effects as preceding effects, prior to the task (called practiced effects). Subjects either chose freely between tasks (ideomotor action control mode) or were cued as to which task to perform (sensorimotor action control mode). We aimed to replicate the consistency effect (i.e., the task is chosen according to the practiced task-effect association) and the non-reversal advantage (i.e., better task performance when the practiced effect matches the previously learned task-effect association). Our results suggest that participants acquired stable action-effect associations independently of the learning mode. The consistency effect (Experiment 1) could be shown, independent of the learning mode, but only at the response level. The non-reversal advantage (Experiment 2) was only evident in the error rates and only for participants who had practiced in the ideomotor action control mode. PMID:29387027

  5. Human error in airway facilities.

    DOT National Transportation Integrated Search

    2001-01-01

    This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being : passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. : Human factors engin...

  6. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that (1) adverse physiological states, (2) physical/mental limitations, and (3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed most easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
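
    A sketch of the TOPSIS ranking step, crisp rather than fuzzy for brevity: normalize the decision matrix, weight it, and score each error factor by its closeness to the ideal solution. The matrix, the weights, and the benefit-criteria assumption are all invented:

        import numpy as np

        # Rows: error factors; columns: four evaluation criteria (invented scores).
        X = np.array([[7., 5., 8., 6.],
                      [6., 7., 5., 8.],
                      [8., 6., 6., 5.]])
        w = np.array([0.3, 0.2, 0.3, 0.2])          # assumed criteria weights

        V = w * X / np.linalg.norm(X, axis=0)       # normalized, weighted matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)  # all criteria treated as benefits
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)         # 1 = best, 0 = worst
        print(np.argsort(-closeness))               # ranking of the error factors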

  7. Updating finite element dynamic models using an element-by-element sensitivity methodology

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Hemez, Francois M.

    1993-01-01

    A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and the sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.

  8. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based control for toroidal plasmas have shown better performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential estimation methods are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.
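
    A sketch of the system-identification step mentioned above: fit a discrete-time ARX model to input/output data by least squares, the kind of control-oriented model on which a model predictive controller or error-field estimator can be built. The toy plant and model orders are invented:

        import numpy as np

        rng = np.random.default_rng(7)
        N = 2000
        u = rng.normal(size=N)                    # actuator (coil current) input
        y = np.zeros(N)
        for k in range(2, N):                     # "true" plant, unknown to the fit
            y[k] = 1.2 * y[k-1] - 0.5 * y[k-2] + 0.3 * u[k-1] + 0.01 * rng.normal()

        # ARX(2,1) regression: y[k] ~ a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
        Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
        theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
        print("identified [a1, a2, b1] =", np.round(theta, 3))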

  9. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    PubMed

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  10. Effective force control by muscle synergies.

    PubMed

    Berger, Denise J; d'Avella, Andrea

    2014-01-01

    Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands, and they have been shown to explain a large fraction of the variation in muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression, and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4-5 muscle synergies. While trajectories were similar, when feedback was provided according to force estimated from recorded EMGs (EMG-control), trajectories generated with the synergies were on average less accurate. However, when feedback was provided according to recorded force (force-control), we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control, and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination.
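
    A sketch of the analysis pipeline described above on synthetic data: non-negative matrix factorization of an EMG-envelope matrix into a few synergies, plus a least-squares linear map from muscle activity to force. The dimensions follow the abstract; the data are random:

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(3)
        n_samples, n_muscles, n_syn = 500, 13, 4

        emg = rng.gamma(2.0, 1.0, size=(n_samples, n_muscles))  # envelopes >= 0

        # Synergy model: EMG ~ C @ W, with C the activations, W the synergies.
        nmf = NMF(n_components=n_syn, init="nndsvda", max_iter=500, random_state=0)
        C = nmf.fit_transform(emg)          # (n_samples, n_syn) activations
        W = nmf.components_                 # (n_syn, n_muscles) synergy vectors

        # Linear force model fit by least squares: force ~ emg @ B
        force = rng.normal(size=(n_samples, 2))     # planar force (synthetic)
        B, *_ = np.linalg.lstsq(emg, force, rcond=None)
        force_via_synergies = C @ (W @ B)   # force predicted through synergy space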

  11. Response mode-dependent differences in neurofunctional networks during response inhibition: an EEG-beamforming study.

    PubMed

    Dippel, Gabriel; Chmielewski, Witold; Mückschel, Moritz; Beste, Christian

    2016-11-01

    Response inhibition processes are among the most important executive control functions and have been subject to intense research in cognitive neuroscience. However, knowledge of the neurophysiology and functional neuroanatomy of response inhibition is biased because studies usually employ experimental paradigms (e.g., the sustained attention to response task, SART) in which behavior is susceptible to impulsive errors. Here, we investigate whether there are differences in neurophysiological mechanisms and networks depending on the response mode that predominates behavior in a response inhibition task. We do so by comparing a SART with a traditionally formatted task paradigm. We use EEG-beamforming in two tasks inducing opposite response modes during action selection. We focus on theta frequency modulations, since these are implicated in cognitive control processes. The results show that a response mode that is susceptible to impulsive errors (the response mode used in the SART) is associated with stronger theta band activity in the left temporo-parietal junction. The results suggest that the response modes applied during response inhibition differ in the encoding of surprise signals, or related processes of attentional sampling. Response modes during response inhibition seem to differ in the processes necessary to update task representations relevant to behavioral control.

  12. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
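
    A numerical sketch of the diagnostic described above: carry the forecast/analysis cycle of a linear Kalman filter to steady state and check how much analysis error variance the leading eigenmodes capture. The stable dynamics and observation operator are random toys, not the advection or baroclinic models:

        import numpy as np

        rng = np.random.default_rng(4)
        n, m = 60, 10
        M = 0.95 * np.linalg.qr(rng.normal(size=(n, n)))[0]   # stable dynamics
        H = rng.normal(size=(m, n))                           # observation operator
        Q, R = 0.01 * np.eye(n), np.eye(m)

        P = np.eye(n)
        for _ in range(500):                       # forecast/analysis cycle
            P_f = M @ P @ M.T + Q                  # forecast step
            K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
            P = (np.eye(n) - K @ H) @ P_f          # analysis step

        evals = np.linalg.eigvalsh(P)[::-1]        # descending eigenvalues
        print("variance in 5 leading modes:", evals[:5].sum() / evals.sum())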

  13. Q-mode versus R-mode principal component analysis for linear discriminant analysis (LDA)

    NASA Astrophysics Data System (ADS)

    Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz

    2017-05-01

    Much of the literature applies Principal Component Analysis (PCA) for preliminary visualization, for variable construction, or both. The focus of PCA can be on the samples (R-mode PCA) or on the variables (Q-mode PCA). Traditionally, R-mode PCA has been the usual approach to reduce high-dimensional data before the application of Linear Discriminant Analysis (LDA) to solve classification problems. The output of PCA is composed of two new matrices, known as the loadings and scores matrices. Each matrix can be used to produce a plot: the loadings plot aids identification of important variables, whereas the scores plot presents the spatial distribution of samples on the new axes, known as Principal Components (PCs). Fundamentally, the scores matrix is always the input for building a classification model. A recent paper used Q-mode PCA, but the focus of the analysis was not on the variables but on the samples. As a result, the authors exchanged the use of the loadings and scores plots: clustering of samples was studied using the loadings plot, whereas the scores plot was used to identify important manifest variables. The aim of this study is therefore to statistically validate the proposed practice. Evaluation is based on the external error of LDA models as a function of the number of PCs. In addition, bootstrapping was conducted to evaluate the external error of each of the LDA models. Results show that LDA models produced with PCs from R-mode PCA perform logically and their external errors are unbiased, whereas those produced with Q-mode PCA show the opposite. We therefore conclude that PCs produced from Q-mode PCA are not statistically stable and should not be applied to classifying samples, only variables. We hope this paper provides some insight into these disputable issues.
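
    A sketch contrasting the two usages on synthetic two-class data: R-mode PCA scores feed the LDA directly, while the questioned Q-mode practice takes the sample coordinates from the loadings of a PCA of the transposed matrix. All data and dimensions are invented:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n, p, k = 80, 200, 5
        y = np.repeat([0, 1], n // 2)
        X = rng.normal(size=(n, p)) + 0.5 * y[:, None] * rng.normal(size=p)

        # R-mode: PCA over the variables; samples live in the scores matrix.
        scores_r = PCA(n_components=k).fit_transform(X)

        # Q-mode: PCA of the transposed matrix; the sample coordinates end up
        # in the loadings (components_), which the questioned practice then
        # feeds to the classifier.
        scores_q = PCA(n_components=k).fit(X.T).components_.T

        for name, Z in [("R-mode", scores_r), ("Q-mode", scores_q)]:
            acc = cross_val_score(LinearDiscriminantAnalysis(), Z, y, cv=5).mean()
            print(name, "CV accuracy:", round(acc, 3))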

  14. The estimation of pointing angle and normalized surface scattering cross section from GEOS-3 radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Brown, G. S.; Curry, W. J.

    1977-01-01

    The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data, and on the waveform attitude-induced altitude bias, is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive-mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.

  15. Direct Geolocation of TerraSAR-X Spotlight Mode Image and Error Correction

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Zeng, Qiming; Jiao, Jian; Zhang, Jingfa; Gong, Lixia

    2013-01-01

    The German TerraSAR-X mission was launched in June 2007, operating a versatile new-generation SAR sensor in X-band. Its Spotlight mode provides SAR images at a very high resolution of about 1 m. The product's specified 3-D geolocation accuracy is tightened to 1 m according to the official technical report. Achieving this accuracy, however, relies not only on a robust mathematical basis for SAR geolocation but also on good knowledge of the error sources and their correction. This research focuses on the geolocation of TerraSAR-X Spotlight images. The mathematical model and solving algorithms are analyzed, and several error sources are investigated and corrected. The effectiveness and accuracy of the approach were verified by experimental results.

  16. Self-error-rejecting photonic qubit transmission in polarization-spatial modes with linear optical elements

    NASA Astrophysics Data System (ADS)

    Jiang, YuXiao; Guo, PengLiang; Gao, ChengYan; Wang, HaiBo; Alzahrani, Faris; Hobiny, Aatef; Deng, FuGuo

    2017-12-01

    We present an original self-error-rejecting photonic qubit transmission scheme for both the polarization and spatial states of photon systems transmitted over collective noise channels. In our scheme, we use simple linear-optical elements, including half-wave plates, 50:50 beam splitters, and polarization beam splitters, to convert spatial-polarization modes into different time bins. By using postselection in different time bins, the success probability of obtaining the uncorrupted states approaches 1/4 for single-photon transmission, which is not influenced by the coefficients of noisy channels. Our self-error-rejecting transmission scheme can be generalized to hyperentangled n-photon systems and is useful in practical high-capacity quantum communications with photon systems in two degrees of freedom.

  17. Experimental and theoretical studies of active control of resistive wall mode growth in the EXTRAP T2R reversed-field pinch

    NASA Astrophysics Data System (ADS)

    Drake, J. R.; Brunsell, P. R.; Yadikin, D.; Cecconello, M.; Malmberg, J. A.; Gregoratto, D.; Paccagnella, R.; Bolzonella, T.; Manduchi, G.; Marrelli, L.; Ortolani, S.; Spizzo, G.; Zanca, P.; Bondeson, A.; Liu, Y. Q.

    2005-07-01

    Active feedback control of resistive wall modes (RWMs) has been demonstrated in the EXTRAP T2R reversed-field pinch experiment. The control system includes a sensor consisting of an array of magnetic coils (measuring mode harmonics) and an actuator consisting of a saddle coil array (producing control harmonics). Closed-loop (feedback) experiments using a digital controller based on a real time Fourier transform of sensor data have been studied for cases where the feedback gain was constant and real for all harmonics (corresponding to an intelligent-shell) and cases where the feedback gain could be set for selected harmonics, with both real and complex values (targeted harmonics). The growth of the dominant RWMs can be reduced by feedback for both the intelligent-shell and targeted-harmonic control systems. Because the number of toroidal positions of the saddle coils in the array is half the number of the sensors, it is predicted and observed experimentally that the control harmonic spectrum has sidebands. Individual unstable harmonics can be controlled with real gains. However if there are two unstable mode harmonics coupled by the sideband effect, control is much less effective with real gains. According to the theory, complex gains give better results for (slowly) rotating RWMs, and experiments support this prediction. In addition, open loop experiments have been used to observe the effects of resonant field errors applied to unstable, marginally stable and robustly stable modes. The observed effects of field errors are consistent with the thin-wall model, where mode growth is proportional to the resonant field error amplitude and the wall penetration time for that mode harmonic.

  18. Comparison of gating methods for the real-time analysis of left ventricular function in nonimaging blood pool studies.

    PubMed

    Beard, B B; Stewart, J R; Shiavi, R G; Lorenz, C H

    1995-01-01

    Gating methods developed for electrocardiographic-triggered radionuclide ventriculography are being used with nonimaging detectors. These methods have not been compared on the basis of their real-time performance or suitability for determination of load-independent indexes of left ventricular function. This work evaluated the relative merits of different gating methods for nonimaging radionuclide ventriculographic studies, with particular emphasis on their suitability for real-time measurements and the determination of pressure-volume loops. A computer model was used to investigate the relative accuracy of forward gating, backward gating, and phase-mode gating. The durations of simulated left ventricular time-activity curves were randomly varied. Three acquisition parameters were considered: frame rate, acceptance window, and sample size. Twenty-five studies were performed for each combination of acquisition parameters. Hemodynamic and shape parameters from each study were compared with reference parameters derived directly from the random time-activity curves. Backward gating produced the largest errors under all conditions. For both forward gating and phase-mode gating, ejection fraction was underestimated and time to end systole and normalized peak ejection rate were overestimated. For the hemodynamic parameters, forward gating was marginally superior to phase-mode gating. The mean difference in errors between forward and phase-mode gating was 1.47% (SD 2.78%). However, for root mean square shape error, forward gating was several times worse in every case and seven times worse than phase-mode gating on average. Both forward and phase-mode gating are suitable for real-time hemodynamic measurements by nonimaging techniques. The small statistical difference between the methods is not clinically significant. The true shape of the time-activity curve is maintained most accurately by phase-mode gating.

  19. Comparison of gating methods for the real-time analysis of left ventricular function in nonimaging blood pool studies

    PubMed Central

    Beard, Brian B.; Stewart, James R.; Shiavi, Richard G.; Lorenz, Christine H.

    2018-01-01

    Background Gating methods developed for electrocardiographic-triggered radionuclide ventriculography are being used with nonimaging detectors. These methods have not been compared on the basis of their real-time performance or suitability for determination of load-independent indexes of left ventricular function. This work evaluated the relative merits of different gating methods for nonimaging radionuclide ventriculographic studies, with particular emphasis on their suitability for real-time measurements and the determination of pressure-volume loops. Methods and Results A computer model was used to investigate the relative accuracy of forward gating, backward gating, and phase-mode gating. The durations of simulated left ventricular time-activity curves were randomly varied. Three acquisition parameters were considered: frame rate, acceptance window, and sample size. Twenty-five studies were performed for each combination of acquisition parameters. Hemodynamic and shape parameters from each study were compared with reference parameters derived directly from the random time-activity curves. Backward gating produced the largest errors under all conditions. For both forward gating and phase-mode gating, ejection fraction was underestimated and time to end systole and normalized peak ejection rate were overestimated. For the hemodynamic parameters, forward gating was marginally superior to phase-mode gating. The mean difference in errors between forward and phase-mode gating was 1.47% (SD 2.78%). However, for root mean square shape error, forward gating was several times worse in every case and seven times worse than phase-mode gating on average. Conclusions Both forward and phase-mode gating are suitable for real-time hemodynamic measurements by nonimaging techniques. The small statistical difference between the methods is not clinically significant. The true shape of the time-activity curve is maintained most accurately by phase-mode gating. PMID:9420820

  20. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  1. Human-system Interfaces to Automatic Systems: Review Guidance and Technical Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OHara, J.M.; Higgins, J.C.

    Automation has become ubiquitous in modern complex systems and commercial nuclear power plants are no exception. Beyond the control of plant functions and systems, automation is applied to a wide range of additional functions including monitoring and detection, situation assessment, response planning, response implementation, and interface management. Automation has become a 'team player' supporting plant personnel in nearly all aspects of plant operation. In light of the increasing use and importance of automation in new and future plants, guidance is needed to enable the NRC staff to conduct safety reviews of the human factors engineering (HFE) aspects of modern automation. The objective of the research described in this report was to develop guidance for reviewing the operator's interface with automation. We first developed a characterization of the important HFE aspects of automation based on how it is implemented in current systems. The characterization included five dimensions: level of automation, function of automation, modes of automation, flexibility of allocation, and reliability of automation. Next, we reviewed literature pertaining to the effects of these aspects of automation on human performance and the design of human-system interfaces (HSIs) for automation. Then, we used the technical basis established by the literature to develop design review guidance. The guidance is divided into the following seven topics: automation displays, interaction and control, automation modes, automation levels, adaptive automation, error tolerance and failure management, and HSI integration. In addition, we identified insights into the automation design process, operator training, and operations.

  2. Fourier mode analysis of slab-geometry transport iterations in spatially periodic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, E; Zika, M

    1999-04-01

    We describe a Fourier analysis of the diffusion-synthetic acceleration (DSA) and transport-synthetic acceleration (TSA) iteration schemes for a spatially periodic, but otherwise arbitrarily heterogeneous, medium. Both DSA and TSA converge more slowly in a heterogeneous medium than in a homogeneous medium composed of the volume-averaged scattering ratio. In the limit of a homogeneous medium, our heterogeneous analysis contains eigenvalues of multiplicity two at "resonant" wave numbers. In the presence of material heterogeneities, error modes corresponding to these resonant wave numbers are "excited" more than other error modes. For DSA and TSA, the iteration spectral radius may occur at these resonant wave numbers, in which case the material heterogeneities most strongly affect iterative performance.
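
    The iterative-performance statements above come down to the spectral radius of the scheme's error-propagation operator; a generic numerical check on a toy matrix (not the transport operator itself):

        import numpy as np

        T = np.array([[0.40, 0.15, 0.00],      # toy error-propagation matrix
                      [0.10, 0.55, 0.20],
                      [0.00, 0.25, 0.35]])
        rho = max(abs(np.linalg.eigvals(T)))   # spectral radius
        print(f"spectral radius = {rho:.3f}; the iteration converges iff rho < 1")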

  3. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  4. Free-space optics mode-wavelength division multiplexing system using LG modes based on decision feedback equalization

    NASA Astrophysics Data System (ADS)

    Amphawan, Angela; Ghazi, Alaan; Al-dawoodi, Aras

    2017-11-01

    A free-space optics mode-wavelength division multiplexing (MWDM) system using Laguerre-Gaussian (LG) modes is designed using decision feedback equalization for controlling mode coupling and combating inter symbol interference so as to increase channel diversity. In this paper, a data rate of 24 Gbps is achieved for a FSO MWDM channel of 2.6 km in length using feedback equalization. Simulation results show significant improvement in eye diagrams and bit-error rates before and after decision feedback equalization.
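
    A minimal LMS-trained decision feedback equalizer of the kind invoked above, for binary symbols through an intersymbol-interference channel. The channel taps, filter lengths, and step size are illustrative, not the paper's simulation setup:

        import numpy as np

        rng = np.random.default_rng(6)
        bits = rng.integers(0, 2, 5000) * 2 - 1       # +/-1 symbols
        channel = np.array([1.0, 0.45, 0.2])          # assumed ISI channel
        r = np.convolve(bits, channel)[:len(bits)]
        r += 0.05 * rng.normal(size=len(r))           # additive noise

        nf, nb, mu = 5, 3, 0.01                       # tap counts, LMS step size
        f, b = np.zeros(nf), np.zeros(nb)             # feedforward, feedback taps
        past = np.zeros(nb)                           # previously decided symbols
        errors = 0
        for k in range(nf, len(bits)):
            x = r[k - nf + 1:k + 1][::-1]             # received-sample window
            yk = f @ x - b @ past                     # equalizer output
            d = 1.0 if yk >= 0 else -1.0              # symbol decision
            errors += d != bits[k]
            e = bits[k] - yk                          # training-mode error
            f += mu * e * x                           # LMS updates
            b -= mu * e * past
            past = np.concatenate(([bits[k]], past[:-1]))  # feed back known symbols
        print("symbol error rate ~", errors / (len(bits) - nf))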

  5. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    NASA Astrophysics Data System (ADS)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.
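
    Under a standard pinhole-projection model, an out-of-plane velocity w at in-plane radius r from the optical axis appears as a spurious in-plane velocity of roughly w*r/z, where z is the distance from the lens to the light sheet. A sketch of the worst-case estimate suggested above, with all geometry and the velocity ratio assumed:

        import numpy as np

        z = 0.5                               # lens-to-sheet distance, m (assumed)
        x = np.linspace(-0.05, 0.05, 101)     # in-plane positions, m (assumed)
        v_swirl_max = 2.0                     # max swirling velocity, m/s (assumed)
        ratio_max = 0.6                       # assumed cap on w_max / v_swirl_max

        w_max = ratio_max * v_swirl_max
        u_err = w_max * np.abs(x) / z         # worst-case in-plane error, m/s
        print("max perspective error: %.3f m/s (%.1f%% of max swirl)"
              % (u_err.max(), 100 * u_err.max() / v_swirl_max))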

  6. Crew/Automation Interaction in Space Transportation Systems: Lessons Learned from the Glass Cockpit

    NASA Technical Reports Server (NTRS)

    Rudisill, Marianne

    2000-01-01

    The progressive integration of automation technologies in commercial transport aircraft flight decks - the 'glass cockpit' - has had a major, and generally positive, impact on flight crew operations. Flight deck automation has provided significant benefits, such as economic efficiency, increased precision and safety, and enhanced functionality within the crew interface. These enhancements, however, may have been accrued at a price, such as complexity added to crew/automation interaction that has been implicated in a number of aircraft incidents and accidents. This report briefly describes 'glass cockpit' evolution. Some relevant aircraft accidents and incidents are described, followed by a more detailed description of human/automation issues and problems (e.g., crew error, monitoring, modes, command authority, crew coordination, workload, and training). This paper concludes with example principles and guidelines for considering 'glass cockpit' human/automation integration within space transportation systems.

  7. Thermodynamics of Anharmonic Systems: Uncoupled Mode Approximations for Molecules

    DOE PAGES

    Li, Yi-Pei; Bell, Alexis T.; Head-Gordon, Martin

    2016-05-26

    The partition functions, heat capacities, entropies, and enthalpies of selected molecules were calculated using uncoupled mode (UM) approximations, where the full-dimensional potential energy surface for internal motions was modeled as a sum of independent one-dimensional potentials for each mode. The computational cost of such approaches scales with molecular size in the same way as standard harmonic oscillator vibrational analysis using harmonic frequencies (HO hf). To compute thermodynamic properties, a computational protocol for obtaining the energy levels of each mode was established. The accuracy of the UM approximation depends strongly on how the one-dimensional potentials of each mode are defined. If the potentials are determined by the energy as a function of displacement along each normal mode (UM-N), the accuracies of the calculated thermodynamic properties are not significantly improved over the HO hf model. Significant improvements can be achieved by constructing potentials for internal rotations and vibrations using the energy surfaces along the torsional coordinates and the remaining vibrational normal modes, respectively (UM-VT). For hydrogen peroxide and its isotopologs at 300 K, UM-VT captures more than 70% of the partition functions on average. By contrast, the HO hf model and UM-N can capture no more than 50%. For a selected test set of C2 to C8 linear and branched alkanes and species with different moieties, the enthalpies calculated using the HO hf model, UM-N, and UM-VT are all quite accurate compared with reference values, though the RMS errors of the HO hf model and UM-N are slightly higher than those of UM-VT. However, the accuracies of the entropy calculations differ significantly between these three models. For the same test set, the RMS error of the standard entropies calculated by UM-VT is 2.18 cal mol⁻¹ K⁻¹ at 1000 K. By contrast, the RMS errors obtained using the HO hf model and UM-N are 6.42 and 5.73 cal mol⁻¹ K⁻¹, respectively. For a test set composed of nine alkanes ranging from C5 to C8, the heat capacities calculated with the UM-VT model agree with the experimental values to within an RMS error of 0.78 cal mol⁻¹ K⁻¹, which is less than one-third of the RMS errors of the HO hf (2.69 cal mol⁻¹ K⁻¹) and UM-N (2.41 cal mol⁻¹ K⁻¹) models.
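
    A sketch of the bookkeeping implied above: with independent one-dimensional modes, ln q and the mean energy U add over modes, and the entropy follows as S = k ln q + U/T. Harmonic levels stand in here for the one-dimensional potential scans, and the frequencies are invented:

        import numpy as np

        kB = 1.380649e-23                  # J/K
        h = 6.62607015e-34                 # J s
        T = 1000.0                         # K
        nus = [3.0e13, 1.5e13, 6.0e12]     # mode frequencies, Hz (assumed)

        lnq, U = 0.0, 0.0
        for nu in nus:
            E = h * nu * (np.arange(200) + 0.5)   # 1D levels (harmonic stand-in)
            w = np.exp(-E / (kB * T))
            q = w.sum()
            lnq += np.log(q)               # ln q adds over independent modes
            U += (E * w).sum() / q         # mean energy of this mode

        S = kB * lnq + U / T               # per-molecule vibrational entropy
        print("S =", S * 6.02214076e23, "J/(mol K)")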

  8. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often accidents and systems failure are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models substantiated by sound theoretic foundation to address the occurrence of human errors in the processing of the space shuttle.
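
    One simple stochastic model of the kind the abstract points toward, though not necessarily the authors' formulation, treats error occurrence as a homogeneous Poisson process, so the probability of at least one error in a task of duration t is 1 - exp(-lambda*t):

        import numpy as np

        lam = 0.002                             # assumed error rate per hour
        t = np.array([8.0, 40.0, 160.0])        # task durations, hours (assumed)
        p_error = 1.0 - np.exp(-lam * t)        # P(at least one error by time t)
        for ti, pi in zip(t, p_error):
            print(f"t = {ti:5.0f} h: P(error) = {pi:.3f}")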

  9. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  10. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  11. Timing the Mode Switch in a Sequential Mixed-Mode Survey: An Experimental Evaluation of the Impact on Final Response Rates, Key Estimates, and Costs

    PubMed Central

    Wagner, James; Schroeder, Heather M.; Piskorowski, Andrew; Ursano, Robert J.; Stein, Murray B.; Heeringa, Steven G.; Colpe, Lisa J.

    2017-01-01

    Mixed-mode surveys need to determine a number of design parameters that may have a strong influence on costs and errors. In a sequential mixed-mode design with web followed by telephone, one of these decisions is when to switch modes. The web mode is relatively inexpensive but produces lower response rates. The telephone mode complements the web mode in that it is relatively expensive but produces higher response rates. Among the potential negative consequences, delaying the switch from web to telephone may lead to lower response rates if the effectiveness of the prenotification contact materials is reduced by longer time lags, or if the additional e-mail reminders to complete the web survey annoy the sampled person. On the positive side, delaying the switch may decrease the costs of the survey. We evaluate these costs and errors by experimentally testing four different timings (1, 2, 3, or 4 weeks) for the mode switch in a web–telephone survey. This experiment was conducted on the fourth wave of a longitudinal study of the mental health of soldiers in the U.S. Army. We find that the different timings of the switch in the range of 1–4 weeks do not produce differences in final response rates or key estimates but longer delays before switching do lead to lower costs. PMID:28943717

  12. Differential laser-induced perturbation spectroscopy and fluorescence imaging for biological and materials sensing

    NASA Astrophysics Data System (ADS)

    Burton, Dallas Jonathan

    The field of laser-based diagnostics has been a topic of research in various fields, most notably for applications in environmental studies, military defense technologies, and medicine, among many others. In this dissertation, a novel laser-based optical diagnostic method, differential laser-induced perturbation spectroscopy (DLIPS), has been implemented in a spectroscopy mode and expanded into an imaging mode in combination with fluorescence techniques. The DLIPS method takes advantage of deep ultraviolet (UV) laser perturbation at sub-ablative energy fluences to photochemically cleave bonds and alter the fluorescence signal response before and after perturbation. The resulting difference spectrum or differential image adds more information about the target specimen, and can be used in combination with traditional fluorescence techniques for detection of certain materials, characterization of many materials and biological specimens, and diagnosis of various human skin conditions. The differential aspect allows for mitigation of patient or sample variation, and has the potential to develop into a powerful, noninvasive optical sensing tool. The studies in this dissertation encompass efforts to continue the fundamental research on DLIPS, including expansion of the method to an imaging mode. Five primary studies have been carried out and presented. These include the use of DLIPS in a spectroscopy mode for analysis of nitrogen-based explosives on various substrates, classification of Caribbean fruit flies versus Caribbean fruit flies that have been irradiated with gamma rays, and diagnosis of human skin cancer lesions. The nitrogen-based explosives and Caribbean fruit flies have also been analyzed with the DLIPS scheme using the imaging modality, providing complementary information to the spectroscopic scheme. In each study, a comparison between absolute fluorescence signals and DLIPS responses showed that DLIPS statistically outperformed traditional fluorescence techniques with regard to regression error and classification.

  13. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter, which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes in mesoscale limited area models. The so-called self-breeding method is a development of the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments, we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
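
    A minimal sketch of the self-breeding cycle, using the Lorenz-63 toy model in place of a mesoscale model; the amplitudes, cycle counts, and the QR factorization standing in for the ensemble transform are illustrative assumptions:

        import numpy as np

        def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # one forward-Euler step of the Lorenz-63 toy model
            dx = np.array([sigma * (x[1] - x[0]),
                           x[0] * (rho - x[2]) - x[1],
                           x[0] * x[1] - beta * x[2]])
            return x + dt * dx

        def self_breed(x0, n_cycles=50, n_steps=20, amp=1e-3, n_ens=3, seed=0):
            # integrate perturbations forward, rescale to the initial norm,
            # orthogonalize in ensemble space, and add back to the base state
            rng = np.random.default_rng(seed)
            P = amp * rng.standard_normal((3, n_ens))
            x = x0.astype(float).copy()
            for _ in range(n_cycles):
                xp = x[:, None] + P
                for _ in range(n_steps):          # short forward integration
                    x = lorenz63(x)
                    xp = np.apply_along_axis(lorenz63, 0, xp)
                P = xp - x[:, None]               # grown perturbations
                Q, _ = np.linalg.qr(P)            # stand-in for ensemble transform
                P = amp * Q                       # rescale to the chosen norm
            return P                              # fast-growing mode estimates

        print(self_breed(np.array([1.0, 1.0, 1.0])))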

  14. Analysis of human factors effects on the safety of transporting radioactive waste materials: Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abkowitz, M.D.; Abkowitz, S.B.; Lepofsky, M.

    1989-04-01

    This report examines the extent of human factors effects on the safety of transporting radioactive waste materials. It is seen principally as a scoping effort, to establish whether there is a need for DOE to undertake a more formal approach to studying human factors in radioactive waste transport, and if so, logical directions for that program to follow. Human factors effects are evaluated on driving and loading/transfer operations only. Particular emphasis is placed on the driving function, examining the relationship between human error and safety as it relates to the impairment of driver performance. Although multi-modal in focus, the widespread availability of data and previous literature on truck operations resulted in a primary study focus on the trucking mode from the standpoint of policy development. In addition to the analysis of human factors accident statistics, the report provides relevant background material on several policies that have been instituted or are under consideration, directed at improving human reliability in the transport sector. On the basis of reported findings, preliminary policy areas are identified. 71 refs., 26 figs., 5 tabs.

  15. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  16. Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors

    DTIC Science & Technology

    2017-05-01

    Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors. Timothy M. Yarnall, David J. Geisler, Curt M. Schieler... Massachusetts Avenue, Cambridge, MA 02139, USA. Abstract: Free space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than... these errors. Photonic lanterns provide a means for transitioning from the free space regime to the single-mode fiber (SMF) regime by

  17. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
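
    The two-stage, de-biased formulation can be sketched in a few lines of Python; this is an assumed reading of the augmented-matrix projection step (the function names and truncation rank r are illustrative):

        import numpy as np

        def dmd_eigs(X, Y, r):
            # standard (exact) DMD on snapshot pairs: columns of X map to Y
            U, s, Vh = np.linalg.svd(X, full_matrices=False)
            U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
            Atil = U.conj().T @ Y @ V / s      # projected linear operator
            return np.linalg.eigvals(Atil)

        def tdmd_eigs(X, Y, r):
            # de-biased "total" DMD: project both snapshot sets onto the
            # leading right-singular subspace of the augmented matrix [X; Y]
            # so that noise in X and Y is treated symmetrically, as in
            # total least-squares, then run standard DMD on the projections
            Z = np.vstack([X, Y])
            _, _, Vh = np.linalg.svd(Z, full_matrices=False)
            P = Vh[:r].conj().T @ Vh[:r]       # projector in snapshot space
            return dmd_eigs(X @ P, Y @ P, r)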

  18. The contributions of human factors on human error in Malaysia aviation maintenance industries

    NASA Astrophysics Data System (ADS)

    Padil, H.; Said, M. N.; Azizan, A.

    2018-05-01

    Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics combined with human factors can lead to various types of human related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed and administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study, utilizing AMOS software. Results showed a significant relationship between human factors and human errors in the tested model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety, has provided guidelines for improving human factors performance relating to aviation maintenance activities, and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.

  19. Force Analysis and Energy Operation of Chaotic System of Permanent-Magnet Synchronous Motor

    NASA Astrophysics Data System (ADS)

    Qi, Guoyuan; Hu, Jianbing

    2017-12-01

    The disadvantage of a nondimensionalized model of a permanent-magnet synchronous motor (PMSM) is identified. The original PMSM model is transformed into a Kolmogorov system to aid dynamic force analysis. The vector field of the PMSM is analogous to a force field including four types of torque: inertial, internal, dissipative, and generalized external. Using a feedback viewpoint, the error torque between the external torque and the dissipative torque is identified. The pitchfork bifurcation of the PMSM is analyzed. Four forms of energy are identified for the system: kinetic, potential, dissipative, and supplied. Physical interpretations of the decomposition of force and of the energy exchange are given. The Casimir energy is stored energy, and its rate of change is the error power between the dissipative energy and the energy supplied to the motor. Error torque and error power influence the different types of dynamic modes. The Hamiltonian energy and Casimir energy are compared to find the role of each in producing the dynamic modes. A supremum bound for the chaotic attractor is proposed using the error power and a Lagrange multiplier.

  20. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  1. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies with high risks or major consequences should consider the effects of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, errors in the administration of medication, and major oil spills have all been blamed on human error. One technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available for performing Human Reliability Assessments, ranging from identifying the most likely areas of concern to detailed assessments in which human error failure probabilities are calculated. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies, and routine tasks versus cost-based concerns). A Human Reliability Assessment should be the first step toward reducing, mitigating, or eliminating costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  2. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2014-01-01

    Human error cannot be defined unambiguously in advance of its happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or count as heroic given a more favorable outcome. People often forget that we employ humans in business and industry for their flexibility and capability to change when needed. In complex systems, operations are driven by the system's specifications and structure; people provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents, and incidents in high-risk industries. We do not have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed, and there are actions that can be taken to reduce the risk of human error.

  3. Design Guidance for Computer-Based Procedures for Field Workers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron

    Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers' procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant to the task and situation at hand, which can take up valuable time when operators must be responding to the situation and can potentially lead operators down an incorrect response path. Other challenges related to the use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying on other sources of additional information to ensure a functional and accurate understanding of the current plant status (Converse, 1995; Fink, Killian, Hanes, and Naser, 2009; Le Blanc, Oxstrand, and Waicosky, 2012). This report provides design guidance to be used when designing the human-system interaction and the graphical user interface for a CBP system. The guidance is based on human factors research related to the design and usability of CBPs conducted by Idaho National Laboratory, 2012-2016.

  4. ECG fiducial point extraction using switching Kalman filter.

    PubMed

    Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian

    2018-04-01

    In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, the ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and the ECG baselines are modeled with first order auto regressive models. In the proposed method, a discrete state variable called the "switch" is considered that affects only the observation equations. We denote a mode as a specific observation equation; the switch changes between 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. The ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods, and the proposed method achieves a lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
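
    As a small illustration of the Gaussian waveform model underlying the method, the sketch below synthesizes one ECG beat as a sum of Gaussian functions; the amplitudes, centers, and widths are invented for illustration and are not the parameters used in the paper:

        import numpy as np

        def ecg_beat(t, waves):
            # synthetic beat as a sum of Gaussians, one per wave component
            z = np.zeros_like(t)
            for a, mu, b in waves:            # amplitude, center (s), width (s)
                z += a * np.exp(-0.5 * ((t - mu) / b) ** 2)
            return z

        t = np.linspace(0.0, 0.8, 800)
        waves = [( 0.15, 0.180, 0.025),   # P-wave
                 (-0.10, 0.345, 0.010),   # Q
                 ( 1.00, 0.360, 0.012),   # R
                 (-0.15, 0.375, 0.010),   # S
                 ( 0.30, 0.580, 0.040)]   # T-wave
        beat = ecg_beat(t, waves)         # fiducial points sit on these waves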

  5. New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction

    NASA Astrophysics Data System (ADS)

    Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.

    2017-12-01

    Estimated mass changes from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters like decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new post-processing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any post-processing, the noise and errors in spherical harmonic solutions introduce very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes, which are in good agreement with the leakage-corrected (forward-modeled) SH results.
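
    A one-dimensional analogue of this post-processing idea can be sketched with the PyEMD package (assumed to be installed; the number of discarded IMFs is an illustrative choice): discard the fastest-oscillating empirical modes and keep the underlying pattern.

        import numpy as np
        from PyEMD import EMD   # assumes the PyEMD package is available

        def emd_lowpass(signal, n_drop=2):
            # decompose into intrinsic mode functions (IMFs) and rebuild the
            # series without the first, highest-frequency modes
            imfs = EMD()(signal)
            return imfs[n_drop:].sum(axis=0)

        x = np.linspace(0.0, 10.0, 2000)
        noisy = np.sin(x) + 0.3 * np.random.default_rng(1).standard_normal(x.size)
        smooth = emd_lowpass(noisy)   # high-frequency "stripes" removed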

  6. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background: A permit to work (PTW) is a formal written system used to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods: This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviews with the personnel and study of the plant's procedures, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. Results: The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion: The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the measures required for reducing the error probabilities in the PTW system. Some suggestions are provided to reduce the likelihood of errors, especially by modifying the performance shaping factors and the dependencies among tasks. PMID:27014485
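
    For context, SPAR-H starts from a nominal human error probability (HEP) and multiplies it by performance shaping factor (PSF) multipliers, with an adjustment that keeps the result below 1 when several PSFs are degraded. The sketch below is a simplified reading of that calculation; the trigger condition and example numbers are illustrative:

        def spar_h_hep(nominal, psf_composite, n_negative_psfs):
            # the adjustment formula bounds the adjusted HEP below 1.0 when
            # three or more PSFs are scored as negative (simplified trigger)
            if n_negative_psfs >= 3:
                return (nominal * psf_composite
                        / (nominal * (psf_composite - 1.0) + 1.0))
            return min(nominal * psf_composite, 1.0)

        # diagnosis-type task (nominal HEP 0.01) under a composite PSF of 20
        print(spar_h_hep(0.01, 20.0, 3))   # ~0.17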

  7. Resistive wall mode feedback control in EXTRAP T2R with improved steady-state error and transient response

    NASA Astrophysics Data System (ADS)

    Brunsell, P. R.; Olofsson, K. E. J.; Frassinetti, L.; Drake, J. R.

    2007-10-01

    Experiments in the EXTRAP T2R reversed field pinch [P. R. Brunsell, H. Bergsåker, M. Cecconello et al., Plasma Phys. Control. Fusion 43, 1457 (2001)] on feedback control of m = 1 resistive wall modes (RWMs) are compared with simulations using the cylindrical linear magnetohydrodynamic model, including the dynamics of the active coils and power amplifiers. Stabilization of the main RWMs (n = -11, -10, -9, -8, +5, +6) is shown using modest loop gains of order G ≈ 1. However, other marginally unstable RWMs (n = -2, -1, +1, +2) driven by external field errors are only partially canceled at these gains. The experimental system stability limit is confirmed by simulations showing that the latency of the digital controller (≈50 μs) degrades the system gain margin. The transient response is improved with a proportional-plus-derivative controller, and the steady-state error is improved with a proportional-plus-integral controller. Suppression of all modes is obtained at high gain (G ≈ 10) using a proportional-plus-integral-plus-derivative controller.

  8. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably brings schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after personnel received adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known; they can be used by any organization willing to accept human fallibility and take a proactive approach to incorporating the steps needed to manage and minimize error.

  9. A Comparative Study of Heavy Ion and Proton Induced Bit Error Sensitivity and Complex Burst Error Modes in Commercially Available High Speed SiGe BiCMOS

    NASA Technical Reports Server (NTRS)

    Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb

    2004-01-01

    A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.

  10. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    PubMed

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
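
    The error-rate and relative-risk-reduction arithmetic used in the study reduces to a few one-liners; the counts below are hypothetical and chosen only to reproduce a 22.0% RRR:

        def error_rate(errors, opportunities):
            # errors per 100 opportunities, as defined in the abstract
            return 100.0 * errors / opportunities

        def rrr(rate_before, rate_after):
            # relative risk reduction after the corrective actions
            return (rate_before - rate_after) / rate_before

        before = error_rate(164, 5000)   # hypothetical counts
        after = error_rate(128, 5000)
        print(f"RRR = {100 * rrr(before, after):.1f}%")   # 22.0%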

  11. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  12. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.

  13. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  14. Error Characterization and Mitigation for 16nm MLC NAND Flash Memory Under Total Ionizing Dose Effect

    NASA Technical Reports Server (NTRS)

    Li, Yue (Inventor); Bruck, Jehoshua (Inventor)

    2018-01-01

    A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined (optional) rank modulation scheme, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
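
    A toy sketch of the kind of error-count-driven mode selection the patent describes; the mode names and thresholds are invented for illustration:

        def select_scrub_mode(error_count, thresholds=(4, 16)):
            # map the decoder's current error count to a scrubbing mode
            low, high = thresholds
            if error_count < low:
                return "IDLE"          # error rate acceptable, no scrubbing
            if error_count < high:
                return "BACKGROUND"    # scrub lazily during idle periods
            return "AGGRESSIVE"        # scrub now; consider block retirement

        print(select_scrub_mode(9))    # -> BACKGROUND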

  15. Identification and assessment of common errors in the admission process of patients in Isfahan Fertility and Infertility Center based on "failure modes and effects analysis".

    PubMed

    Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila

    2016-01-01

    Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess the common errors in the reception process by applying the approach of "failure modes and effects analysis" (FMEA). In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for evaluation of its errors based on the team members' decision. At first, the admission process was charted through observations and interviews with employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies all over the world, including in Iran. Its validity was evaluated through content and face validity, and its reliability was evaluated through review and confirmation of the obtained information by the FMEA team. Eventually, possible errors, their causes, and three indicators (severity of effect, probability of occurrence, and probability of detection) were determined, and corrective actions were proposed. Data analysis was based on the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. The team-oriented FMEA method could be useful for assessing errors and for reducing the probability of their occurrence.
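
    The RPN screening described here is straightforward to express in code; the failure modes and scores below are hypothetical:

        def rpn(severity, occurrence, detection):
            # risk priority number = S x O x D, each typically scored 1-10
            return severity * occurrence * detection

        failure_modes = {"wrong patient file": (8, 4, 5),   # hypothetical
                         "missing consent": (6, 3, 4)}
        high_risk = {name: rpn(*sod) for name, sod in failure_modes.items()
                     if rpn(*sod) >= 125}
        print(high_risk)   # modes meeting the RPN >= 125 screening threshold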

  16. Identification and assessment of common errors in the admission process of patients in Isfahan Fertility and Infertility Center based on “failure modes and effects analysis”

    PubMed Central

    Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila

    2016-01-01

    Background: Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess the common errors in the reception process by applying the approach of “failure modes and effects analysis” (FMEA). Materials and Methods: In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for evaluation of its errors based on the team members’ decision. At first, the admission process was charted through observations and interviews with employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies all over the world, including in Iran. Its validity was evaluated through content and face validity, and its reliability was evaluated through review and confirmation of the obtained information by the FMEA team. Eventually, possible errors, their causes, and three indicators (severity of effect, probability of occurrence, and probability of detection) were determined, and corrective actions were proposed. Data analysis was based on the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Results: Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. Conclusions: The team-oriented FMEA method could be useful for assessing errors and for reducing the probability of their occurrence. PMID:28194208

  17. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  18. General Aviation Avionics Statistics.

    DTIC Science & Technology

    1980-12-01

    designed to produce standard errors on these variables at levels specified by the FAA. No controls were placed on the standard errors of the non-design... Transponder Encoding Requirement and Mode C Automatic Altitude Reporting Capability (has been deleted); Two-way Radio; VOR or TACAN Receiver. Remaining 42

  19. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e., to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio-taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, and errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  20. Korean Air Lines Flight 007: Lessons from the Past and Insights for the Future

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Shafto, M. (Technical Monitor)

    2001-01-01

    The majority of the problems pilots encounter when using automated systems center around two factors: (1) the pilot has an incomplete and inadequate model of how the autopilot works; and (2) the displays and flight manuals provided to the pilot are inadequate for the task. The tragic accident of Korean Air Lines Flight 007, a Boeing 747 that deviated from its intended flight path, provides a compelling case study of problems related to pilots' use of automated systems. This paper describes what happened and exposes two types of human-automation interaction problems: (1) the pilots of KAL 007 were not provided with adequate information about the actual behavior of the autopilot and its mode transition logic; and (2) the autopilot onboard KAL 007 did not provide adequate information to the flight crew about its active and armed modes. Both factors, according to the International Civil Aviation Organization (1993) report on the accident, contributed to the aircraft's lethal navigation error.

  1. Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)

    1999-01-01

    Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.

  2. White matter integrity in brain networks relevant to anxiety and depression: evidence from the human connectome project dataset.

    PubMed

    De Witte, Nele A J; Mueller, Sven C

    2017-12-01

    Anxiety and depression are associated with altered communication within global brain networks and between these networks and the amygdala. Functional connectivity studies demonstrate an effect of anxiety and depression on four critical brain networks involved in top-down attentional control (fronto-parietal network; FPN), salience detection and error monitoring (cingulo-opercular network; CON), bottom-up stimulus-driven attention (ventral attention network; VAN), and default mode (default mode network; DMN). However, structural evidence on the white matter (WM) connections within these networks and between these networks and the amygdala is lacking. The current study in a large healthy sample (n = 483) observed that higher trait anxiety-depression predicted lower WM integrity in the connections between amygdala and specific regions of the FPN, CON, VAN, and DMN. We discuss the possible consequences of these anatomical alterations for cognitive-affective functioning and underscore the need for further theory-driven research on individual differences in anxiety and depression on brain structure.

  3. Intervention strategies for the management of human error

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also means of preventing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.

  4. Security evaluation of the quantum key distribution system with two-mode squeezed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osaki, M.; Ban, M.

    2003-08-01

    The quantum key distribution (QKD) system with two-mode squeezed states has been demonstrated by Pereira et al. [Phys. Rev. A 62, 042311 (2000)]. They evaluate the security of the system based on the signal-to-noise ratio attained by a homodyne detector. In this paper, we discuss its security based on the error probability when the system is individually attacked by an eavesdropper using unambiguous or error-optimum detection. The influence of energy loss in the transmission channels is also taken into account. It is shown that the QKD system is secure under these conditions.

  5. Sliding mode control for Mars entry based on extended state observer

    NASA Astrophysics Data System (ADS)

    Lu, Kunfeng; Xia, Yuanqing; Shen, Ganghui; Yu, Chunmei; Zhou, Liuyu; Zhang, Lijun

    2017-11-01

    This paper addresses a high-precision Mars entry guidance and control approach via sliding mode control (SMC) and an extended state observer (ESO). First, the differential flatness (DF) approach is applied to the dynamic equations of the entry vehicle to represent the state variables more conveniently. Then, the presented SMC law guarantees finite-time convergence of the tracking error and requires no information on the high uncertainties, which are estimated by the ESO; a rigorous proof of tracking error convergence is given. Finally, Monte Carlo simulation results are presented to demonstrate the effectiveness of the suggested approach.
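
    A minimal sketch of the SMC-plus-ESO idea on a one-dimensional double integrator (not the Mars entry dynamics); the gains, observer bandwidth, and disturbance are illustrative assumptions:

        import numpy as np

        def simulate(T=10.0, dt=1e-3, b0=1.0, k=5.0, lam=2.0, wo=20.0):
            # plant: x'' = b0*u + d, with d unknown to the controller
            x = np.array([0.0, 0.0])            # position, velocity
            z = np.zeros(3)                     # ESO: pos, vel, disturbance
            L = np.array([3 * wo, 3 * wo**2, wo**3])   # observer gains
            for i in range(int(T / dt)):
                t = i * dt
                r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)  # reference
                e, ed = x[0] - r, x[1] - rd
                s = ed + lam * e                # sliding variable
                # SMC law with the ESO's disturbance estimate z[2] fed back
                u = (rdd - lam * ed - k * np.tanh(s / 0.05) - z[2]) / b0
                d = 0.5 * np.sin(3 * t)         # true (unknown) disturbance
                x = x + dt * np.array([x[1], b0 * u + d])
                err = z[0] - x[0]               # observer output error
                z = z + dt * np.array([z[1] - L[0] * err,
                                       z[2] + b0 * u - L[1] * err,
                                       -L[2] * err])
            return x[0] - np.sin(T)             # final tracking error

        print(simulate())   # small once the observer and controller converge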

  6. Analysis of using EMG and mechanical sensors to enhance intent recognition in powered lower limb prostheses

    NASA Astrophysics Data System (ADS)

    Young, A. J.; Kuiken, T. A.; Hargrove, L. J.

    2014-10-01

    Objective. The purpose of this study was to determine the contribution of electromyography (EMG) data, in combination with a diverse array of mechanical sensors, to locomotion mode intent recognition in transfemoral amputees using powered prostheses. Additionally, we determined the effect of adding time history information using a dynamic Bayesian network (DBN) for both the mechanical and EMG sensors. Approach. EMG signals from the residual limbs of amputees have been proposed to enhance pattern recognition-based intent recognition systems for powered lower limb prostheses, but mechanical sensors on the prosthesis—such as inertial measurement units, position and velocity sensors, and load cells—may be just as useful. EMG and mechanical sensor data were collected from 8 transfemoral amputees using a powered knee/ankle prosthesis over basic locomotion modes such as walking, slopes and stairs. An offline study was conducted to determine the benefit of different sensor sets for predicting intent. Main results. EMG information was not as accurate alone as mechanical sensor information (p < 0.05) for any classification strategy. However, EMG in combination with the mechanical sensor data did significantly reduce intent recognition errors (p < 0.05) both for transitions between locomotion modes and steady-state locomotion. The sensor time history (DBN) classifier significantly reduced error rates compared to a linear discriminant classifier for steady-state steps, without increasing the transitional error, for both EMG and mechanical sensors. Combining EMG and mechanical sensor data with sensor time history reduced the average transitional error from 18.4% to 12.2% and the average steady-state error from 3.8% to 1.0% when classifying level-ground walking, ramps, and stairs in eight transfemoral amputee subjects. Significance. These results suggest that a neural interface in combination with time history methods for locomotion mode classification can enhance intent recognition performance; this strategy should be considered for future real-time experiments.
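
    As a sketch of the pattern-recognition baseline (a linear discriminant classifier; the DBN time-history variant is not shown), using scikit-learn with random stand-in features:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # stand-in feature matrix: rows are analysis windows, columns are
        # features from mechanical sensors and (optionally) EMG; real
        # sensor-derived features would replace this random data
        rng = np.random.default_rng(0)
        X = rng.standard_normal((600, 40))
        y = rng.integers(0, 5, 600)   # 5 locomotion modes (walk, ramps, stairs)

        lda = LinearDiscriminantAnalysis()
        lda.fit(X[:500], y[:500])
        err = 1.0 - lda.score(X[500:], y[500:])
        # near chance here because the features are random stand-ins
        print(f"classification error: {100 * err:.1f}%")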

  7. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  8. How to deal with multiple binding poses in alchemical relative protein-ligand binding free energy calculations.

    PubMed

    Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle

    2015-06-09

    Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the ligands. This improved the root-mean-square error (RMSE) for the predicted binding free energy from 1.9 kcal/mol with the original partial charges to 1.3 kcal/mol with the corrected partial charges.
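
    A standard way to combine per-pose results into a single binding free energy is a Boltzmann sum over the enumerated binding modes, which appears to be the kind of correction applied here; the pose energies below are hypothetical:

        import numpy as np

        def combine_binding_modes(dG_modes, T=300.0):
            # overall dG from per-pose binding free energies (kcal/mol)
            kT = 0.0019872041 * T   # Boltzmann constant in kcal/(mol K)
            dG = np.asarray(dG_modes)
            return -kT * np.log(np.exp(-dG / kT).sum())

        # two hypothetical poses of one ligand from independent FEP runs
        print(combine_binding_modes([-7.8, -6.2]))   # dominated by -7.8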

  10. A method to estimate the effect of deformable image registration uncertainties on daily dose mapping

    PubMed Central

    Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin

    2012-01-01

    Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
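
    The sampling procedure in this record lends itself to a compact sketch: decompose the observed error maps into principal modes, then draw independent coefficients for each mode to synthesize new, spatially correlated error maps. The array sizes and stand-in data below are invented for illustration.

    ```python
    # PCA/SVD sketch of spatially-correlated error sampling (toy data).
    import numpy as np

    rng = np.random.default_rng(1)
    n_maps, n_voxels = 20, 500
    mixing = rng.normal(size=(n_voxels, n_voxels)) * 0.01    # induces spatial correlation
    observed = rng.normal(size=(n_maps, n_voxels)) @ mixing  # stand-in DVF error maps

    mean = observed.mean(axis=0)
    U, s, Vt = np.linalg.svd(observed - mean, full_matrices=False)
    modes = Vt                               # decorrelated principal modes
    sigma = s / np.sqrt(n_maps - 1)          # standard deviation per mode

    # Sample each mode independently, then reconstruct one synthetic error map.
    coeffs = rng.normal(size=sigma.size) * sigma
    synthetic = mean + coeffs @ modes
    ```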

  11. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.

  12. An Evaluation of Departmental Radiation Oncology Incident Reports: Anticipating a National Reporting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric

    Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) would be submittable to a NRS, of which the majority were related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.

  13. More than a Game...Teaching in the Gamic Mode: Disciplinary Knowledge, Digital Literacy, and Collaboration

    ERIC Educational Resources Information Center

    Clyde, Jerremie; Wilkinson, Glenn R.

    2012-01-01

    The gamic mode is an innovative way of authoring scholarly history that goes beyond the printed text or digital simulations by using digital game technologies to allow the reader to interact with a scholarly argument through meaningful choice and trial and error. The gamic mode makes the way in which the past is constructed as history explicit by…

  14. Clutch pressure estimation for a power-split hybrid transmission using nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Zhou, Bin; Zhang, Jianwu; Gao, Ji; Yu, Haisheng; Liu, Dong

    2018-06-01

    For a power-split hybrid transmission, using the brake clutch to realize the transition from electric drive mode to hybrid drive mode is an available strategy. Since the pressure information of the brake clutch is essential for mode transition control, this research designs a nonlinear robust reduced-order observer to estimate the brake clutch pressure. Model uncertainties or disturbances are considered as additional inputs, and the observer is designed so that the error dynamics are input-to-state stable. The nonlinear characteristics of the system are expressed as lookup tables in the observer. Moreover, the gain matrix of the observer is solved by two optimization procedures under the constraints of linear matrix inequalities. The proposed observer is validated by offline simulation and online testing; the results show that the observer performs well during the mode transition, as the estimation error remains within a reasonable range and, more importantly, is asymptotically stable.
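
    The error-dynamics viewpoint in this record can be illustrated with a one-state toy observer: the gain L (solved via LMIs in the paper) just has to make the estimation error decay despite a bounded disturbance. All numbers below are invented.

    ```python
    # Minimal Luenberger-style observer on a scalar pressure model:
    # dp/dt = a*p + b*u + d,  dphat/dt = a*phat + b*u + L*(y - phat),
    # so the error obeys de/dt = (a - L)*e + d (decaying for L > a).
    import numpy as np

    a, b, L, dt = -3.0, 2.0, 5.0, 1e-3
    p = 0.0
    p_hat = 0.5                      # deliberately wrong initial estimate
    rng = np.random.default_rng(4)
    for k in range(3000):
        u = 1.0 if k * dt > 0.5 else 0.0         # clutch fill command
        d = 0.1 * rng.standard_normal()          # bounded model uncertainty
        y = p                                    # measured output
        p += (a * p + b * u + d) * dt
        p_hat += (a * p_hat + b * u + L * (y - p_hat)) * dt
    print(f"final estimation error: {abs(p - p_hat):.4f}")
    ```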

  15. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    PubMed

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid in order to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: Cross-sectional. Setting: Population-based. Participants: A sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea; the participants were 4,634 drivers who completed all questionnaires (response rate 84.6%). Interventions: None. Measurements: The Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Of 4,634 train drivers, 349 (7.5%) showed more than one human error per 5 y. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, and work stress after adjusting for age, sex, education years, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.
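
    The adjusted analysis reported here is a standard multiple logistic regression; the sketch below shows its shape with entirely synthetic data and hypothetical column names.

    ```python
    # Illustrative logistic regression: human error vs sleep quality and
    # posttraumatic stress, adjusting for covariates (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 1000
    df = pd.DataFrame({
        "human_error": rng.integers(0, 2, n),
        "psqi_total": rng.normal(6, 3, n),     # Pittsburgh Sleep Quality Index
        "ies_r": rng.normal(10, 8, n),         # posttraumatic stress score
        "age": rng.normal(40, 8, n),
        "career_years": rng.normal(12, 6, n),
    })
    fit = smf.logit("human_error ~ psqi_total + ies_r + age + career_years", df).fit()
    print(np.exp(fit.params))                  # adjusted odds ratios
    ```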

  16. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

    We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in a homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in assumed transducer separation produced the largest SOS error from miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Elucidation of cross-species proteomic effects in human and hominin bone proteome identification through a bioinformatics experiment.

    PubMed

    Welker, F

    2018-02-20

    The study of ancient protein sequences is increasingly focused on the analysis of older samples, including those of ancient hominins. The analysis of such ancient proteomes thereby potentially suffers from "cross-species proteomic effects": the loss of peptide and protein identifications at increased evolutionary distances due to a larger number of protein sequence differences between the database sequence and the analyzed organism. Error-tolerant proteomic search algorithms should theoretically overcome this problem at both the peptide and protein level; however, this has not been demonstrated. If error-tolerant searches do not overcome the cross-species proteomic issue then there might be inherent biases in the identified proteomes. Here, a bioinformatics experiment is performed to test this using a set of modern human bone proteomes and three independent searches against sequence databases at increasing evolutionary distances: the human (0 Ma), chimpanzee (6-8 Ma) and orangutan (16-17 Ma) reference proteomes, respectively. Incorrectly suggested amino acid substitutions are absent when employing adequate filtering criteria for mutable Peptide Spectrum Matches (PSMs), but roughly half of the mutable PSMs were not recovered. As a result, peptide and protein identification rates are higher in error-tolerant mode compared to non-error-tolerant searches but did not recover protein identifications completely. Data indicates that peptide length and the number of mutations between the target and database sequences are the main factors influencing mutable PSM identification. The error-tolerant results suggest that the cross-species proteomics problem is not overcome at increasing evolutionary distances, even at the protein level. Peptide and protein loss has the potential to significantly impact divergence dating and proteome comparisons when using ancient samples as there is a bias towards the identification of conserved sequences and proteins. Effects are minimized between moderately divergent proteomes, as indicated by almost complete recovery of informative positions in the search against the chimpanzee proteome (≈90%, 6-8 Ma). This provides a bioinformatic background to future phylogenetic and proteomic analysis of ancient hominin proteomes, including the future description of novel hominin amino acid sequences, but also has negative implications for the study of fast-evolving proteins in hominins, non-hominin animals, and ancient bacterial proteins in evolutionary contexts.

  18. A radiation tolerant Data link board for the ATLAS Tile Cal upgrade

    NASA Astrophysics Data System (ADS)

    Åkerstedt, H.; Bohm, C.; Muschter, S.; Silverstein, S.; Valdes, E.

    2016-01-01

    This paper describes the latest, full-functionality revision of the high-speed data link board developed for the Phase-2 upgrade of the ATLAS hadronic Tile Calorimeter. The link board design is highly redundant, with digital functionality implemented in two Xilinx Kintex-7 FPGAs and two Molex QSFP+ electro-optic modules whose uplinks run at 10 Gbps. The FPGAs are remotely configured through two radiation-hard CERN GBTx deserialisers, which also provide the LHC-synchronous system clock. The redundant design eliminates virtually all single-point error modes, and a combination of triple modular redundancy (TMR), internal and external scrubbing will provide adequate protection against radiation-induced errors. The small portion of the FPGA design that cannot be protected by TMR will be the dominant source of radiation-induced errors, even if that area is small.
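
    The TMR idea mentioned above reduces, in software terms, to a bitwise majority vote over three redundant copies, so any single upset copy is outvoted; a minimal sketch:

    ```python
    # Bitwise majority vote over three redundant words (the core of TMR).
    def tmr_vote(a: int, b: int, c: int) -> int:
        return (a & b) | (a & c) | (b & c)

    word = 0b10110100
    upset = word ^ 0b00001000        # single-event upset flips one bit in one copy
    assert tmr_vote(word, word, upset) == word
    ```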

  19. An innovative multimodal virtual platform for communication with devices in a natural way

    NASA Astrophysics Data System (ADS)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

    As technology advances, people are increasingly interested in communicating with machines and computers naturally. Doing away with remotes, keyboards, and similar peripherals makes devices more compact and portable and helps users live in an environment freer of electromagnetic waves. This has made recognition of natural modalities in human-computer interaction an appealing and promising research field. At the same time, it has been observed that a single mode of interaction limits the full use of commands as well as data flow. In this paper, a multimodal platform is proposed in which, from among many natural modalities such as eye gaze, speech, voice, and face, human gestures are combined with human voice so as to minimize the mean square error. This relaxes the strict environment needed for accurate and robust interaction with a single mode. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is suited to descriptive tasks. Human-computer interaction requires two broad stages, recognition and interpretation. Recognizing and interpreting a natural modality in complex binary instructions is a difficult task, as it integrates the real world with a virtual environment. The main idea of the paper is to develop an efficient model for fusing data from heterogeneous sensors, a camera and a microphone. We show that efficiency increases when heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this work is to design a robust system for users who are physically impaired or have little technical knowledge.

  20. Development of a sliding mode control model for quiet upright stance.

    PubMed

    Zhang, Hongbo; Nussbaum, Maury A; Agnew, Michael J

    2016-02-01

    Human upright stance appears maintained or controlled intermittently, through some combination of passive and active ankle torques, respectively representing intrinsic and contractile contributions of the ankle musculature. Several intermittent postural control models have been proposed, though it has been challenging to accurately represent actual kinematics and kinetics and to separately estimate passive and active ankle torque components. Here, a simplified single-segment, 2D (sagittal plane) sliding mode control model was developed for application to track kinematics and kinetics during upright stance. The model was implemented and evaluated using previous experimental data consisting of whole body angular kinematics and ankle torques. Tracking errors for the whole-body center-of-mass (COM) angle and angular velocity, as well as ankle torque, were all within ∼10% of experimental values, though tracking performance for COM angular acceleration was substantially poorer. The model also enabled separate estimates of the contributions of passive and active ankle torques, with overall contributions estimated here to be 96% and 4% of the total ankle torque, respectively. Such a model may have future utility in understanding human postural control, though additional work is needed, such as expanding the model to multiple segments and to three dimensions. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
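
    As a caricature of the approach, the sketch below simulates a single-segment inverted pendulum with a passive ankle stiffness plus a smoothed sliding mode (active) torque. Parameters, gains, and the passive/active split are illustrative only, not the paper's fitted model.

    ```python
    # Toy 2D single-segment stance model with a chattering-free sliding mode
    # ankle controller (tanh-smoothed switching); all values are invented.
    import numpy as np

    m, h, g, I = 70.0, 1.0, 9.81, 70.0    # mass, COM height, gravity, inertia
    k_passive = 400.0                      # intrinsic ankle stiffness, N*m/rad
    lam, eta = 5.0, 80.0                   # sliding-surface slope, switching gain

    theta, omega, dt = 0.05, 0.0, 1e-3     # initial sway angle (rad), velocity
    for _ in range(5000):
        s = omega + lam * theta                    # sliding variable
        tau_passive = -k_passive * theta
        tau_active = -eta * np.tanh(s / 0.01)      # smoothed switching term
        domega = (m * g * h * np.sin(theta) + tau_passive + tau_active) / I
        theta, omega = theta + omega * dt, omega + domega * dt
    print(f"final sway angle: {theta:.4f} rad")
    ```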

  1. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.

  2. Failure mode and effective analysis ameliorate awareness of medical errors: a 4-year prospective observational study in critically ill children.

    PubMed

    Daverio, Marco; Fino, Giuliana; Luca, Brugnaro; Zaggia, Cristina; Pettenazzo, Andrea; Parpaiola, Antonella; Lago, Paola; Amigoni, Angela

    2015-12-01

    Errors are estimated to occur with an incidence of 3.7-16.6% in hospitalized patients. The application of systems for detection of adverse events is becoming a widespread reality in healthcare. Incident reporting (IR) and failure mode and effects analysis (FMEA) are strategies widely used to detect errors, but no studies have combined them in the setting of a pediatric intensive care unit (PICU). The aim of our study was to describe the trend of IR in a PICU and evaluate the effect of FMEA application on the number and severity of the errors detected. With this prospective observational study, we evaluated the frequency of IRs documented in standard IR forms completed from January 2009 to December 2012 in the PICU of the Woman's and Child's Health Department of Padova. On the basis of their severity, errors were classified as: without outcome (55%), with minor outcome (16%), with moderate outcome (10%), and with major outcome (3%); 16% of reported incidents were 'near misses'. We compared the data before and after the introduction of FMEA. Sixty-nine errors were registered, 59 (86%) concerning drug therapy (83% during prescription). Compared to 2009-2010, in 2011-2012 we noted an increase in reported errors (43 vs 26) with a reduction in their severity (21% vs 8% 'near misses' and 65% vs 38% errors with no outcome). With the introduction of FMEA, we obtained increased awareness in error reporting. Application of these systems will improve the quality of healthcare services. © 2015 John Wiley & Sons Ltd.

  3. Impact of toroidal and poloidal mode spectra on the control of non-axisymmetric fields in tokamaks

    NASA Astrophysics Data System (ADS)

    Lanctot, Matthew J.

    2016-10-01

    In several tokamaks, non-axisymmetric magnetic field studies show applied n=2 fields can lead to disruptive n=1 locked modes, suggesting nonlinear mode coupling. A multimode plasma response to n=2 fields can be observed in H-mode plasmas, in contrast to the single-mode response found in Ohmic plasmas. These effects highlight a role for n > 1 error field correction in disruption avoidance, and identify additional degrees of freedom for 3D field optimization at high plasma pressure. In COMPASS, EAST, and DIII-D Ohmic plasmas, n=2 magnetic reconnection thresholds in otherwise stable discharges are readily accessed at edge safety factors q ~ 3 and low density. Similar to previous studies, the thresholds are correlated with the "overlap" field for the dominant linear ideal MHD plasma mode calculated with the IPEC code. The overlap field measures the plasma-mediated coupling of the external field to the resonant field. Remarkably, the critical overlap fields are similar for n=1 and 2 fields, with m > nq fields dominating the drive for resonant fields. Complementary experiments in RFX-Mod show fields with m < nq also play a role. These results have implications for n > 1 control, including the need for multiple rows of coils to control selected plasma parameters for specific functions (e.g., rotation control or ELM suppression). Optimal multi-harmonic (n=1 and n=2) error field control may be achieved using control algorithms that continuously respond to time-varying 3D field sources and plasma parameters. Supported by the US DOE under DE-FC02-04ER54698.

  4. SU-F-T-245: The Investigation of Failure Mode and Effects Analysis and PDCA for the Radiotherapy Risk Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Wang, J; P, J

    2016-06-15

    Purpose: To optimize the clinical processes of radiotherapy and to reduce radiotherapy risks by implementing the powerful risk management tools of failure mode and effects analysis (FMEA) and PDCA (plan-do-check-act). Methods: A multidisciplinary QA (Quality Assurance) team from our department, consisting of oncologists, physicists, dosimetrists, therapists and an administrator, was established, and an entire-workflow QA process management using FMEA and PDCA tools was implemented for the whole treatment process. After the primary process tree was created, the failure modes and risk priority numbers (RPNs) were determined by each member, and the RPNs were averaged after team discussion. Results: 3 of 9 failure modes with RPN above 100 were identified in the first PDCA cycle and further analyzed: patient registration error, prescription error, and treating the wrong patient. New process controls reduced the occurrence or detectability scores for the top 3 failure modes. Two important corrective actions reduced the highest RPNs from 300 to 50, and the error rate of radiotherapy decreased remarkably. Conclusion: FMEA and PDCA are helpful in identifying potential problems in the radiotherapy process, and were proven to improve the safety, quality and efficiency of radiation therapy in our department. The implementation of the FMEA approach may improve understanding of the overall process of radiotherapy while identifying potential flaws in the whole process. Furthermore, repeating the PDCA cycle can bring us closer to the goal: safer and more accurate radiotherapy.
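
    The RPN bookkeeping behind this kind of FMEA is simple to make concrete. The severity, occurrence, and detectability scores below are invented; only the three failure-mode names come from the abstract.

    ```python
    # RPN = severity * occurrence * detectability; flag modes above threshold.
    failure_modes = {
        "patient registration error": (9, 5, 7),
        "prescription error":         (9, 4, 8),
        "treating wrong patient":     (10, 3, 6),
    }
    for mode, (s, o, d) in sorted(failure_modes.items(),
                                  key=lambda kv: -kv[1][0] * kv[1][1] * kv[1][2]):
        rpn = s * o * d
        print(f"{mode:28s} RPN={rpn:3d} {'ACT' if rpn >= 100 else 'ok'}")
    ```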

  5. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  6. GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA

    EPA Science Inventory

    In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatisties provides an alternate model in which several of Gy's error components are combined in a continuous mode...

  7. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbee, D; McCarthy, A; Galavis, P

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policy and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script is currently capable of checking for is as follows: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse Scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes: initiating a full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
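
    The actual PlanCheck plugin is C# code written against ESAPI; the Python sketch below only illustrates the general pattern of running named rule functions over a plan object and collecting failures. The plan fields and thresholds are hypothetical, not ESAPI properties.

    ```python
    # Generic rule-table pattern behind an automated plan check (hypothetical).
    from dataclasses import dataclass

    @dataclass
    class Plan:
        min_edw_mu: float
        has_reference_point: bool
        couch_inserted: bool

    CHECKS = [
        ("EDW beams have enough MU", lambda p: p.min_edw_mu >= 20),
        ("primary reference point set", lambda p: p.has_reference_point),
        ("couch structure inserted", lambda p: p.couch_inserted),
    ]

    def run_checks(plan: Plan) -> list[str]:
        return [name for name, rule in CHECKS if not rule(plan)]

    print(run_checks(Plan(min_edw_mu=12.0, has_reference_point=True,
                          couch_inserted=False)))
    ```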

  9. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error class and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  10. Erroneous cardiac ECG-gated PET list-mode trigger events can be retrospectively identified and replaced by an offline reprocessing approach: first results in rodents

    NASA Astrophysics Data System (ADS)

    Böning, Guido; Todica, Andrei; Vai, Alessandro; Lehner, Sebastian; Xiong, Guoming; Mille, Erik; Ilhan, Harun; la Fougère, Christian; Bartenstein, Peter; Hacker, Marcus

    2013-11-01

    The assessment of left ventricular function, wall motion and myocardial viability using electrocardiogram (ECG)-gated [18F]-FDG positron emission tomography (PET) is widely accepted in human and in preclinical small animal studies. The nonterminal and noninvasive approach permits repeated in vivo evaluations of the same animal, facilitating the assessment of temporal changes in disease or therapy response. Although well established, gated small animal PET studies can contain erroneous gating information, which may yield blurred images and false estimates of functional parameters. In this work, we present quantitative and visual quality control (QC) methods to evaluate the accuracy of trigger events in PET list-mode and physiological data. Left ventricular functional analysis is performed to quantify the effect of gating errors on the end-systolic and end-diastolic volumes, and on the ejection fraction (EF). We aim to recover the cardiac functional parameters by the application of the commonly established heart rate filter approach using fixed ranges based on a standardized population. In addition, we propose a full reprocessing approach which retrospectively replaces the gating information of the PET list-mode file with appropriate list-mode decoding and encoding software. The signal of a simultaneously acquired ECG is processed using standard MATLAB vector functions, which can be individually adapted to reliably detect the R-peaks. Finally, the new trigger events are inserted into the PET list-mode file. A population of 30 mice with various health statuses was analyzed and standard cardiac parameters such as mean heart rate (119 ms ± 11.8 ms) and mean heart rate variability (1.7 ms ± 3.4 ms) were derived. These standard parameter ranges were taken into account in the QC methods to select a group of nine optimally gated and a group of eight sub-optimally gated [18F]-FDG PET scans of mice from our archive. From the list-mode files of the optimally gated group, we randomly deleted various fractions (5% to 60%) of the contained trigger events to generate a corrupted group. The filter approach was capable of correcting the corrupted group and yielded functional parameters with no significant difference from the optimally gated group. We successfully demonstrated the potential of the full reprocessing approach by applying it to the sub-optimally gated group, where the functional parameters were significantly improved after reprocessing (mean EF from 41% ± 16% to 60% ± 13%). When applied to the optimally gated group, the full reprocessing approach did not alter the functional parameters significantly (mean EF from 64% ± 8% to 64% ± 7%). This work presents methods to determine and quantify erroneous gating in small animal gated [18F]-FDG PET scans. We demonstrate the importance of a quality check for cardiac triggering contained in PET list-mode data and the benefit of optionally reprocessing the fully recorded physiological information to retrospectively modify or fully replace the cardiac triggering in PET list-mode data. We aim to provide a preliminary guideline of how to proceed in the presence of errors and demonstrate that offline reprocessing by filtering erroneous trigger events and retrospective gating by ECG processing is feasible. Future work will focus on the extension by additional QC methods, which may exploit the amplitude of trigger events and the ECG signal by means of pattern recognition. Furthermore, we aim to transfer the proposed QC methods and the full reprocessing approach to human myocardial PET/CT.
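
    The authors processed the ECG with standard MATLAB vector functions; an equivalent Python sketch of the R-peak detection step (on a toy signal whose period mimics the ~119 ms mouse R-R interval) follows.

    ```python
    # R-peak detection to regenerate gating triggers (toy ECG stand-in).
    import numpy as np
    from scipy.signal import find_peaks

    fs = 1000.0                                  # sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 8.4 * t) ** 31      # sharp peaks every ~119 ms

    # A minimum height plus a refractory period keep noise from triggering.
    peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.08 * fs))
    trigger_times = t[peaks]                     # would replace list-mode tags
    print(len(peaks), "R-peaks, mean R-R =",
          round(np.diff(trigger_times).mean() * 1000, 1), "ms")
    ```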

  11. Task Analytic Models to Guide Analysis and Design: Use of the Operator Function Model to Represent Pilot-Autoflight System Mode Problems

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Mitchell, Christine M.; Chappell, Alan R.; Shafto, Mike (Technical Monitor)

    1995-01-01

    Task-analytic models structure essential information about operator interaction with complex systems, in this case pilot interaction with the autoflight system. Such models serve two purposes: (1) they allow researchers and practitioners to understand pilots' actions; and (2) they provide a compact, computational representation needed to design 'intelligent' aids, e.g., displays, assistants, and training systems. This paper demonstrates the use of the operator function model to trace the process of mode engagements while a pilot is controlling an aircraft via the autoflight system. The operator function model is a normative and nondeterministic model of how a well-trained, well-motivated operator manages multiple concurrent activities for effective real-time control. For each function, the model links the pilot's actions with the required information. Using the operator function model, this paper describes several mode engagement scenarios. These scenarios were observed and documented during a field study that focused on mode engagements and mode transitions during normal line operations. Data including time, ATC clearances, altitude, system states, active modes and sub-modes, and mode engagements were recorded during sixty-six flights. Using these data, seven prototypical mode engagement scenarios were extracted. One scenario details the decision of the crew to disengage a fully automatic mode in favor of a semi-automatic mode, and the consequences of this action. Another describes a mode error involving updating aircraft speed following the engagement of a speed submode. Other scenarios detail mode confusion at various phases of the flight. This analysis uses the operator function model to identify three aspects of mode engagement: (1) the progress of pilot-aircraft-autoflight system interaction; (2) control/display information required to perform mode management activities; and (3) the potential cause(s) of mode confusion. The goal of this paper is twofold: (1) to demonstrate the use of the operator function model methodology to describe pilot-system interaction while engaging modes and monitoring the system, and (2) to initiate a discussion of how task-analytic models might inform design processes. While the operator function model is only one type of task-analytic representation, the hypothesis of this paper is that some type of task-analytic structure is a prerequisite for the design of effective human-automation interaction.

  12. Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer.

    PubMed

    Hagan, Teresa L; Belcher, Sarah M; Donovan, Heidi S

    2017-09-01

    Researchers administering surveys seek to balance data quality, sources of error, and practical concerns when selecting an administration mode. Rarely are decisions about survey administration based on the background of study participants, although socio-demographic characteristics like age, education, and race may contribute to participants' (non)responses. In this study, we describe differences in paper- and web-based surveys administered in a national cancer survivor study of women with a history of cancer to compare the ability of each survey administrative mode to provide quality, generalizable data. We compared paper- and web-based survey data by socio-demographic characteristics of respondents, missing data rates, scores on primary outcome measure, and administrative costs and time using descriptive statistics, tests of mean group differences, and linear regression. Our findings indicate that more potentially vulnerable patients preferred paper questionnaires and that data quality, responses, and costs significantly varied by mode and participants' demographic information. We provide targeted suggestions for researchers conducting survey research to reduce survey error and increase generalizability of study results to the patient population of interest. Researchers must carefully weigh the pros and cons of survey administration modes to ensure a representative sample and high-quality data. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  13. Active Control of Fan-Generated Tone Noise

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.

    1995-01-01

    This paper reports on an experiment to control the noise radiated from the inlet of a ducted fan using a time domain active adaptive system. The control sound source consists of loudspeakers arranged in a ring around the fan duct. The error sensor location is in the fan duct. The purpose of this experiment is to demonstrate that the in-duct error sensor reduces the mode spillover in the far field, thereby increasing the efficiency of the control system. The control system is found to reduce the blade passage frequency tone significantly in the acoustic far field when the mode orders of the noise source and of the control source are the same and the dominant wave in the duct is a plane wave. The presence of higher order modes in the duct reduces the noise reduction efficiency, particularly near the mode cut-on where the standing wave component is strong, but the control system converges stably. The control system is stable and converges when the first circumferential mode is generated in the duct. The control system is found to reduce the fan noise in the far field on an arc around the fan inlet by as much as 20 dB with none of the sound amplification associated with mode spillover.
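
    The time domain active adaptive control described here is, at its core, an LMS-type loop driving the control source to null the error-sensor signal. The toy single-channel sketch below takes the secondary path as identity, which a real system would have to model (FxLMS); all numbers are invented.

    ```python
    # Single-channel LMS cancellation of a tonal disturbance (toy model).
    import numpy as np

    rng = np.random.default_rng(0)
    fs, f_tone, n = 8000, 800, 8000       # sample rate, fan tone, samples
    t = np.arange(n) / fs
    ref = np.sin(2 * np.pi * f_tone * t)  # reference, synchronous with the tone
    dist = 0.8 * np.sin(2 * np.pi * f_tone * t + 0.7) + 0.01 * rng.standard_normal(n)

    w = np.zeros(32)                      # adaptive FIR weights
    buf = np.zeros(32)
    mu = 0.001
    err = np.empty(n)
    for i in range(n):
        buf = np.roll(buf, 1); buf[0] = ref[i]
        y = w @ buf                       # anti-noise from the control source
        err[i] = dist[i] - y              # residual at the in-duct error sensor
        w += mu * err[i] * buf            # LMS update
    drop = 10 * np.log10(np.mean(err[-2000:]**2) / np.mean(err[:200]**2))
    print(f"residual power change: {drop:.1f} dB")
    ```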

  14. Novel spot size converter for coupling standard single mode fibers to SOI waveguides

    NASA Astrophysics Data System (ADS)

    Sisto, Marco Michele; Fisette, Bruno; Paultre, Jacques-Edmond; Paquet, Alex; Desroches, Yan

    2016-03-01

    We have designed and numerically simulated a novel spot size converter for coupling standard single mode fibers with 10.4μm mode field diameter to 500nm × 220nm SOI waveguides. Simulations based on the eigenmode expansion method show a coupling loss of 0.4dB at 1550nm for the TE mode at perfect alignment. The alignment tolerance on the plane normal to the fiber axis is evaluated at +/-2.2μm for <=1dB excess loss, which is comparable to the alignment tolerance between two butt-coupled standard single mode fibers. The converter is based on a cross-like arrangement of SiOxNy waveguides immersed in a 12μm-thick SiO2 cladding region deposited on top of the SOI chip. The waveguides are designed to collectively support a single degenerate mode for TE and TM polarizations. This guided mode features a large overlap to the LP01 mode of standard telecom fibers. Along the spot size converter length (450μm), the mode is first gradually confined in a single SiOxNy waveguide by tapering its width. Then, the mode is adiabatically coupled to a SOI waveguide underneath the structure through a SOI inverted taper. The shapes of SiOxNy and SOI tapers are optimized to minimize coupling loss and structure length, and to ensure adiabatic mode evolution along the structure, thus improving the design robustness to fabrication process errors. A tolerance analysis based on conservative microfabrication capabilities suggests that coupling loss penalty from fabrication errors can be maintained below 0.3dB. The proposed spot size converter is fully compliant to industry standard microfabrication processes available at INO.
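
    The quoted coupling loss is ultimately an overlap-integral statement; a quick numerical check for two Gaussian mode profiles (the 10.4 μm fiber mode against an assumed converter mode size) is sketched below.

    ```python
    # Power coupling between two mode fields via the normalized overlap integral.
    import numpy as np

    x = np.linspace(-20e-6, 20e-6, 1001)
    X, Y = np.meshgrid(x, x)

    def gaussian_mode(mfd):
        w = mfd / 2                       # mode field radius
        return np.exp(-(X**2 + Y**2) / w**2)

    e1 = gaussian_mode(10.4e-6)           # standard single-mode fiber
    e2 = gaussian_mode(9.2e-6)            # converter mode size (assumed)
    eta = np.abs(np.sum(e1 * e2))**2 / (np.sum(e1**2) * np.sum(e2**2))
    print(f"mode-mismatch loss: {-10 * np.log10(eta):.2f} dB")
    ```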

  15. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluations. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.

  16. Sensitivity of STIS First-Order Medium Resolution Modes

    NASA Astrophysics Data System (ADS)

    Proffitt, Charles R.

    2006-07-01

    The sensitivities for STIS first-order medium resolution modes were redetermined using on-orbit observations of the standard DA white dwarfs G 191-B2B, GD 71, and GD 153. We review the procedures and assumptions used to derive the adopted throughputs, and discuss the remaining errors and uncertainties.

  17. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical, errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  18. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that historically have independently studied the nature of error and human behavior, including concepts derived from fractal and chaos theory, and suggests re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulas used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from a simulated steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on understanding the patterns of human error analysis can be gleaned, helping to reduce and prevent undesirable events.

  19. Active Control of Low-Speed Fan Tonal Noise Using Actuators Mounted in Stator Vanes: Part III Results

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.; Remington, Paul J.; Walker, Bruce E.

    2003-01-01

    A test program to demonstrate simplification of Active Noise Control (ANC) systems relative to standard techniques was performed on the NASA Glenn Active Noise Control Fan from May through September 2001. The target mode was the m = 2 circumferential mode generated by the rotor-stator interaction at 2BPF. Seven radials (combined inlet and exhaust) were present at this condition. Several different error-sensing strategies were implemented: (i) an in-duct linear axial array, (ii) an in-duct steering array, (iii) a pylon-mounted array, and (iv) a near-field boom array. Integration of the error sensors with passive treatment was investigated, as was reducing the actuator count. These simplified systems were compared to a fully specified ANC system. Modal data acquired using the Rotating Rake are presented for a range of corrected fan rpm. Simplified control has been demonstrated to be possible but requires a well-known and dominant mode signature. The results documented herein are part III of a three-part series of reports with the same base title. Parts I and II document the control system and error-sensing design and implementation.

  20. Simulations of a PSD Plastic Neutron Collar for Assaying Fresh Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hausladen, Paul; Newby, Jason; McElroy, Robert Dennis

    The potential performance of a notional active coincidence collar for assaying uranium fuel based on segmented detectors constructed from the new PSD plastic fast organic scintillator with pulse shape discrimination capability was investigated in simulation. Like the International Atomic Energy Agency's present Uranium Neutron Collar for LEU (UNCL), the PSD plastic collar would also function by stimulating fission in the 235U content of the fuel with a moderated 241Am/Li neutron source and detecting instances of induced fission via neutron coincidence counting. In contrast to the moderated detectors of the UNCL, the fast time scale of detection in the scintillator eliminates statistical errors due to accidental coincidences that limit the performance of the UNCL. However, the potential to detect a single neutron multiple times historically has been one of the properties of organic scintillator detectors that has prevented their adoption for international safeguards applications. Consequently, as part of the analysis of simulated data, a method was developed by which true neutron-neutron coincidences can be distinguished from inter-detector scatter that takes advantage of the position and timing resolution of segmented detectors. Then, the performance of the notional simulated coincidence collar was evaluated for assaying a variety of fresh fuels, including some containing burnable poisons and partial defects. In these simulations, particular attention was paid to the analysis of fast mode measurements. In fast mode, a Cd liner is placed inside the collar to shield the fuel from the interrogating source and detector moderators, thereby eliminating the thermal neutron flux that is most sensitive to the presence of burnable poisons that are ubiquitous in modern nuclear fuels. The simulations indicate that the predicted precision of fast mode measurements is similar to what can be achieved by the present UNCL in thermal mode. For example, the statistical accuracy of a ten-minute measurement of fission coincidences collected in fast mode will be approximately 1% for most fuels of interest, yielding a ~1.4% error after subtraction of a five-minute measurement of the spontaneous fissions from 238U in the fuel, a ~2% error in analyzed linear density after accounting for the slope of the calibration curve, and a ~2.9% total error after addition of an assumed systematic error of 2%.
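
    The quoted error budget combines independent contributions in quadrature, and the abstract's arithmetic can be reproduced to within rounding (the 1% passive-measurement contribution is implied by the stated 1.4% combined figure).

    ```python
    # Quadrature error propagation for the fast-mode assay budget.
    import math

    stat_active = 0.01                    # 10-min fast-mode measurement
    stat_passive = 0.01                   # implied 238U-subtraction term
    combined = math.hypot(stat_active, stat_passive)   # ~1.4%
    linear_density = 0.02                 # after the calibration-curve slope
    total = math.hypot(linear_density, 0.02)           # + 2% systematic, ~ the quoted ~2.9%
    print(f"combined statistical: {combined:.2%}, total: {total:.2%}")
    ```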

  1. Integrated five-port non-blocking optical router based on mode-selective property

    NASA Astrophysics Data System (ADS)

    Jia, Hao; Zhou, Ting; Fu, Xin; Ding, Jianfeng; Zhang, Lei; Yang, Lin

    2018-05-01

    In this paper, we propose and demonstrate a five-port optical router based on mode-selective properties. It utilizes different combinations of four spatial modes at input and output ports as labels to distinguish its 20 routing paths. It can direct signals from the source port to the destination port intelligently, without power consumption and without additional switching time, to realize various path steering. The proposed architecture is constructed from asymmetric-directional-coupler-based mode multiplexers/de-multiplexers, multimode-interference-based waveguide crossings and single-mode interconnect waveguides. The broad optical bandwidths of these constituents make the device suitable for combination with wavelength division multiplexing signal transmission, which can effectively increase the data throughput. Measurement results show that the insertion losses of its 20 routing paths are lower than 8.5 dB and the optical signal-to-noise ratios are larger than 16.3 dB at 1525-1565 nm. To characterize its routing functionality, 40-Gbps data transmission with bit-error-rate (BER) measurement is implemented. The power penalties for error-free switching (BER < 10^-9) are 1.0 dB and 0.8 dB at 1545 nm and 1565 nm, respectively.

  2. On decentralized adaptive full-order sliding mode control of multiple UAVs.

    PubMed

    Xiang, Xianbo; Liu, Chao; Su, Housheng; Zhang, Qin

    2017-11-01

    In this study, a novel decentralized adaptive full-order sliding mode control framework is proposed for the robust synchronized formation motion of multiple unmanned aerial vehicles (UAVs) subject to system uncertainty. First, a full-order sliding mode surface in a decentralized manner is designed to incorporate both the individual position tracking error and the synchronized formation error while the UAV group is engaged in building a certain desired geometric pattern in three dimensional space. Second, a decentralized virtual plant controller is constructed which allows the embedded low-pass filter to attain the chattering free property of the sliding mode controller. In addition, robust adaptive technique is integrated in the decentralized chattering free sliding control design in order to handle unknown bounded uncertainties, without requirements for assuming a priori knowledge of bounds on the system uncertainties as stated in conventional chattering free control methods. Subsequently, system robustness as well as stability of the decentralized full-order sliding mode control of multiple UAVs is synthesized. Numerical simulation results illustrate the effectiveness of the proposed control framework to achieve robust 3D formation flight of the multi-UAV system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  3. A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc

    A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). The POD modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding constraints to the minimization that seeks the polynomial parameters, reducing error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.
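
    The snapshot-POD step described above reduces to a singular value decomposition of the mean-removed snapshot matrix. A minimal Python sketch, with random data standing in for the LES snapshots:

    ```python
    import numpy as np

    def pod(U, r):
        """Snapshot POD of a field matrix U (n_points x n_snapshots):
        returns the leading r spatial modes, their time coefficients, and
        the fraction of fluctuating energy carried by each mode."""
        Uf = U - U.mean(axis=1, keepdims=True)          # fluctuating part
        phi, s, _ = np.linalg.svd(Uf, full_matrices=False)
        a = phi[:, :r].T @ Uf                           # coefficients a_k(t)
        energy = (s**2 / np.sum(s**2))[:r]
        return phi[:, :r], a, energy

    # Toy usage with random snapshots standing in for LES data.
    U = np.random.rand(500, 200)
    modes, coeffs, energy = pod(U, r=5)
    print(energy)    # relative energy of the first five modes
    ```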

  4. Integrated model reference adaptive control and time-varying angular rate estimation for micro-machined gyroscopes

    NASA Astrophysics Data System (ADS)

    Tsai, Nan-Chyuan; Sue, Chung-Yang

    2010-02-01

    Owing to imposed but undesired accelerations such as quadrature error and cross-axis perturbation, a micro-machined gyroscope will not unconditionally remain at its resonant mode. Once the preset resonance is not sustained, the performance of the micro-gyroscope is accordingly degraded. In this article, a direct model reference adaptive control loop integrated with a modified disturbance estimating observer (MDEO) is proposed to guarantee the resonant oscillations at the drive mode and counterbalance the undesired disturbance mainly caused by quadrature error and cross-axis perturbation. The controller parameters are updated online from the dynamic error between the MDEO output and the expected response. In addition, Lyapunov stability theory is employed to examine the stability of the closed-loop control system. Finally, the ability of the scheme to detect and measure an exerted time-varying angular rate is verified by intensive simulations.

  5. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. Notably, the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
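
    The benefit of an adaptive step size can be illustrated with a toy single-channel LMS equalizer that switches from a large acquisition step to a small steady-state step. The paper's equalizer is MIMO and MMSE-based, so this sketch shows only the step-size idea, and all parameter values are assumptions.

    ```python
    import numpy as np

    def lms_equalize(x, d, n_taps=15, mu_fast=0.05, mu_slow=0.005, switch=5000):
        """Train an LMS tap vector on received samples x against training
        symbols d, with a large step size early for fast convergence and
        a small one later for low steady-state error."""
        w = np.zeros(n_taps)
        err = np.zeros(len(d))
        for n in range(n_taps, len(d)):
            mu = mu_fast if n < switch else mu_slow
            xn = x[n - n_taps:n][::-1]       # tap-delay-line input vector
            err[n] = d[n] - w @ xn           # output error vs. training symbol
            w += mu * err[n] * xn            # stochastic-gradient update
        return w, err
    ```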

  6. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
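
    A minimal sketch of the adaptive-thresholding stage is given below, assuming the pulsatile component has already been isolated (the published method does this with EEMD and adds physiological plausibility rules); the smoothing constant and refractory period are illustrative assumptions.

    ```python
    import numpy as np

    def estimate_pulse_rate(ppg, fs, alpha=0.6, refractory_s=0.3):
        """Toy adaptive-threshold beat detector on an already-cleaned
        pulsatile waveform sampled at fs Hz; returns pulse rate in bpm."""
        thr = alpha * np.max(ppg[:int(5 * fs)])      # seed from first 5 s
        peaks = []
        for i in range(1, len(ppg) - 1):
            if ppg[i] > thr and ppg[i - 1] < ppg[i] >= ppg[i + 1]:
                if not peaks or (i - peaks[-1]) > refractory_s * fs:
                    peaks.append(i)
                    thr = alpha * ppg[i] + (1 - alpha) * thr  # track amplitude
        ibi = np.diff(peaks) / fs                     # inter-beat intervals, s
        return 60.0 / ibi.mean() if ibi.size else float("nan")
    ```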

  7. Data and results of a laboratory investigation of microprocessor upset caused by simulated lightning-induced analog transients

    NASA Technical Reports Server (NTRS)

    Belcastro, C. M.

    1984-01-01

    Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests that involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and a stochastic model is developed.
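
    As a toy illustration of Markov modeling for upset susceptibility, the sketch below propagates a state distribution through an assumed three-state transition matrix; the states and probabilities are invented for illustration, not taken from the report.

    ```python
    import numpy as np

    # Toy per-step transition probabilities between (Normal, Upset,
    # Recovering) states; each row sums to 1.
    P = np.array([[0.995, 0.005, 0.000],
                  [0.000, 0.900, 0.100],
                  [0.020, 0.000, 0.980]])

    state = np.array([1.0, 0.0, 0.0])   # start in the Normal state
    for _ in range(1000):               # propagate the state distribution
        state = state @ P
    print(state)                        # long-run occupancy of each state
    ```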

  9. Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.

    DOT National Transportation Integrated Search

    2002-07-01

    Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...

  10. Human Error: The Stakes Are Raised.

    ERIC Educational Resources Information Center

    Greenberg, Joel

    1980-01-01

    Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)

  11. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to steering directions in supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented with Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line-tracking experiment tracks a desired path composed of straight and curved lines; the goal of the obstacle-avoidance experiment is to avoid obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line-tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle-avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. These results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.

  12. Effective force control by muscle synergies

    PubMed Central

    Berger, Denise J.; d'Avella, Andrea

    2014-01-01

    Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands and they have been shown to explain a large fraction of the variation in the muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression, and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4-5 muscle synergies. Trajectories were similar overall, but when feedback was provided according to force estimated from recorded EMGs (EMG-control), trajectories generated with the synergies were on average less accurate. However, when feedback was provided according to recorded force (force-control) we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination. PMID:24860489
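
    Synergy extraction by non-negative matrix factorization can be sketched in a few lines; here random envelopes stand in for the recorded EMG, and the component count of five matches the range reported above.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Hypothetical non-negative EMG envelopes: 13 muscles x 2000 samples.
    emg = np.abs(np.random.randn(13, 2000))

    nmf = NMF(n_components=5, init="nndsvd", max_iter=500)
    W = nmf.fit_transform(emg)    # 13 x 5: synergy weights per muscle
    H = nmf.components_           # 5 x 2000: synergy activations over time
    approx = W @ H                # rank-5 reconstruction of the EMG patterns
    print(np.linalg.norm(emg - approx) / np.linalg.norm(emg))
    ```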

  13. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    NASA Astrophysics Data System (ADS)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, an identification procedure is framed as an optimization problem in which the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, in which solving a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty-based approach is followed to incorporate measured data. The penalization parameter not only helps in weakly incorporating corrupted measurement data but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  14. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, including a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration at a 622 Mb/s data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10 (exp -15) with 10 second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; neither is sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10 (exp -15) with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve performance for both operating modes.

  15. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    NASA Technical Reports Server (NTRS)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  16. Study on the stability and reliability of Clinotron at Y-band

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Wang, Jianguo; Chen, Zaigao; Wang, Guangqiang; Wang, Dongyang; Teng, Yan

    2017-11-01

    To improve the stability and reliability of the Clinotron at the Y-band, some key issues are investigated, such as the synchronous operating mode, heat accumulation on the slow-wave structure, and errors in micro-fabrication. By analyzing the dispersion relationship, the working mode is determined to be the TM10 mode. The problem of heat dissipation on the comb is studied to trade off the choice of suitable working conditions, ensuring that the safety and efficiency of the device are guaranteed simultaneously. The effect of fabrication tolerance on the device's performance is also studied to determine the acceptable error during micro-fabrication, taking into consideration both the validity of the device and the cost of fabrication. Finally, the performance of the Clinotron under the optimized conditions demonstrates that it can work stably at 315.89 GHz with an output power of about 12 W, showing good stability and reliability.

  17. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes as a result of individual human factors and surgical teams should be better recognised and emphasised. Attitudes and acceptance of preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  18. Failure Modes and Effects Analysis of bilateral same-day cataract surgery

    PubMed Central

    Shorstein, Neal H.; Lucido, Carol; Carolan, James; Liu, Liyan; Slean, Geraldine; Herrinton, Lisa J.

    2017-01-01

    PURPOSE To systematically analyze potential process failures related to bilateral same-day cataract surgery toward the goal of improving patient safety. SETTING Twenty-one Kaiser Permanente surgery centers, Northern California, USA. DESIGN Retrospective cohort study. METHODS Quality experts performed a Failure Modes and Effects Analysis (FMEA) that included an evaluation of sterile processing, pharmaceuticals, perioperative clinic and surgical center visits, and biometry. Potential failures in human factors and communication (modes) were identified. Rates of endophthalmitis, toxic anterior segment syndrome (TASS), and unintended intraocular lens (IOL) implantation were assessed in eyes having bilateral same-day surgery from 2010 through 2014. RESULTS The study comprised 4754 eyes. The analysis identified 15 significant potential failure modes. These included lapses in instrument processing and compounding error of intracameral antibiotic that could lead to endophthalmitis or TASS and ambiguous documentation of IOL selection by surgeons, which could lead to unintended IOL implantation. Of the study sample, 1 eye developed endophthalmitis, 1 eye had unintended IOL implantation (rates, 2 per 10 000; 95% confidence intervals [CI] 0.1–12.0 per 10 000), and no eyes developed TASS (upper 95% CI, 8 per 10 000). Recommendations included improving oversight of cleaning and sterilization practices, separating lots of compounded drugs for each eye, and enhancing IOL verification procedures. CONCLUSIONS Potential failure modes and recommended actions in bilateral same-day cataract surgery were determined using a FMEA. These findings might help improve the reliability and safety of bilateral same-day cataract surgery based on current evidence and standards. PMID:28410711

  19. Safety Strategies in an Academic Radiation Oncology Department and Recommendations for Action

    PubMed Central

    Terezakis, Stephanie A.; Pronovost, Peter; Harris, Kendra; DeWeese, Theodore; Ford, Eric

    2013-01-01

    Background Safety initiatives in the United States continue to work on providing guidance as to how the average practitioner might make patients safer in the face of the complex process by which radiation therapy (RT), an essential treatment used in the management of many patients with cancer, is prepared and delivered. Quality control measures can uncover certain specific errors such as machine dose mis-calibration or misalignments of the patient in the radiation treatment beam. However, they are less effective at uncovering less common errors that can occur anywhere along the treatment planning and delivery process, and even when the process is functioning as intended, errors still occur. Prioritizing Risks and Implementing Risk-Reduction Strategies Activities undertaken at the radiation oncology department at the Johns Hopkins Hospital (Baltimore) include Failure Mode and Effects Analysis (FMEA), risk-reduction interventions, and voluntary error and near-miss reporting systems. A visual process map portrayed 269 RT steps occurring among four subprocesses—including consult, simulation, treatment planning, and treatment delivery. Two FMEAs revealed 127 and 159 possible failure modes, respectively. Risk-reduction interventions for 15 “top-ranked” failure modes were implemented. Since the error and near-miss reporting system’s implementation in the department in 2007, 253 events have been logged. However, the system may be insufficient for radiation oncology, for which a greater level of practice-specific information is required to fully understand each event. Conclusions The “basic science” of radiation treatment has received considerable support and attention in developing novel therapies to benefit patients. The time has come to apply the same focus and resources to ensuring that patients safely receive the maximal benefits possible. PMID:21819027

  20. Clinical implementation and failure mode and effects analysis of HDR skin brachytherapy using Valencia and Leipzig surface applicators.

    PubMed

    Sayler, Elaine; Eldredge-Hindy, Harriet; Dinome, Jessie; Lockamy, Virginia; Harrison, Amy S

    2015-01-01

    The planning procedure for Valencia and Leipzig surface applicators (VLSAs) (Nucletron, Veenendaal, The Netherlands) differs substantially from CT-based planning; the unfamiliarity could lead to significant errors. This study applies failure modes and effects analysis (FMEA) to high-dose-rate (HDR) skin brachytherapy using VLSAs to ensure safety and quality. A multidisciplinary team created a protocol for HDR VLSA skin treatments and applied FMEA. Failure modes were identified and scored by severity, occurrence, and detectability. The clinical procedure was then revised to address high-scoring process nodes. Several key components were added to the protocol to minimize risk priority numbers. (1) Diagnosis, prescription, applicator selection, and setup are reviewed at weekly quality assurance rounds. Peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinic's electronic medical record system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planner as well as increases the detectability of an error. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, facilitating data entry and verification. (5) VLSAs are color coded and labeled to match the electronic medical record prescriptions, simplifying in-room selection and verification. Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLSA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
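
    A minimal FMEA scoring sketch follows; the failure modes and 1-10 severity/occurrence/detectability scores are invented for illustration, but the risk-priority-number arithmetic is the conventional one.

    ```python
    # Each entry: (description, severity, occurrence, detectability), 1-10.
    failure_modes = [
        ("wrong applicator selected", 8, 3, 4),
        ("prescription entry mistyped", 9, 2, 3),
        ("plan parameters transcribed wrong", 7, 4, 5),
    ]

    def rpn(severity, occurrence, detectability):
        """Conventional risk priority number: higher means act sooner."""
        return severity * occurrence * detectability

    # Rank failure modes so risk-reduction effort goes to the top scores.
    for desc, s, o, d in sorted(failure_modes, key=lambda m: -rpn(*m[1:])):
        print(f"{rpn(s, o, d):4d}  {desc}")
    ```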

  1. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study gathered feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire, then summarized the results and returned them to the panel for feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as the most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top-10 errors list based on means, with heavy workload and fatigue at the top of the list. The Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  2. Interferometer for Measuring Displacement to Within 20 pm

    NASA Technical Reports Server (NTRS)

    Zhao, Feng

    2003-01-01

    An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces self-interference (relative to the amplitude splits used in other interferometers) and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams, and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.

  3. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  4. Spillover modes in multiplex games: double-edged effects on cooperation and their coevolution.

    PubMed

    Khoo, Tommy; Fu, Feng; Pauls, Scott

    2018-05-02

    In recent years, there has been growing interest in studying games on multiplex networks that account for interactions across linked social contexts. However, little is known about how potential cross-context interference, or spillover, of individual behavioural strategy impacts overall cooperation. We consider three plausible spillover modes, quantifying and comparing their effects on the evolution of cooperation. In our model, social interactions take place on two network layers: repeated interactions with close neighbours in a lattice, and one-shot interactions with random individuals. Spillover can occur during the learning process with accidental cross-layer strategy transfer, or during social interactions with errors in implementation. Our analytical results, using extended pair approximation, are in good agreement with extensive simulations. We find double-edged effects of spillover: increasing the intensity of spillover can promote cooperation provided cooperation is favoured in one layer, but too much spillover is detrimental. We also discover a bistability phenomenon: spillover hinders or promotes cooperation depending on initial frequencies of cooperation in each layer. Furthermore, comparing strategy combinations emerging in each spillover mode provides good indication of their co-evolutionary dynamics with cooperation. Our results make testable predictions that inspire future research, and shed light on human cooperation across social domains.

  5. Feedforward Equalizers for MDM-WDM in Multimode Fiber Interconnects

    NASA Astrophysics Data System (ADS)

    Masunda, Tendai; Amphawan, Angela

    2018-04-01

    In this paper, we present new tap configurations of a feedforward equalizer to mitigate mode coupling in a 60-Gbps 18-channel mode-wavelength division multiplexing system over a 2.5-km-long multimode fiber. The performance of the equalization is measured through analyses of eye diagrams, power coupling coefficients and bit-error rates.

  6. 47 CFR 80.1125 - Search and rescue coordinating communications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... station involved may impose silence on stations which interfere with that traffic. This instruction may be... “silence, m'aider”; (2) In narrow-band direct-printing telegraphy normally using forward-error correcting mode, the signal SILENCE MAYDAY. However, the ARQ mode may be used when it is advantageous to do so. (f...

  7. Methods and circuitry for reconfigurable SEU/SET tolerance

    NASA Technical Reports Server (NTRS)

    Shuler, Jr., Robert L. (Inventor)

    2010-01-01

    A device is disclosed in one embodiment that has multiple identical sets of programmable functional elements, programmable routing resources, and majority voters that correct errors. The voters accept a mode input for a redundancy mode and a split mode. In the redundancy mode, the programmable functional elements are identical and are programmed identically so the voters produce an output corresponding to the majority of inputs that agree. In a split mode, each voter selects a particular programmable functional element output as the output of the voter. Therefore, in the split mode, the programmable functional elements can perform different functions, operate independently, and/or be connected together to process different parts of the same problem.
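
    The majority voting that corrects errors in redundancy mode reduces to a bitwise 2-of-3 function, sketched below.

    ```python
    def majority_vote(a: int, b: int, c: int) -> int:
        """Bitwise 2-of-3 voter, the core of triple modular redundancy:
        each output bit is set iff at least two input copies agree on 1."""
        return (a & b) | (a & c) | (b & c)

    # One upset copy is out-voted; the corrected word matches the majority.
    assert majority_vote(0b1011, 0b1011, 0b0011) == 0b1011
    ```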

  8. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry measures all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analyzed via the error frequency and the analysis-of-variance method of mathematical statistics. The accuracy of the measured data and the difficulty of measuring particular parts of the human body are determined, the causes of data errors are studied further, and the key points for minimizing errors are summarized. This paper analyzes the measured data based on error frequency and, in this way, provides reference elements to promote the development of the garment industry.
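
    The analysis-of-variance step can be sketched with scipy; the repeated-measurement groups below are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical repeated measurements of one body dimension (cm) taken
    # by three different measurers.
    m1 = np.array([92.1, 92.4, 91.9, 92.2])
    m2 = np.array([92.8, 93.1, 92.6, 92.9])
    m3 = np.array([92.0, 92.3, 92.1, 92.2])

    f_stat, p_value = stats.f_oneway(m1, m2, m3)    # one-way ANOVA
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # small p: measurers differ
    ```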

  9. Frequency encoded auditory display of the critical tracking task

    NASA Technical Reports Server (NTRS)

    Stevenson, J.

    1984-01-01

    The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency with a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. Asymptotic performance on the critical tracking task was slightly, but significantly, better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement grows with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.
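
    A plausible form of the error-to-frequency mapping is sketched below; the abstract gives only the six-octave range and the 1 kHz center, so the linear-in-log mapping and the clipping are assumptions.

    ```python
    import numpy as np

    def error_to_frequency(err, err_max, f_center=1000.0, octaves=6.0):
        """Map a vertical tracking error to a tone frequency on a log
        scale: zero error gives f_center, full-scale error reaches half
        the total octave span above or below it."""
        x = np.clip(err / err_max, -1.0, 1.0)
        return f_center * 2.0 ** (x * octaves / 2.0)

    print(error_to_frequency(0.0, 1.0))   # 1000 Hz at zero error
    print(error_to_frequency(1.0, 1.0))   # 8000 Hz at +full scale (3 octaves up)
    ```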

  10. Locked-mode avoidance and recovery without external momentum input

    NASA Astrophysics Data System (ADS)

    Delgado-Aparicio, L.; Gates, D. A.; Wolfe, S.; Rice, J. E.; Gao, C.; Wukitch, S.; Greenwald, M.; Hughes, J.; Marmar, E.; Scott, S.

    2014-10-01

    Error-field-induced locked modes (LMs) have been studied in C-Mod at ITER toroidal fields without NBI fueling and momentum input. The use of ICRH heating in sync with the error-field ramp-up resulted in a successful delay of the mode onset when PICRH > 1 MW and a transition into H-mode when PICRH > 2 MW. The recovery experiments consisted of applying ICRH power during the LM non-rotating phase, successfully unlocking the core plasma. The "induced" toroidal rotation was in the counter-current direction, restoring the direction and magnitude of the toroidal flow before the LM formation, but contrary to the expected Rice scaling in the co-current direction. However, the LM occurs near the LOC/SOC transition, where rotation reversals are commonly observed. Once PICRH is turned off, the core plasma "locks" at later times depending on the evolution of ne and Vt. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT and DE-AC02-09CH11466 at PPPL.

  11. Heralded creation of photonic qudits from parametric down-conversion using linear optics

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Jun-ichi; Bergmann, Marcel; van Loock, Peter; Fuwa, Maria; Okada, Masanori; Takase, Kan; Toyama, Takeshi; Makino, Kenzo; Takeda, Shuntaro; Furusawa, Akira

    2018-05-01

    We propose an experimental scheme to generate, in a heralded fashion, arbitrary quantum superpositions of two-mode optical states with a fixed total photon number n based on weakly squeezed two-mode squeezed state resources (obtained via weak parametric down-conversion), linear optics, and photon detection. Arbitrary d-level (qudit) states can be created this way, where d = n + 1. Furthermore, we experimentally demonstrate our scheme for n = 2. The resulting qutrit states are characterized via optical homodyne tomography. We also discuss possible extensions to more than two modes, concluding that, in general, our approach ceases to work in this case. For illustration and with regards to possible applications, we explicitly calculate a few examples such as NOON states and logical qubit states for quantum error correction. In particular, our approach enables one to construct bosonic qubit error-correction codes against amplitude damping (photon loss) with a typical suppression of √n - 1 losses and spanned by two logical codewords that each correspond to an n-photon superposition for two bosonic modes.

  12. OVERVIEW: USING MODE OF ACTION AND LIFE STAGE INFORMATION TO EVALUATE THE HUMAN RELEVANCE OF ANIMAL TOXICITY DATA.

    EPA Science Inventory

    A manuscript summarizes a workshop aimed at developing a framework to determine the relevancy of animal modes-of-action for extrapolation to humans. A complete mode of action human relevance analysis - as distinct from mode of action (MOA) analysis alone - depends on robust info...

  13. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  14. Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; King, Matt; Dai, Wujiao

    2018-05-01

    Spatially correlated common mode error (CME) always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are common mode error. An average reduction of ~40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. The ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS-observed velocities and four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of the various GIA model predictions.
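
    The PCA variant of the spatiotemporal filter is easy to sketch: treat the leading principal components of the residual matrix as common mode error and subtract them (the study also evaluates an ICA counterpart):

    ```python
    import numpy as np

    def remove_cme_pca(X, n_cme=1):
        """Subtract the leading n_cme principal components of an
        epochs-by-stations residual matrix X as common mode error."""
        Xc = X - X.mean(axis=0)                         # de-mean each station
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        cme = (U[:, :n_cme] * s[:n_cme]) @ Vt[:n_cme]   # rank-k common signal
        return X - cme

    # Toy usage: 1000 daily epochs at 12 stations sharing one common signal.
    common = np.sin(np.linspace(0, 20, 1000))[:, None]
    X = common + 0.3 * np.random.randn(1000, 12)
    print(np.std(X), np.std(remove_cme_pca(X)))   # scatter drops after filtering
    ```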

  15. A Simple and Sensitive LC-MS/MS Method for Determination of Four Major Active Diterpenoids from Andrographis paniculata in Human Plasma and Its Application to a Pilot Study.

    PubMed

    Pholphana, Nanthanit; Panomvana, Duangchit; Rangkadilok, Nuchanart; Suriyo, Tawit; Ungtrakul, Teerapat; Pongpun, Wanwisa; Thaeopattha, Saichit; Satayavivad, Jutamaad

    2016-01-01

    Andrographis paniculata contains four major active diterpenoids, including andrographolide (1), 14-deoxy-11, 12-didehydroandrographolide (2), neoandrographolide (3), and 14-deoxyandrographolide (4), which exhibit differences in types and/or degrees of their pharmacological activity. Previous pharmacokinetic studies in humans reported only the parameters of compound 1 and its analytical method in human plasma. The purpose of this study was to develop a simple, sensitive, and selective liquid chromatography tandem-mass spectrometry technique for the simultaneous determination of all four major active diterpenoids in the A. paniculata product in human plasma. These four diterpenoids in plasma samples were extracted by a simple protein precipitation method with methanol and separated on a Kinetex C18 column using a gradient system with a mobile phase of acetonitrile and water. The liquid chromatography tandem-mass spectrometry was performed in the negative mode, and the multiple reaction monitoring mode was used for the quantitation. The method showed a good linearity over a wide concentration range of 2.50-500 ng/mL for 1 and over the range of 1.00-500 ng/mL for the other diterpenoids with a correlation coefficient R(2) > 0.995. The lower limit of quantification of 1 was found to be 2.50 ng/mL, while those of the other diterpenoids were 1.00 ng/mL. The intraday and interday accuracy (relative error) ranged from 0.03 % to 10.03 %, and the intraday and interday precisions (relative standard deviation) were in the range of 2.05-9.67 %. The extraction recovery (86.54-111.56 %) with a relative standard deviation of 2.78-8.61 % and the matrix effect (85.15-112.36 %) were within the acceptance criteria. Moreover, these four major active diterpenoids were stable in plasma samples at the studied storage conditions with a relative error ≤-9.79 % and a relative standard deviation ≤ 9.26 %. Hence, this present method was successfully validated and used in the pilot study to determine the pharmacokinetic parameters of all four major active diterpenoids in human plasma after multiple oral doses of the A. paniculata product were administered to a healthy, Thai female volunteer. Georg Thieme Verlag KG Stuttgart · New York.

  16. Hologic QDR 2000 whole-body scans: a comparison of three combinations of scan modes and analysis software

    NASA Technical Reports Server (NTRS)

    Spector, E.; LeBlanc, A.; Shackelford, L.

    1995-01-01

    This study reports on the short-term in vivo precision and absolute measurements of three combinations of whole-body scan modes and analysis software using a Hologic QDR 2000 dual-energy X-ray densitometer. A group of 21 normal, healthy volunteers (11 male and 10 female) were scanned six times, receiving one pencil-beam and one array whole-body scan on three occasions approximately 1 week apart. The following combinations of scan modes and analysis software were used: pencil-beam scans analyzed with Hologic's standard whole-body software (PB scans); the same pencil-beam analyzed with Hologic's newer "enhanced" software (EPB scans); and array scans analyzed with the enhanced software (EA scans). Precision values (% coefficient of variation, %CV) were calculated for whole-body and regional bone mineral content (BMC), bone mineral density (BMD), fat mass, lean mass, %fat and total mass. In general, there was no significant difference among the three scan types with respect to short-term precision of BMD and only slight differences in the precision of BMC. Precision of BMC and BMD for all three scan types was excellent: < 1% CV for whole-body values, with most regional values in the 1%-2% range. Pencil-beam scans demonstrated significantly better soft tissue precision than did array scans. Precision errors for whole-body lean mass were: 0.9% (PB), 1.1% (EPB) and 1.9% (EA). Precision errors for whole-body fat mass were: 1.7% (PB), 2.4% (EPB) and 5.6% (EA). EPB precision errors were slightly higher than PB precision errors for lean, fat and %fat measurements of all regions except the head, although these differences were significant only for the fat and % fat of the arms and legs. In addition EPB precision values exhibited greater individual variability than PB precision values. Finally, absolute values of bone and soft tissue were compared among the three combinations of scan and analysis modes. BMC, BMD, fat mass, %fat and lean mass were significantly different between PB scans and either of the EPB or EA scans. Differences were as large as 20%-25% for certain regional fat and BMD measurements. Additional work may be needed to examine the relative accuracy of the scan mode/software combinations and to identify reasons for the differences in soft tissue precision with the array whole-body scan mode.
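
    Short-term precision as %CV can be computed as below; the RMS-across-subjects convention is common in densitometry, though whether this study used exactly that estimator is an assumption, and the BMD values are invented.

    ```python
    import numpy as np

    def rms_percent_cv(repeat_scans):
        """Short-term precision as the RMS coefficient of variation
        across subjects, in percent."""
        cvs = [np.std(r, ddof=1) / np.mean(r) for r in repeat_scans]
        return 100.0 * np.sqrt(np.mean(np.square(cvs)))

    # Hypothetical whole-body BMD (g/cm^2), two subjects, three visits each.
    print(rms_percent_cv([[1.112, 1.118, 1.109], [0.981, 0.987, 0.984]]))
    ```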

  17. Tailoring a Human Reliability Analysis to Your Industry Needs

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2016-01-01

    Accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (e.g., railroads or airlines) would have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

  18. Critical error fields for locked mode instability in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R.J.; Fitzpatrick, R.; Hender, T.C.

    1992-07-01

    Otherwise stable discharges can become nonlinearly unstable to disruptive locked modes when subjected to a resonant m=2, n=1 error field from irregular poloidal field coils, as in DIII-D (Nucl. Fusion 31, 875 (1991)), or from resonant magnetic perturbation coils, as in COMPASS-C (Proceedings of the 18th European Conference on Controlled Fusion and Plasma Physics, Berlin (EPS, Petit-Lancy, Switzerland, 1991), Vol. 15C, Part II, p. 61). Experiments in Ohmically heated deuterium discharges with q ~ 3.5, n̄ ~ 2 x 10^19 m^-3 and B_T ~ 1.2 T show that a much larger relative error field (B_r21/B_T ~ 1 x 10^-3) is required to produce a locked mode in the small, rapidly rotating plasma of COMPASS-C (R_0 = 0.56 m, f ~ 13 kHz) than in the medium-sized plasmas of DIII-D (R_0 = 1.67 m, f ~ 1.6 kHz), where the critical relative error field is B_r21/B_T ~ 2 x 10^-4. This dependence of the instability threshold is explained by a nonlinear tearing theory of the interaction of resonant magnetic perturbations with rotating plasmas, which predicts that the critical error field scales as (f R_0/B_T)^(4/3) n̄^(2/3). Extrapolating from existing devices, the predicted critical field for locked modes in Ohmic discharges on the International Thermonuclear Experimental Reactor (ITER) (Nucl. Fusion 30, 1183 (1990)) (f = 0.17 kHz, R_0 = 6.0 m, B_T = 4.9 T, n̄ = 2 x 10^19 m^-3) is B_r21/B_T ~ 2 x 10^-5.
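
    The quoted scaling law is straightforward to encode; the dimensional prefactor is not given here, so the function below is only useful for relative comparisons unless calibrated against a known device.

    ```python
    def critical_error_field(f_hz, r0_m, bt_t, n_19, c=1.0):
        """Relative threshold B_r21/B_T from the quoted scaling
        (f*R0/B_T)^(4/3) * n^(2/3), with density n_19 in 10^19 m^-3.
        The prefactor c must be calibrated against a known device;
        c=1.0 is only a placeholder."""
        return c * (f_hz * r0_m / bt_t) ** (4.0 / 3.0) * n_19 ** (2.0 / 3.0)

    # The ratio between two devices cancels c, so relative comparisons
    # are safe even without calibration.
    ratio = (critical_error_field(13e3, 0.56, 1.2, 2.0)
             / critical_error_field(1.6e3, 1.67, 1.2, 2.0))
    print(ratio)   # COMPASS-C threshold relative to a DIII-D-like case
    ```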

  19. Software Assists in Responding to Anomalous Conditions

    NASA Technical Reports Server (NTRS)

    James, Mark; Kronbert, F.; Weiner, A.; Morgan, T.; Stroozas, B.; Girouard, F.; Hopkins, A.; Wong, L.; Kneubuhl, J.; Malina, R.

    2004-01-01

    Fault Induced Document Retrieval Officer (FIDO) is a computer program that reduces the need for a large and costly team of engineers and/or technicians to monitor the state of a spacecraft and associated ground systems and respond to anomalies. FIDO includes artificial-intelligence components that imitate the reasoning of human experts with reference to a knowledge base of rules that represent failure modes and to a database of engineering documentation. These components act together to give an unskilled operator instantaneous expert assistance and access to information that can enable resolution of most anomalies, without the need for highly paid experts. FIDO provides a system state summary (a configurable engineering summary) and documentation for diagnosis of a potentially failing component that might have caused a given error message or anomaly. FIDO also enables high-level browsing of documentation by use of an interface indexed to the particular error message. The collection of available documents includes information on operations and associated procedures, engineering problem reports, documentation of components, and engineering drawings. FIDO also affords a capability for combining information on the state of ground systems with detailed, hierarchically organized, hypertext-enabled documentation.

  20. Objective assessment of the contribution of the RECOPESCA network to the monitoring of 3D coastal ocean variables in the Bay of Biscay and the English Channel

    NASA Astrophysics Data System (ADS)

    Lamouroux, Julien; Charria, Guillaume; De Mey, Pierre; Raynaud, Stéphane; Heyraud, Catherine; Craneguy, Philippe; Dumas, Franck; Le Hénaff, Matthieu

    2016-04-01

    In the Bay of Biscay and the English Channel, in situ observations represent a key element to monitor and to understand the wide range of processes in the coastal ocean and their direct impacts on human activities. An efficient way to measure the hydrological content of the water column over the main part of the continental shelf is to consider ships of opportunity as the surface to cover is wide and could be far from the coast. In the French observation strategy, the RECOPESCA programme, as a component of the High frequency Observation network for the environment in coastal SEAs (HOSEA), aims to collect environmental observations from sensors attached to fishing nets. In the present study, we assess that network using the Array Modes (ArM) method (a stochastic implementation of Le Hénaff et al. Ocean Dyn 59: 3-20. doi: 10.1007/s10236-008-0144-7, 2009). That model ensemble-based method is used here to compare model and observation errors and to quantitatively evaluate the performance of the observation network at detecting prior (model) uncertainties, based on hypotheses on error sources. A reference network, based on fishing vessel observations in 2008, is assessed using that method. Considering the various seasons, we show the efficiency of the network at detecting the main model uncertainties. Moreover, three scenarios, based on the reference network, a denser network in 2010 and a fictive network aggregated from a pluri-annual collection of profiles, are also analysed. Our sensitivity study shows the importance of the profile positions with respect to the sheer number of profiles for ensuring the ability of the network to describe the main error modes. More generally, we demonstrate the capacity of this method, with a low computational cost, to assess and to design new in situ observation networks.

  1. Dynamically tuned vibratory micromechanical gyroscope accelerometer

    NASA Astrophysics Data System (ADS)

    Lee, Byeungleul; Oh, Yong-Soo; Park, Kyu-Yeon; Ha, Byeoungju; Ko, Younil; Kim, Jeong-gon; Kang, Seokjin; Choi, Sangon; Song, Ci M.

    1997-11-01

    A comb-driven vibratory micro-gyroscope, which utilizes dynamically tunable resonant modes for higher rate sensitivity without accelerational error, has been developed and analyzed. Surface micromachining technology is used to fabricate the gyroscope, which has a vibrating part of 400 x 600 micrometers, with a 6-mask process, and the poly-silicon structural layer is deposited by LPCVD at 625 degrees C. The gyroscope and the interface electronics are housed in a hermetically sealed vacuum package for low vibrational damping. The gyroscope is designed to be driven parallel to the substrate by electrostatic forces and is subject to Coriolis forces in the vertical direction, with a folded-beam structure. In this scheme, the resonant frequency of the driving mode is located below that of the sensing mode, so it is possible to adjust the sensing mode through the negative stiffness effect, by applying an inter-plate voltage, to tune the vibration modes for higher rate sensitivity. Unfortunately, such a micromechanical vibratory gyroscope is also sensitive to vertical acceleration, especially when the stiffness of the vibrating structure is made low in order to detect a very small Coriolis force. In this study, we distinguished the rate output from the accelerational error with a phase-sensitive synchronous demodulator and devised a feedback loop that maintains the resonant frequency of the vertical sensing mode by varying the inter-plate tuning voltage according to the accelerational output. Therefore, this gyroscope has high rate sensitivity without acceleration error, and it can also be used as a resonant accelerometer. The gyroscope was tested on a rotational rate table with the resonant frequencies held at a 50 Hz separation by the dynamic tuning feedback loop. A self-sustained oscillating loop is used to apply a dc 2 V + ac 30 mVpk driving voltage to the drive electrodes. The gyroscope achieves 0.1 deg/sec resolution, 50 Hz bandwidth, and 1.3 mV/deg/sec sensitivity.
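
    The electrostatic tuning described above rests on the standard parallel-plate negative-stiffness effect. As a sketch (textbook relation, not a formula from the paper), an inter-plate voltage V across a gap d softens the sensing mode according to

        f_{\mathrm{sense}}(V) = \frac{1}{2\pi} \sqrt{\frac{k_m - \varepsilon_0 A V^2 / d^3}{m}}

    where k_m is the mechanical stiffness, A the plate area, and m the proof mass; raising V pulls the sensing-mode frequency down, which is how a fixed separation from the driving mode can be maintained by feedback.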

  2. Exact free oscillation spectra, splitting functions and the resolvability of Earth's density structure

    NASA Astrophysics Data System (ADS)

    Akbarashrafi, F.; Al-Attar, D.; Deuss, A.; Trampert, J.; Valentine, A. P.

    2018-04-01

    Seismic free oscillations, or normal modes, provide a convenient tool to calculate low-frequency seismograms in heterogeneous Earth models. A procedure called `full mode coupling' allows the seismic response of the Earth to be computed. However, in order to be theoretically exact, such calculations must involve an infinite set of modes. In practice, only a finite subset of modes can be used, introducing an error into the seismograms. By systematically increasing the number of modes beyond the highest frequency of interest in the seismograms, we investigate the convergence of full-coupling calculations. As a rule-of-thumb, it is necessary to couple modes 1-2 mHz above the highest frequency of interest, although results depend upon the details of the Earth model. This is significantly higher than has previously been assumed. Observations of free oscillations also provide important constraints on the heterogeneous structure of the Earth. Historically, this inference problem has been addressed by the measurement and interpretation of splitting functions. These can be seen as secondary data extracted from low frequency seismograms. The measurement step necessitates the calculation of synthetic seismograms, but current implementations rely on approximations referred to as self- or group-coupling and do not use fully accurate seismograms. We therefore also investigate whether a systematic error might be present in currently published splitting functions. We find no evidence for any systematic bias, but published uncertainties must be doubled to properly account for the errors due to theoretical omissions and regularization in the measurement process. Correspondingly, uncertainties in results derived from splitting functions must also be increased. As is well known, density has only a weak signal in low-frequency seismograms. Our results suggest this signal is of similar scale to the true uncertainties associated with currently published splitting functions. Thus, it seems that great care must be taken in any attempt to robustly infer details of Earth's density structure using current splitting functions.

  3. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
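
    The SIMULATE-ANALYZE-REFINE loop advocated here is easy to express in outline. A schematic sketch with hypothetical stubs standing in for the APEX components (not the APEX architecture itself):

        # Schematic of the SIMULATE / ANALYZE / REFINE loop.
        # The three stubs are illustrative placeholders.

        def simulate(interface, tasks):
            """Run a resource-limited human model on the interface; return an activity trace."""
            return []  # stub

        def find_errors(trace):
            """Scan a trace for predicted human errors (omissions, mode slips, ...)."""
            return []  # stub

        def refine(interface, errors):
            """Modify the interface or procedure to remove the error opportunities found."""
            return interface  # stub

        def design_iteration(interface, tasks, max_rounds=10):
            for _ in range(max_rounds):
                errors = find_errors(simulate(interface, tasks))
                if not errors:
                    break  # no predicted errors left: design converged
                interface = refine(interface, errors)
            return interface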

  4. Iodine-filter-based mobile Doppler lidar to make continuous and full-azimuth-scanned wind measurements: data acquisition and analysis system, data retrieval methods, and error analysis.

    PubMed

    Wang, Zhangjun; Liu, Zhishen; Liu, Liping; Wu, Songhua; Liu, Bingyi; Li, Zhigang; Chu, Xinzhao

    2010-12-20

    An incoherent Doppler wind lidar based on iodine edge filters has been developed at the Ocean University of China for remote measurements of atmospheric wind fields. The lidar is compact enough to fit in a minivan for mobile deployment. With its sophisticated and user-friendly data acquisition and analysis system (DAAS), this lidar has made a variety of line-of-sight (LOS) wind measurements in different operational modes. Through carefully developed data retrieval procedures, various wind products are provided by the lidar, including wind profile, LOS wind velocities in plan position indicator (PPI) and range height indicator (RHI) modes, and sea surface wind. Data are processed and displayed in real time, and continuous wind measurements have been demonstrated for as many as 16 days. Full-azimuth-scanned wind measurements in PPI mode and full-elevation-scanned wind measurements in RHI mode have been achieved with this lidar. The detection range of LOS wind velocity PPI and RHI reaches 8-10 km at night and 6-8 km during daytime with range resolution of 10 m and temporal resolution of 3 min. In this paper, we introduce the DAAS architecture and describe the data retrieval methods for various operation modes. We present the measurement procedures and results of LOS wind velocities in PPI and RHI scans along with wind profiles obtained by Doppler beam swing. The sea surface wind measured for the sailing competition during the 2008 Beijing Olympics is also presented. The precision and accuracy of wind measurements are estimated through analysis of the random errors associated with photon noise and the systematic errors introduced by the assumptions made in data retrieval. The three assumptions of horizontal homogeneity of atmosphere, close-to-zero vertical wind, and uniform sensitivity are made in order to experimentally determine the zero wind ratio and the measurement sensitivity, which are important factors in LOS wind retrieval. Deviations may occur under certain meteorological conditions, leading to bias in these situations. Based on the error analyses and measurement results, we point out the application ranges of this Doppler lidar and propose several paths for future improvement.
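
    For an edge-filter instrument of this kind, the LOS velocity is recovered, to first order, by inverting the measured filter ratio against the two calibrated quantities named above (a generic edge-technique relation, not the authors' exact expression):

        v_{\mathrm{LOS}} \approx \frac{R - R_0}{\mathrm{d}R/\mathrm{d}v}

    with R the measured edge-channel to energy-monitor ratio, R_0 the zero wind ratio, and dR/dv the measurement sensitivity. Written this way, it is clear why any bias in the calibration of R_0 or dR/dv under unfavorable meteorological conditions maps directly into a velocity bias.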

  5. Discriminating quantum-optical beam-splitter channels with number-diagonal signal states: Applications to quantum reading and target detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nair, Ranjith

    2011-09-15

    We consider the problem of distinguishing, with minimum probability of error, two optical beam-splitter channels with unequal complex-valued reflectivities using general quantum probe states entangled over M signal and M' idler mode pairs of which the signal modes are bounced off the beam splitter while the idler modes are retained losslessly. We obtain a lower bound on the output state fidelity valid for any pure input state. We define number-diagonal signal (NDS) states to be input states whose density operator in the signal modes is diagonal in the multimode number basis. For such input states, we derive series formulas for the optimal error probability, the output state fidelity, and the Chernoff-type upper bounds on the error probability. For the special cases of quantum reading of a classical digital memory and target detection (for which the reflectivities are real valued), we show that for a given input signal photon probability distribution, the fidelity is minimized by the NDS states with that distribution and that for a given average total signal energy N_s, the fidelity is minimized by any multimode Fock state with N_s total signal photons. For reading of an ideal memory, it is shown that Fock state inputs minimize the Chernoff bound. For target detection under high-loss conditions, a no-go result showing the lack of appreciable quantum advantage over coherent state transmitters is derived. A comparison of the error probability performance for quantum reading of number state and two-mode squeezed vacuum state (or EPR state) transmitters relative to coherent state transmitters is presented for various values of the reflectances. While the nonclassical states in general perform better than the coherent state, the quantitative performance gains differ depending on the values of the reflectances. The experimental outlook for realizing nonclassical gains from number state transmitters with current technology at moderate to high values of the reflectances is argued to be good.

  6. A tight Cramér-Rao bound for joint parameter estimation with a pure two-mode squeezed probe

    NASA Astrophysics Data System (ADS)

    Bradshaw, Mark; Assad, Syed M.; Lam, Ping Koy

    2017-08-01

    We calculate the Holevo Cramér-Rao bound for estimation of the displacement experienced by one mode of a two-mode squeezed vacuum state with squeezing r and find that it is equal to 4 exp(-2r). This equals the sum of the mean squared errors obtained from a dual homodyne measurement, indicating that the bound is tight and that the dual homodyne measurement is optimal.
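
    Restating the abstract's numbers as a single identity (no new derivation implied): the dual-homodyne mean squared errors for the two displacement components saturate the Holevo bound,

        \mathrm{MSE}_x + \mathrm{MSE}_p = C_H = 4\,e^{-2r},

    which reduces to 4 for an unsqueezed probe (r = 0) and improves exponentially with squeezing.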

  7. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  8. Major strengths and weaknesses of the lod score method.

    PubMed

    Ott, J

    2001-01-01

    Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure for information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is equivalent to the nonparametric methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.
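
    For reference, the lod score at recombination fraction \theta is the base-10 likelihood ratio statistic (standard definition, not specific to this paper):

        Z(\theta) = \log_{10} \frac{L(\theta)}{L(1/2)},

    i.e., the evidence for linkage at \theta against free recombination (\theta = 1/2); Z \geq 3 is the conventional threshold for declaring linkage.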

  9. Free vibration of multiwall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.; Ru, C. Q.; Mioduchowski, A.

    2005-06-01

    A multiple-elastic shell model is applied to systematically study free vibration of multiwall carbon nanotubes (MWNTs). Using Flugge [Stresses in Shells (Springer, Berlin, 1960)] equations of elastic shells, vibrational frequencies and associated modes are calculated for MWNTs of innermost radii 5 and 0.65 nm, respectively. The emphasis is placed on the effect of interlayer van der Waals (vdW) interaction on free vibration of MWNTs. Our results show that the interlayer vdW interaction has a crucial effect on radial (R) modes of large-radius MWNTs (e.g., of the innermost radius 5 nm), but is less pronounced for R modes of small-radius MWNTs (e.g., of the innermost radius 0.65 nm), and usually negligible for torsional (T) and longitudinal (L) modes of MWNTs. This is attributed to the fact that the interlayer vdW interaction, characterized by a radius-independent vdW interaction coefficient, depends on radial deflections only, and is dominant only for large-radius MWNTs of lower radial rigidity but less pronounced for small-radius MWNTs of much higher radial rigidity. As a result, the R modes of large-radius MWNTs are typically collective motions of almost all nested tubes, and the R modes of small-radius MWNTs, as well as the T and L modes of MWNTs, are basically vibrations of individual tubes. In particular, an approximate single-shell model is suggested to replace the multiple-shell model in calculating the lowest frequency of R mode of thin MWNTs (defined by the innermost radius-to-thickness ratio not less than 4) with relative errors less than 10%. In addition, the simplified Flugge single equation is adopted to substitute the exact Flugge equations in determining the R-mode frequencies of MWNTs with relative errors less than 10%.

  10. Evolutionary Model and Oscillation Frequencies for α Ursae Majoris: A Comparison with Observations

    NASA Astrophysics Data System (ADS)

    Guenther, D. B.; Demarque, P.; Buzasi, D.; Catanzarite, J.; Laher, R.; Conrow, T.; Kreidl, T.

    2000-02-01

    Inspired by the observations of low-amplitude oscillations of α Ursae Majoris A by Buzasi et al. using the WIRE satellite, a grid of stellar evolutionary tracks has been constructed to derive physically consistent interior models for the nearby red giant. The pulsation properties of these models were then calculated and compared with the observations. It is found that, by adopting the correct metallicity and for a normal helium abundance, only models in the mass range of 4.0-4.5 Msolar fall within the observational error box for α UMa A. This mass range is compatible, within the uncertainties, with the mass derived from the astrometric mass function. Analysis of the pulsation spectra of the models indicates that the observed α UMa oscillations can be most simply interpreted as radial (i.e., l=0) p-mode oscillations of low radial order n. The lowest frequencies observed by Buzasi et al. are compatible, within the observational errors, with model frequencies of radial orders n=0, 1, and 2 for models in the mass range of 4.0-4.5 Msolar. The higher frequencies observed can also be tentatively interpreted as higher n-valued radial p-modes, if we allow that some n-values are not presently observed. The theoretical l=1, 2, and 3 modes in the observed frequency range are g-modes with a mixed mode character, that is, with p-mode-like characteristics near the surface and g-mode-like characteristics in the interior. The calculated radial p-mode frequencies are nearly equally spaced, separated by 2-3 μHz. The nonradial modes are very densely packed throughout the observed frequency range and, even if excited to significant amplitudes at the surface, are unlikely to be resolved by the present observations.
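
    The near-equal spacing of the radial p-modes quoted here is what the usual asymptotic relation predicts (a standard result, given for orientation):

        \nu_{n,\ell} \approx \Delta\nu \left( n + \frac{\ell}{2} + \varepsilon \right), \qquad \Delta\nu \propto \sqrt{M/R^3},

    so the observed 2-3 μHz separation is a direct probe of the giant's mean density and hence, at a given radius, of its mass.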

  11. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  12. An Analysis of U.S. Army Fratricide Incidents during the Global War on Terror (11 September 2001 to 31 March 2008)

    DTIC Science & Technology

    2010-03-15

    This report classifies U.S. Army fratricide incidents using the Human Factors Analysis and Classification System (HFACS), which is based on Reason's "Swiss cheese" model of human error (1990). In that model, an accident is likely to occur when all of the errors, or "holes" in the system's defenses, align. A detailed description of HFACS can be found in Wiegmann and Shappell (2003).

  13. Adaptive Filtration of Physiological Artifacts in EEG Signals in Humans Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, V. V.; Runnova, A. E.; Hramov, A. E.

    2018-05-01

    A new method for adaptive filtration of experimental EEG signals in humans and for removal of different physiological artifacts has been proposed. The algorithm of the method includes empirical mode decomposition of the EEG, determination of the number of empirical modes to be considered, analysis of the empirical modes to identify those containing artifacts, removal of these modes, and reconstruction of the EEG signal. The method was tested on experimental human EEG signals and demonstrated high efficiency in the removal of different types of physiological EEG artifacts.
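
    The pipeline described (decompose, flag artifact-bearing modes, drop them, reconstruct) can be sketched with the PyEMD package. The artifact test below, correlation of each mode with a reference EOG channel, is an illustrative assumption rather than the paper's criterion:

        import numpy as np
        from PyEMD import EMD  # pip install EMD-signal

        def remove_artifact_modes(eeg, eog, corr_threshold=0.5):
            """EMD-based artifact filtering: drop IMFs that track the EOG reference."""
            imfs = EMD().emd(eeg)  # empirical mode decomposition
            keep = [
                imf for imf in imfs
                if abs(np.corrcoef(imf, eog)[0, 1]) < corr_threshold
            ]
            if not keep:
                return np.zeros_like(eeg)
            return np.sum(keep, axis=0)  # reconstruct the cleaned EEG

        # Synthetic demo: 1 Hz "EEG" contaminated by a slow 0.2 Hz "eye" drift
        t = np.linspace(0, 10, 2000)
        eog = np.sin(2 * np.pi * 0.2 * t)
        eeg = np.sin(2 * np.pi * 1.0 * t) + 2.0 * eog
        clean = remove_artifact_modes(eeg, eog)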

  14. An improved flexible telemetry system to autonomously monitor sub-bandage pressure and wound moisture.

    PubMed

    Mehmood, Nasir; Hariz, Alex; Templeton, Sue; Voelcker, Nicolas H

    2014-11-18

    This paper presents the development of an improved mobile-based telemetric dual mode sensing system to monitor pressure and moisture levels in compression bandages and dressings used for chronic wound management. The system is fabricated on a 0.2 mm thick flexible printed circuit material, and is capable of sensing pressure and moisture at two locations simultaneously within a compression bandage and wound dressing. The sensors are calibrated to sense both parameters accurately, and the data are then transmitted wirelessly to a receiver connected to a mobile device. An error-correction algorithm is developed to compensate for the degradation in measurement quality caused by battery power drop over time. An Android application is also implemented to automatically receive, process, and display the sensed wound parameters. The performance of the sensing system is first validated on a mannequin limb using a compression bandage and wound dressings, and then tested on a healthy volunteer to acquire real-time performance parameters. The results obtained here suggest that this dual mode sensor can perform reliably when placed on a human limb.

  15. An Improved Flexible Telemetry System to Autonomously Monitor Sub-Bandage Pressure and Wound Moisture

    PubMed Central

    Mehmood, Nasir; Hariz, Alex; Templeton, Sue; Voelcker, Nicolas H.

    2014-01-01

    This paper presents the development of an improved mobile-based telemetric dual mode sensing system to monitor pressure and moisture levels in compression bandages and dressings used for chronic wound management. The system is fabricated on a 0.2 mm thick flexible printed circuit material, and is capable of sensing pressure and moisture at two locations simultaneously within a compression bandage and wound dressing. The sensors are calibrated to sense both parameters accurately, and the data are then transmitted wirelessly to a receiver connected to a mobile device. An error-correction algorithm is developed to compensate for the degradation in measurement quality caused by battery power drop over time. An Android application is also implemented to automatically receive, process, and display the sensed wound parameters. The performance of the sensing system is first validated on a mannequin limb using a compression bandage and wound dressings, and then tested on a healthy volunteer to acquire real-time performance parameters. The results obtained here suggest that this dual mode sensor can perform reliably when placed on a human limb. PMID:25412216

  16. Determination of chlorpyrifos and its metabolites in cells and culture media by liquid chromatography-electrospray ionization tandem mass spectrometry.

    PubMed

    Yang, Xiangkun; Wu, Xian; Brown, Kyle A; Le, Thao; Stice, Steven L; Bartlett, Michael G

    2017-09-15

    A sensitive method to simultaneously quantitate chlorpyrifos (CPF), chlorpyrifos oxon (CPO) and the detoxified product 3,5,6-trichloro-2-pyridinol (TCP) was developed, using either liquid-liquid extraction for culture media samples or protein precipitation for cell samples. Multiple reaction monitoring in positive ion mode was applied for the detection of chlorpyrifos and chlorpyrifos oxon, and selected ion recording in negative mode was applied to detect TCP. The method provided linear ranges of 5-500, 0.2-20 and 20-2000 ng/mL for media samples and 0.5-50, 0.02-2 and 2-200 ng/million cells for CPF, CPO and TCP, respectively. The method was validated using selectivity, linearity, precision, accuracy, recovery, stability and dilution tests. All relative standard deviations (RSDs) and relative errors (REs) for QC samples were within 15% (except for the LLOQ, within 20%). This method has been successfully applied to study the neurotoxicity and metabolism of chlorpyrifos in a human neuronal model. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Motion Tolerant Unfocused Imaging of Physiological Waveforms for Blood Pressure Waveform Estimation Using Ultrasound.

    PubMed

    Seo, Joohyun; Pietrangelo, Sabino J; Sodini, Charles G; Lee, Hae-Seung

    2018-05-01

    This paper details unfocused imaging using single-element ultrasound transducers for motion tolerant arterial blood pressure (ABP) waveform estimation. The ABP waveform is estimated based on pulse wave velocity and arterial pulsation through Doppler and M-mode ultrasound. This paper discusses approaches to mitigate the effect of increased clutter due to unfocused imaging on blood flow and diameter waveform estimation. An intensity reduction model (IRM) estimator is described to track the change of diameter, which outperforms a complex cross-correlation model (C3M) estimator in low contrast environments. An adaptive clutter filtering approach is also presented, which reduces the increased Doppler angle estimation error due to unfocused imaging. Experimental results in a flow phantom demonstrate that flow velocity and diameter waveforms can be reliably measured with wide lateral offsets of the transducer position. The distension waveform estimated from human carotid M-mode imaging using the IRM estimator shows physiological baseline fluctuations and 0.6-mm pulsatile diameter change on average, which is within the expected physiological range. These results show the feasibility of this low cost and portable ABP waveform estimation device.

  18. Metrological Software Test for Simulating the Method of Determining the Thermocouple Error in Situ During Operation

    NASA Astrophysics Data System (ADS)

    Chen, Jingliang; Su, Jun; Kochan, Orest; Levkiv, Mariana

    2018-04-01

    The simplified metrological software test (MST) for modeling the method of determining the thermocouple (TC) error in situ during operation is considered in the paper. The interaction between the proposed MST and a temperature measuring system is also described, in order to study the error of determining the TC error in situ during operation. Modeling studies of the influence of the random error of the temperature measuring system, as well as of the interference magnitude (both common-mode and normal-mode noise), on the error of determining the TC error in situ during operation have been carried out using the proposed MST. Noise and interference on the order of 5-6 μV cause an error of about 0.2-0.3°C. It is shown that high noise immunity is essential for accurate temperature measurements using TCs.

  19. FDDI network test adaptor error injection circuit

    NASA Technical Reports Server (NTRS)

    Eckenrode, Thomas (Inventor); Stauffer, David R. (Inventor); Stempski, Rebecca (Inventor)

    1994-01-01

    An apparatus for injecting errors into a FDDI token ring network is disclosed. The error injection scheme operates by fooling a FORMAC into thinking it sent a real frame of data. This is done by using two RAM buffers. The RAM buffer normally accessed by the RBC/DPC becomes a SHADOW RAM during error injection operation. A dummy frame is loaded into the shadow RAM in order to fool the FORMAC. This data is just like the data that would be used when sending a normal frame, with the restriction that it must be shorter than the error injection data. The other buffer, the error injection RAM, contains the error injection frame. The error injection data is sent out to the media by switching a multiplexor. When the FORMAC is done transmitting the data, the multiplexor is switched back to the normal mode. Thus, the FORMAC is unaware of what happened and the token ring remains operational.

  20. A Quality Improvement Project to Decrease Human Milk Errors in the NICU.

    PubMed

    Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E

    2017-02-01

    Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.
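
    Monitoring like this is typically done with a u-chart of errors per 1000 bottles, whose control limits widen or narrow with each month's scanning volume. A minimal sketch with invented monthly counts (not the study's data):

        import numpy as np

        # Hypothetical monthly data: scanned errors and bottles scanned
        errors = np.array([95, 88, 70, 52, 40, 31, 22, 15, 12, 11])
        bottles = np.array([1100, 1050, 1200, 980, 1150, 1000, 1020, 990, 1010, 1080])

        u = errors / bottles * 1000                  # errors per 1000 bottles
        u_bar = errors.sum() / bottles.sum() * 1000  # center line

        # 3-sigma limits vary with each month's denominator (in 1000-bottle units)
        n_thousands = bottles / 1000
        ucl = u_bar + 3 * np.sqrt(u_bar / n_thousands)
        lcl = np.maximum(u_bar - 3 * np.sqrt(u_bar / n_thousands), 0)

        for month, (ui, low, high) in enumerate(zip(u, lcl, ucl), 1):
            flag = "signal" if not low <= ui <= high else ""
            print(f"month {month:2d}: u={ui:5.1f}  limits=[{low:5.1f}, {high:5.1f}] {flag}")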

  1. Characterization of Mode 1 and Mode 2 delamination growth and thresholds in graphite/peek composites

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.; Murri, Gretchen B.

    1988-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, was characterized for mode 1 and mode 2 loadings, using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be adequately used as a life prediction tool. A small error in the estimated applied loads could lead to large errors in the delamination growth rates. Hence strain energy release rate thresholds, G sub th, below which no delamination would occur were also measured. Mode 1 and 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred until at least 1,000,000 cycles was considered the threshold strain energy release rate. Comments are given on how testing effects, facial interference or delamination front damage, may invalidate the experimental determination of the constants in the expression.
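
    The life-prediction objection can be made quantitative with a one-line propagation argument (consistent with, but not stated in, the abstract). For a Paris-type law with G proportional to the square of the applied load P,

        \frac{da}{dN} = A\,G_{\max}^{\,n}, \quad G \propto P^2 \;\Rightarrow\; \frac{\delta(da/dN)}{da/dN} \approx 2n\,\frac{\delta P}{P},

    so with exponents of order n = 10, a 5% load error already shifts the predicted growth rate by a factor of roughly (1.05)^{20} ≈ 2.7, which is why a no-growth threshold G_th is the more usable design quantity.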

  2. Characterization of Mode I and Mode II delamination growth and thresholds in AS4/PEEK composites

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.; Murri, Gretchen Bostaph

    1990-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, was characterized for mode 1 and mode 2 loadings, using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be adequately used as a life prediction tool. A small error in the estimated applied loads could lead to large errors in the delamination growth rates. Hence strain energy release rate thresholds, G sub th, below which no delamination would occur were also measured. Mode 1 and 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred until at least 1,000,000 cycles was considered the threshold strain energy release rate. Comments are given on how testing effects, facial interference or delamination front damage, may invalidate the experimental determination of the constants in the expression.

  3. Preserving flying qubit in single-mode fiber with Knill Dynamical Decoupling (KDD)

    NASA Astrophysics Data System (ADS)

    Gupta, Manish; Navarro, Erik; Moulder, Todd; Mueller, Jason; Balouchi, Ashkan; Brown, Katherine; Lee, Hwang; Dowling, Jonathan

    2015-03-01

    The implementation of information-theoretic cryptographic protocols is limited by decoherence caused by the birefringence of a single-mode fiber. We propose the Knill dynamical decoupling scheme, implemented using half-wave plates, to minimize decoherence, and show that a fidelity greater than 96% can be achieved even in the presence of rotation error.

  4. Intelligent complementary sliding-mode control for LUSMS-based X-Y-theta motion control stage.

    PubMed

    Lin, Faa-Jeng; Chen, Syuan-Yi; Shyu, Kuo-Kai; Liu, Yen-Hung

    2010-07-01

    An intelligent complementary sliding-mode control (ICSMC) system using a recurrent wavelet-based Elman neural network (RWENN) estimator is proposed in this study to control the mover position of an X-Y-theta motion control stage based on linear ultrasonic motors (LUSMs) for the tracking of various contours. By the addition of a complementary generalized error transformation, the complementary sliding-mode control (CSMC) can efficiently reduce the guaranteed ultimate bound of the tracking error by half compared with sliding-mode control (SMC) while using the saturation function. To estimate a lumped uncertainty on-line and replace the hitting control of the CSMC directly, the RWENN estimator is adopted in the proposed ICSMC system. In the RWENN, each hidden neuron employs a different wavelet function as an activation function to improve both the convergent precision and the convergent time compared with the conventional Elman neural network (ENN). The estimation laws of the RWENN are derived using the Lyapunov stability theorem to train the network parameters on-line. A robust compensator is also proposed to confront the uncertainties, including approximation error, optimal parameter vectors, and higher-order terms in the Taylor series. Finally, experimental results for the tracking of various contours show that the tracking performance of the ICSMC system is significantly improved compared with the SMC and CSMC systems.

  5. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: part II--Optimization of structural sensor placement.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-04-01

    This work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed and used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each modal virtual sensing error was determined by the modal observability levels for both the structural sensing and the target acoustic virtual sensing; and further (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.

  6. Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.

    PubMed

    Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente

    2014-07-15

    Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
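
    FMEA prioritization of this kind usually condenses the three ratings into a risk priority number (RPN), the product of occurrence, detectability, and severity; the highest-RPN failure modes become checklist items first. A minimal sketch with hypothetical failure modes rated on 1-10 scales (illustrative values, not the study's 82 failures):

        # Hypothetical neonatal-PN failure modes rated 1-10 for occurrence (O),
        # detectability (D, higher = harder to detect), and severity (S).
        failure_modes = [
            ("transcription of medical order", 6, 7, 9),
            ("wrong additive during formulation", 3, 8, 10),
            ("label mix-up at dispensing", 2, 5, 9),
        ]

        prioritized = sorted(
            ((name, o * d * s) for name, o, d, s in failure_modes),
            key=lambda pair: pair[1],
            reverse=True,
        )
        for name, rpn in prioritized:
            print(f"RPN {rpn:4d}  {name}")  # highest RPN = first checklist target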

  7. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
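
    The Monte Carlo part of such an evaluation can be sketched generically: sample whether each residual error type occurs, sample its magnitude, and fold the spread of the simulated bias into the uncertainty budget. The probabilities and magnitudes below are invented placeholders, not the published expert judgments:

        import numpy as np

        rng = np.random.default_rng(42)
        n_trials = 100_000

        # Hypothetical residual error types:
        # (probability of occurrence, sd of the resulting bias in pH units)
        error_types = [(0.01, 0.08), (0.003, 0.20)]

        bias = np.zeros(n_trials)
        for p, sd in error_types:
            occurs = rng.random(n_trials) < p
            bias += np.where(occurs, rng.normal(0.0, sd, n_trials), 0.0)

        u_human = bias.std()  # human-error contribution to the budget
        u_method = 0.03       # assumed baseline measurement uncertainty (pH units)
        u_total = np.hypot(u_method, u_human)
        print(f"u_human = {u_human:.4f}, combined u = {u_total:.4f}")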

  8. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.

  9. Mode power distribution effect in white-light multimode fiber extrinsic Fabry-Perot interferometric sensor systems.

    PubMed

    Han, Ming; Wang, Anbo

    2006-05-01

    Theoretical and experimental results have shown that mode power distribution (MPD) variations can significantly vary the phase of spectral fringes from multimode fiber extrinsic Fabry-Perot interferometric (MMF-EFPI) sensor systems. This occurs because different modes acquire different extra phase shifts when the modes reflected at the second surface couple back into the lead-in fiber end. This dependence of the fringe pattern on MPD can cause measurement errors in signal demodulation methods for white-light MMF-EFPI sensors that rely on the phase information of the fringes.

  10. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    DOT National Transportation Integrated Search

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based upon ...

  11. Time synchronization of new-generation BDS satellites using inter-satellite link measurements

    NASA Astrophysics Data System (ADS)

    Pan, Junyang; Hu, Xiaogong; Zhou, Shanshi; Tang, Chengpan; Guo, Rui; Zhu, Lingfeng; Tang, Guifeng; Hu, Guangming

    2018-01-01

    Autonomous satellite navigation is based on the ability of a Global Navigation Satellite System (GNSS), such as Beidou, to estimate orbits and clock parameters onboard satellites using Inter-Satellite Link (ISL) measurements instead of tracking data from a ground monitoring network. This paper focuses on the time synchronization of new-generation Beidou Navigation Satellite System (BDS) satellites equipped with an ISL payload. Two modes of Ka-band ISL measurements, Time Division Multiple Access (TDMA) mode and the continuous link mode, were used onboard these BDS satellites. A mathematical formulation of each measurement mode is presented, along with the derivation of satellite clock offsets and geometric ranges from the dual one-way measurements. Then, pseudoranges and clock offsets were evaluated for the new-generation BDS satellites. The evaluation shows that the ranging accuracies of TDMA ISL and the continuous link are approximately 4 cm and 1 cm (root mean square, RMS), respectively. Both lead to ISL clock offset residuals of less than 0.3 ns (RMS). For further validation, time synchronization between these satellites and a ground control station keeping the BDS system time (BDT) was conducted using L-band Two-way Satellite Time Frequency Transfer (TWSTFT). System errors in the ISL measurements were calibrated by comparing the derived clock offsets with the TWSTFT. The standard deviations of the estimated ISL system errors are less than 0.3 ns, and the calibrated ISL clock parameters are consistent with those of the L-band TWSTFT. For the regional BDS network, the addition of ISL measurements for medium Earth orbit (MEO) BDS satellites increased the clock tracking coverage by more than 40% for each orbital revolution. As a result, the clock prediction error for the satellite M1S was improved from 3.59 to 0.86 ns (RMS), and the prediction error of the satellite M2S was improved from 1.94 to 0.57 ns (RMS), a significant improvement by a factor of 3-4.
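
    The dual one-way separation used here is the standard trick: after both one-way pseudoranges over a link are corrected to a common epoch, their half-sum isolates the geometry and their half-difference isolates the clocks (generic dual one-way relations; the paper's full model carries additional system-error and correction terms):

        \frac{P_{AB} + P_{BA}}{2} \approx \rho_{AB}, \qquad \frac{P_{AB} - P_{BA}}{2} \approx c\,(\delta t_B - \delta t_A),

    so clock offsets can be estimated nearly independently of orbit error, which is what makes the sub-0.3 ns residuals quoted above achievable.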

  12. Low Power Operation of Temperature-Modulated Metal Oxide Semiconductor Gas Sensors.

    PubMed

    Burgués, Javier; Marco, Santiago

    2018-01-25

    Mobile applications based on gas sensing present new opportunities for low-cost air quality monitoring, safety, and healthcare. Metal oxide semiconductor (MOX) gas sensors represent the most prominent technology for integration into portable devices, such as smartphones and wearables. Traditionally, MOX sensors have been continuously powered to increase the stability of the sensing layer. However, continuous power is not feasible in many battery-operated applications due to power consumption limitations or the intended intermittent device operation. This work benchmarks two low-power modes, duty-cycling and on-demand, against continuous powering. The duty-cycling mode periodically turns the sensors on and off and represents a trade-off between power consumption and stability. On-demand operation achieves the lowest power consumption by powering the sensors only while taking a measurement. Twelve thermally modulated SB-500-12 (FIS Inc. Jacksonville, FL, USA) sensors were exposed to low concentrations of carbon monoxide (0-9 ppm) with environmental conditions, such as ambient humidity (15-75% relative humidity) and temperature (21-27 °C), varying within the indicated ranges. Partial Least Squares (PLS) models were built using calibration data, and the prediction error in external validation samples was evaluated during the two weeks following calibration. We found that on-demand operation produced a deformation of the sensor conductance patterns, which led to an increase in the prediction error by almost a factor of 5 as compared to continuous operation (2.2 versus 0.45 ppm). Applying a 10% duty-cycling operation with 10-min periods reduced this prediction error to a factor of 2 (0.9 versus 0.45 ppm). The proposed duty-cycling powering scheme saved up to 90% energy as compared to the continuous operating mode. This low-power mode may be advantageous for applications that do not require continuous and periodic measurements, and which can tolerate slightly higher prediction errors.

  13. InSAR time series analysis of ALOS-2 ScanSAR data and its implications for NISAR

    NASA Astrophysics Data System (ADS)

    Liang, C.; Liu, Z.; Fielding, E. J.; Huang, M. H.; Burgmann, R.

    2017-12-01

    JAXA's ALOS-2 mission was launched on May 24, 2014. It operates at L-band and can acquire data in multiple modes. ScanSAR is the main operational mode and has a 350 km swath, somewhat larger than the 250 km swath of the SweepSAR mode planned for the NASA-ISRO SAR (NISAR) mission. ALOS-2 has been acquiring a wealth of L-band InSAR data. These data are of particular value in areas of dense vegetation and high relief. The InSAR technical development for ALOS-2 also enables preparation for the upcoming NISAR mission. We have been developing advanced InSAR processing techniques for ALOS-2 over the past two years. Here, we report important issues in InSAR time series analysis using ALOS-2 ScanSAR data. First, we present ionospheric correction techniques for both regular ScanSAR InSAR and MAI (multiple aperture InSAR) ScanSAR InSAR. We demonstrate the large-scale ionospheric signals in the ScanSAR interferograms; they can be well mitigated by the correction techniques. Second, based on our technical development of burst-by-burst InSAR processing for ALOS-2 ScanSAR data, we find that the azimuth Frequency Modulation (FM) rate error is an important issue not only for MAI, but also for regular InSAR time series analysis. We identify phase errors caused by azimuth FM rate errors during the focusing process of the ALOS-2 product. The consequence is mostly a range ramp in the InSAR time series result. This error exists in all of the time series results we have processed. We present the correction techniques for this error following a theoretical analysis. After corrections, we present high quality ALOS-2 ScanSAR InSAR time series results in a number of areas. The development for ALOS-2 can provide important implications for the NISAR mission. For example, we find that in most cases the relative azimuth shift caused by the ionosphere can be as large as 4 m in a large area imaged by ScanSAR. This azimuth shift is half of the 8 m azimuth resolution of the SweepSAR mode planned for NISAR, which implies that a good coregistration strategy for NISAR's SweepSAR mode is geometrical coregistration followed by MAI or spectral diversity analysis. Our development also provides implications for the processing and system parameter requirements of NISAR, such as the accuracy requirements of azimuth FM rate and range timing.
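
    The ionospheric corrections referred to here are commonly implemented with the split-spectrum method: the ionospheric phase scales as 1/f while the nondispersive phase scales as f, so interferograms formed from a low and a high range subband can be combined to isolate the dispersive part (the standard split-spectrum relation, stated for orientation; the abstract does not give the authors' exact formulation):

        \varphi_{\mathrm{iono}}(f_0) = \frac{f_L f_H}{f_0 (f_H^2 - f_L^2)} \left( \varphi_L f_H - \varphi_H f_L \right),

    where \varphi_L and \varphi_H are the unwrapped subband phases at center frequencies f_L and f_H, and f_0 is the full-band center frequency.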

  14. Concurrent remote entanglement with quantum error correction against photon losses

    NASA Astrophysics Data System (ADS)

    Roy, Ananda; Stone, A. Douglas; Jiang, Liang

    2016-09-01

    Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limit the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.

  15. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    ERIC Educational Resources Information Center

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…

  16. Radiation Tests on 2Gb NAND Flash Memories

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc N.; Guertin, Steven M.; Patterson, J. D.

    2006-01-01

    We report on SEE and TID tests of highly scaled Samsung 2 Gbit flash memories. Both in-situ and biased interval irradiations were used to characterize the response to total accumulated dose. The radiation-induced failures can be categorized as follows: single event upset (SEU) read errors in biased and unbiased modes, write errors, and single-event functional interrupt (SEFI) failures.

  17. Waffle mode error in the AEOS adaptive optics point-spread function

    NASA Astrophysics Data System (ADS)

    Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.

    2003-02-01

    Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5 m class ground-based telescopes. However, given sufficiently detailed study and analysis, existing AO systems can be improved beyond their originally specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance are already in existence. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the `waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially-illuminated WFS subapertures.
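
    Waffle is the checkerboard phase pattern that a Fried-geometry Shack-Hartmann reconstructor cannot sense, because every subaperture's corner-averaged slopes vanish on it. A small numpy check of that null-space property (a generic Fried-geometry slope model for illustration, unrelated to the AEOS reconstructor itself):

        import numpy as np

        n = 9                                      # actuators along one side (corners)
        i, j = np.indices((n, n))
        phase = (-1.0) ** (i + j)                  # checkerboard "waffle" phase

        # Fried geometry: each subaperture slope averages phase differences
        # across its four corner actuators.
        sx = 0.5 * (phase[:-1, 1:] + phase[1:, 1:] - phase[:-1, :-1] - phase[1:, :-1])
        sy = 0.5 * (phase[1:, :-1] + phase[1:, 1:] - phase[:-1, :-1] - phase[:-1, 1:])

        print(np.abs(sx).max(), np.abs(sy).max())  # both 0: waffle is invisible to the WFS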

  18. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

    Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation using the provided assumed mode shapes, rather than incorporating the usual finite element approach is provided.

  19. Multi-photon self-error-correction hyperentanglement distribution over arbitrary collective-noise channels

    NASA Astrophysics Data System (ADS)

    Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo

    2017-01-01

    We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.

  20. Determination of human insulin in dog plasma by a selective liquid chromatography-tandem mass spectrometry method: Application to a pharmacokinetic study.

    PubMed

    Dong, Shiqi; Zeng, Yong; Wei, Guangli; Si, Duanyun; Liu, Changxiao

    2018-03-01

    A simple, sensitive and selective LC-MS/MS method for quantitative analysis of human insulin was developed and validated in dog plasma. Insulin glargine was used as the internal standard. After a single solid-phase extraction step, the chromatographic separation of human insulin was achieved on an InertSustain Bio C18 column with a mobile phase of acetonitrile containing 1% formic acid (A) and water containing 1% formic acid (B). Detection was performed by positive ion electrospray ionization in multiple-reaction monitoring (MRM) mode. Good linearity was observed in the concentration range of 1-1000 μIU/mL (r² > 0.99), and the lower limit of quantification was 1 μIU/mL (equal to 38.46 pg/mL). The intra- and inter-day precision (expressed as relative standard deviation, RSD) of human insulin was ≤12.1% and ≤13.0%, respectively, and the accuracy (expressed as relative error, RE) was in the range of -7.23% to 11.9%. The recovery and matrix effect were both within acceptable limits. This method was successfully applied to a pharmacokinetic study of human insulin in dogs after subcutaneous administration. Copyright © 2018 Elsevier B.V. All rights reserved.
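
    For reference, the two figures of merit quoted above are simple statistics over replicate measurements; the sketch below uses invented replicate values at a hypothetical 50 μIU/mL QC level.

    ```python
    import numpy as np

    # Hypothetical QC replicates at a nominal 50 uIU/mL level (invented numbers).
    nominal = 50.0
    measured = np.array([47.8, 52.1, 49.5, 51.0, 48.6, 50.9])

    rsd = 100.0 * measured.std(ddof=1) / measured.mean()   # precision, %
    re = 100.0 * (measured.mean() - nominal) / nominal     # relative error, %
    print(f"RSD = {rsd:.1f}%, RE = {re:.2f}%")
    ```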

  1. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    NASA Astrophysics Data System (ADS)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we obtain the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both ℓ ≤ 2 and ℓ ≤ 3 harmonic modes. Using the ℓ ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
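
    The core of such a method, interpolating the marginalized log likelihood across a discrete set of NR simulations and exponentiating to obtain a posterior, can be sketched in one dimension. The chirp-mass grid and ln L values below are invented, and the quadratic fit is only a leading-order stand-in for the interpolation actually used.

    ```python
    import numpy as np

    # Invented marginalized log likelihoods ln L at a handful of NR simulation
    # chirp masses (illustrative numbers only, no real GW data).
    mc_sim = np.array([28.0, 29.0, 30.0, 31.0, 32.0])   # chirp mass [Msun]
    lnL_sim = np.array([80.0, 92.0, 96.0, 93.0, 83.0])

    # Fit ln L with a quadratic (leading-order shape near the peak), then
    # exponentiate against a flat prior to obtain a normalized 1D posterior.
    coeff = np.polyfit(mc_sim, lnL_sim, 2)
    mc = np.linspace(28.0, 32.0, 401)
    dmc = mc[1] - mc[0]
    lnL = np.polyval(coeff, mc)
    post = np.exp(lnL - lnL.max())
    post /= post.sum() * dmc

    mean = (mc * post).sum() * dmc
    sigma = np.sqrt(((mc - mean) ** 2 * post).sum() * dmc)
    print(f"Mc = {mean:.2f} +/- {sigma:.2f} Msun")
    ```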

  2. Bandpass mismatch error for satellite CMB experiments I: estimating the spurious signal

    NASA Astrophysics Data System (ADS)

    Thuong Hoang, Duc; Patanchon, Guillaume; Bucher, Martin; Matsumura, Tomotake; Banerji, Ranajoy; Ishino, Hirokazu; Hazumi, Masashi; Delabrouille, Jacques

    2017-12-01

    Future Cosmic Microwave Background (CMB) satellite missions aim to use the B mode polarization to measure the tensor-to-scalar ratio r with a sensitivity σ_r ≲ 10⁻³. Achieving this goal will not only require sufficient detector array sensitivity but also unprecedented control of all systematic errors inherent in CMB polarization measurements. Since polarization measurements derive from differences between observations at different times and from different sensors, detector response mismatches introduce leakages from intensity to polarization and thus lead to a spurious B mode signal. Because the expected primordial B mode polarization signal is dwarfed by the known unpolarized intensity signal, such leakages could contribute substantially to the final error budget for measuring r. Using simulations we estimate the magnitude and angular spectrum of the spurious B mode signal resulting from bandpass mismatch between different detectors. It is assumed here that the detectors are calibrated, for example using the CMB dipole, so that their sensitivity to the primordial CMB signal has been perfectly matched. Consequently the mismatch in the frequency bandpass shape between detectors introduces differences in the relative calibration of galactic emission components. We simulate this effect using a range of scanning patterns being considered for future satellite missions. We find that the spurious contribution to r from the reionization bump on large angular scales (ℓ < 10) is ≈ 10⁻³ assuming large detector arrays and 20 percent of the sky masked. We show how the amplitude of the leakage depends on the nonuniformity of the angular coverage in each pixel that results from the scan pattern.
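
    A toy pixel-level illustration of the leakage mechanism (not the paper's map-based simulation): two detectors calibrated perfectly on the CMB, but with slightly different effective gains for a dust-like foreground, leave an intensity residual in their difference, and that residual is what contaminates the polarization solution. The 2% gain mismatch and the component amplitudes are assumptions.

    ```python
    import numpy as np

    # Toy model: detectors a and b are perfectly calibrated on the CMB,
    # but their bandpass shapes weight a dust-like foreground differently.
    rng = np.random.default_rng(1)
    npix = 10_000
    cmb = rng.normal(0.0, 70.0, npix)          # uK, illustrative amplitude
    dust_I = rng.normal(0.0, 30.0, npix)       # dust intensity, uK equivalent

    g_dust_a, g_dust_b = 1.00, 1.02            # assumed 2% effective gain mismatch
    det_a = cmb + g_dust_a * dust_I
    det_b = cmb + g_dust_b * dust_I

    # Pair differencing is meant to cancel intensity; the bandpass mismatch
    # leaves a spurious residual that leaks into the polarization solution.
    leak = 0.5 * (det_a - det_b)
    print("rms intensity-to-polarization leakage: %.2f uK" % leak.std())
    # about 1% of the dust rms here, i.e. comparable to a primordial B-mode target
    ```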

  3. [A delayed motor production of open chains of linear strokes presented visually in static and dynamic modes: a comparison between 9 to 11 years old children and adults].

    PubMed

    Antonova, A A; Absatova, K A; Korneev, A A; Kurgansky, A V

    2015-01-01

    The production of drawing movements was studied in 29 right-handed children aged 9 to 11. The movements were sequences of horizontal and vertical linear strokes conjoined at right angles (open polygonal chains), referred to throughout the paper as trajectories. The length of a trajectory varied from 4 to 6 strokes. The trajectories were presented visually to each subject in static (line drawing) and dynamic (moving cursor that leaves no trace) modes. The subjects were asked to draw (copy) a trajectory in response to a delayed go-signal (short click) as fast as possible without lifting the pen. The production latency time, the average movement duration along a trajectory segment, and the overall number of errors committed by a subject during trajectory production were analyzed. A comparison of the children's data with similar data from adults (16 subjects) shows the following. First, a substantial reduction in error rate is observed in the age range between 9 and 11 years for both static and dynamic modes of trajectory presentation, with children of 11 still committing more errors than adults. Second, the average movement duration shortens with age while the latency time tends to increase. Third, unlike the adults, the children of 9-11 do not show any difference in latency time between static and dynamic modes of visual presentation of trajectories. The difference in trajectory production between adults and children is attributed to the predominant involvement of on-line programming in children and pre-programming in adults.

  4. Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning

    PubMed Central

    Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred

    2017-01-01

    Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809

  5. Reduced-Rank Array Modes of the California Current Observing System

    NASA Astrophysics Data System (ADS)

    Moore, Andrew M.; Arango, Hernan G.; Edwards, Christopher A.

    2018-01-01

    The information content of the ocean observing array spanning the U.S. west coast is explored using the reduced-rank array modes (RAMs) derived from a four-dimensional variational (4D-Var) data assimilation system covering a period of three decades. RAMs are an extension of the original formulation of array modes introduced by Bennett (1985) but in the reduced model state-space explored by the 4D-Var system, and reveal the extent to which this space is activated by the observations. The projection of the RAMs onto the empirical orthogonal functions (EOFs) of the 4D-Var background error correlation matrix provides a quantitative measure of the effectiveness of the measurements in observing the circulation. It is found that much of the space spanned by the background error covariance is unconstrained by the present ocean observing system. The RAM spectrum is also used to introduce a new criterion to prevent 4D-Var from overfitting the model to the observations.

  6. Blade tip clearance measurement of the turbine engines based on a multi-mode fiber coupled laser ranging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Haotian; Duan, Fajie; Wu, Guoxiu

    2014-11-15

    The blade tip clearance is a parameter of great importance to guarantee the efficiency and safety of turbine engines. In this article, a laser ranging system designed for blade tip clearance measurement is presented. Multi-mode fiber is utilized for optical transmission to guarantee that enough optical power is received by the sensor probe. The model of the tiny sensor probe is presented. The error introduced by the optical path difference between different modes of the fiber is estimated, and the length of the fiber is limited to reduce this error. The measurement range in which the optical power received by the probe remains essentially unchanged is analyzed. Calibration experiments and dynamic experiments are conducted. The results of the calibration experiments indicate that the resolution of the system is about 0.02 mm and the range of the system is about 9 mm.
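
    The fiber-length limit mentioned above follows from ray-optics intermodal dispersion in a step-index multimode fiber: the steepest guided ray accumulates roughly NA²/(2n₁) of extra optical path per unit length. The sketch below uses assumed fiber parameters, and the resulting spread is an upper scale for the ranging error rather than the error itself.

    ```python
    # Ray-optics estimate of intermodal dispersion in a step-index multimode fiber.
    c = 3.0e8        # speed of light, m/s
    n1 = 1.46        # core index (assumed)
    NA = 0.22        # numerical aperture (assumed)
    L = 2.0          # fiber length in metres (assumed)

    # Extra optical path of the steepest guided ray relative to the axial ray:
    dL_opt = L * NA**2 / (2 * n1)          # metres of optical path
    dt = dL_opt / c                        # temporal pulse spread
    print(f"optical path spread: {dL_opt*1e3:.1f} mm, pulse spread: {dt*1e12:.0f} ps")
    # The ranging error is some detection-dependent fraction of this spread,
    # which is why the abstract limits the fiber length.
    ```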

  7. Phonons in two-dimensional soft colloidal crystals.

    PubMed

    Chen, Ke; Still, Tim; Schoenholz, Samuel; Aptowicz, Kevin B; Schindler, Michael; Maggs, A C; Liu, Andrea J; Yodh, A G

    2013-08-01

    The vibrational modes of pristine and polycrystalline monolayer colloidal crystals composed of thermosensitive microgel particles are measured using video microscopy and covariance matrix analysis. At low frequencies, the Debye relation for two-dimensional harmonic crystals is observed in both crystal types; at higher frequencies, evidence for van Hove singularities in the phonon density of states is significantly smeared out by experimental noise and measurement statistics. The effects of these errors are analyzed using numerical simulations. We introduce methods to correct for these limitations, which can be applied to disordered systems as well as crystalline ones, and we show that application of the error correction procedure to the experimental data leads to more pronounced van Hove singularities in the pristine crystal. Finally, quasilocalized low-frequency modes in polycrystalline two-dimensional colloidal crystals are identified and demonstrated to correlate with structural defects such as dislocations, suggesting that quasilocalized low-frequency phonon modes may be used to identify local regions vulnerable to rearrangements in crystalline as well as amorphous solids.
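
    Covariance matrix analysis of this kind can be sketched on a toy harmonic chain: in thermal equilibrium the displacement covariance C satisfies K = k_B·T·C⁻¹, so mode frequencies follow from the spectrum of the inverted covariance. The chain size, spring constant, and frame count below are assumptions; the gap between estimated and true frequencies is exactly the finite-sampling error the paper's correction targets.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1D harmonic chain with fixed ends, sampled in thermal equilibrium.
    N, k, kBT, nframes = 10, 1.0, 1.0, 20000
    # Stiffness (dynamical) matrix of the chain with spring constant k:
    K = 2 * k * np.eye(N) - k * (np.eye(N, k=1) + np.eye(N, k=-1))

    # Sample displacements from the Boltzmann distribution ~ exp(-u^T K u / 2kBT).
    C_true = kBT * np.linalg.inv(K)
    u = rng.multivariate_normal(np.zeros(N), C_true, size=nframes)

    # Covariance-matrix analysis: K_est = kBT * C^-1, modes from its spectrum.
    C = np.cov(u.T)
    K_est = kBT * np.linalg.inv(C)
    w_est = np.sqrt(np.sort(np.linalg.eigvalsh(K_est)))   # frequencies (mass = 1)
    w_true = np.sqrt(np.sort(np.linalg.eigvalsh(K)))
    print(np.c_[w_true[:3], w_est[:3]])   # lowest modes agree to sampling error
    ```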

  8. Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.

    PubMed

    Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues

    2012-09-01

    The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in neural network supervised learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and states along the trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Human errors and occupational injuries of older female workers in the residential healthcare facilities for the elderly.

    PubMed

    Kim, Jun Sik; Jeong, Byung Yong

    2018-05-03

    The study aimed to describe the characteristics of occupational injuries among female workers in residential healthcare facilities for the elderly, and to analyze human errors as causes of accidents. From the national industrial accident compensation data, 506 female injuries were analyzed by age and occupation. The results showed that medical service workers were the most prevalent (54.1%), followed by social welfare workers (20.4%). Among the injured, 55.7% had <1 year of work experience, and 37.9% were ≥60 years old. Slips/falls were the most common type of accident (42.7%), and the proportion injured by slips/falls increases with age. Among human errors, action errors were the primary cause, followed by perception errors and cognition errors. Moreover, the proportions of injuries caused by perception errors and by action errors each increase with age. The findings of this study suggest that there is a need to design workplaces that accommodate the characteristics of older female workers.

  10. Spelling in adolescents with dyslexia: errors and modes of assessment.

    PubMed

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.

  11. Reconstruction of primordial tensor power spectra from B -mode polarization of the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Takashi; Komatsu, Eiichiro; Hazumi, Masashi; Sasaki, Misao

    2018-06-01

    Given observations of the B-mode polarization power spectrum of the cosmic microwave background (CMB), we can reconstruct power spectra of primordial tensor modes from the early Universe without assuming their functional form such as a power-law spectrum. The shape of the reconstructed spectra can then be used to probe the origin of tensor modes in a model-independent manner. We use the Fisher matrix to calculate the covariance matrix of tensor power spectra reconstructed in bins. We find that the power spectra are best reconstructed at wave numbers in the vicinity of k ≈ 6×10⁻⁴ and 5×10⁻³ Mpc⁻¹, which correspond to the "reionization bump" at ℓ ≲ 6 and "recombination bump" at ℓ ≈ 80 of the CMB B-mode power spectrum, respectively. The error bar between these two wave numbers is larger because of the lack of signal between the reionization and recombination bumps. The error bars increase sharply toward smaller (larger) wave numbers because of the cosmic variance (CMB lensing and instrumental noise). To demonstrate the utility of the reconstructed power spectra, we investigate whether we can distinguish between various sources of tensor modes including those from the vacuum metric fluctuation and SU(2) gauge fields during single-field slow-roll inflation, open inflation, and massive gravity inflation. The results depend on the model parameters, but we find that future CMB experiments are sensitive to differences in these models. We make our calculation tool available online.

  12. [Risk and risk management in aviation].

    PubMed

    Müller, Manfred

    2004-10-01

    RISK MANAGEMENT: The large proportion of human error in aviation accidents suggested the solution--at first sight brilliant--of replacing the fallible human being with an "infallible" digitally-operating computer. However, even after the introduction of the so-called HITEC airplanes, human error still accounts for 75% of all accidents. Thus, if the computer is ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always "highly error-prone", support and control have to be guaranteed by a second person. The independent work of two minds results in a safety network that cushions human errors more effectively. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, the open discussion of errors that have occurred must not be endangered by the threat of punishment. It has been shown in the past that progress is primarily achieved by investigating and following up mistakes, failures and catastrophes shortly after they happen. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: by far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: 1. A complication develops. 2. In this situation of increased stress a human error occurs. 3. The negative effects of the error cannot be corrected or eased because there are deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate has the effect of a "turbocharger" when a human error occurs. It needs to be pointed out that a negative social climate is not identical with a dispute. In many cases the working climate is burdened without the responsible person even noticing it: a first negative impression, too much or too little respect, contempt, misunderstandings, unvoiced concerns, etc. can considerably reduce the efficiency of a team.

  13. Generation of a crowned pinion tooth surface by a surface of revolution

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Zhang, J.; Handschuh, R. F.

    1988-01-01

    A method of generating crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc sec for the numerical examples). Tooth contact analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and determine the bearing contact.
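
    The reason a predesigned parabolic transmission error (TE) is robust to misalignment can be shown in a few lines: misalignment adds an approximately linear TE, and a parabola plus a linear term is still a parabola (complete the square), so the TE function keeps its shape with only a shifted apex. The coefficients below are invented.

    ```python
    import numpy as np

    # Predesigned parabolic transmission error (TE) absorbing a linear error.
    phi1 = np.linspace(-1.0, 1.0, 201)   # pinion rotation, arbitrary units
    a, b = 0.34, 0.02                    # assumed parabola coefficient, misalignment slope

    te_design = -a * phi1**2             # predesigned parabolic TE
    te_misalign = b * phi1               # (nearly) linear TE caused by misalignment
    total = te_design + te_misalign
    # Completing the square: total = -a*(phi1 - b/(2a))**2 + b**2/(4a),
    # i.e. the same parabola with a shifted apex, so no kinks are introduced.
    print("fitted quadratic coefficients:", np.round(np.polyfit(phi1, total, 2), 4))
    print("peak-to-peak TE, design vs total:",
          round(np.ptp(te_design), 4), round(np.ptp(total), 4))
    ```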

  14. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    PubMed

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  15. Relationship Between Locked Modes and Disruptions in the DIII-D Tokamak

    NASA Astrophysics Data System (ADS)

    Sweeney, Ryan

    This thesis is organized into three body chapters: (1) the first use of naturally rotating tearing modes to diagnose intrinsic error fields is presented with experimental results from the EXTRAP T2R reversed field pinch, (2) a large scale study of locked modes (LMs) with rotating precursors in the DIII-D tokamak is reported, and (3) an in depth study of LM induced thermal collapses on a few DIII-D discharges is presented. The amplitude of naturally rotating tearing modes (TMs) in EXTRAP T2R is modulated in the presence of a resonant field (given by the superposition of the resonant intrinsic error field, and, possibly, an applied, resonant magnetic perturbation (RMP)). By scanning the amplitude and phase of the RMP and observing the phase-dependent amplitude modulation of the resonant, naturally rotating TM, the corresponding resonant error field is diagnosed. A rotating TM can decelerate and lock in the laboratory frame, under the effect of an electromagnetic torque due to eddy currents induced in the wall. These locked modes often lead to a disruption, where energy and particles are lost from the equilibrium configuration on a timescale of a few to tens of milliseconds in the DIII-D tokamak. In fusion reactors, disruptions pose a problem for the longevity of the reactor. Thus, learning to predict and avoid them is important. A database was developed consisting of ~2000 DIII-D discharges exhibiting TMs that lock. The database was used to study the evolution, the nonlinear effects on equilibria, and the disruptivity of locked and quasi-stationary modes with poloidal and toroidal mode numbers m = 2 and n = 1 at DIII-D. The analysis of 22,500 discharges shows that more than 18% of disruptions present signs of locked or quasi-stationary modes with rotating precursors. A parameter formulated as the plasma internal inductance li divided by the safety factor at 95% of the toroidal flux, q95, is found to exhibit predictive capability over whether a locked mode will cause a disruption or not, and does so up to hundreds of milliseconds before the disruption. Within 20 ms of the disruption, the shortest distance between the island separatrix and the unperturbed last closed flux surface, referred to as d_edge, performs comparably to li/q95 in its ability to discriminate disruptive locked modes, and it also correlates well with the duration of the locked mode. On average, and within errors, the n = 1 perturbed field grows exponentially in the final 50 ms before a disruption; however, the island width cannot discern whether an LM will disrupt or not up to 20 ms before the disruption. A few discharges are selected to analyze the evolution of the electron temperature profile in the presence of multiple coexisting locked modes during partial and full thermal quenches. Partial thermal quenches are often an initial, distinct stage of the full thermal quench caused by radiation, conduction, or convection losses. Here we explore the fundamental mechanism that causes the partial quench. Near the onset of partial thermal quenches, locked islands are observed to align in a unique way, or island widths are observed to grow above a threshold. Energy analysis on one discharge suggests that about half of the energy is lost in the divertor region.
In discharges with minimum values of the safety factor above ~1.2, and with current profiles expected to be classically stable, locked modes are observed to self-stabilize by inducing a full thermal quench, possibly by double tearing modes that remove the pressure gradient across the island, thus removing the neoclassical drive.

  16. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches to human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included, and thus the analysis is burdened with incompleteness and does not reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitative estimation of the human error probability (HEP) and its related variables as they change over a long period. Taking the 2008 Minuteman III missile accident as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact that the variables have on risk in different time periods. It is indicated that both technical and organizational aspects should be addressed to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
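
    A minimal system-dynamics-style sketch of the idea, two interacting "stocks" integrated with Euler steps whose state drives a time-varying HEP, is given below; the feedback structure and every coefficient are invented for illustration and are not the paper's Minuteman III model.

    ```python
    import numpy as np

    # Minimal system-dynamics sketch: two interacting "stocks" that drive HEP.
    years = np.arange(0, 50, 0.1)
    dt = years[1] - years[0]

    training = 1.0      # normalized training/organizational quality (invented)
    equipment = 1.0     # normalized equipment condition (invented)
    hep = []
    for _ in years:
        # Invented feedback structure: equipment ages against maintenance;
        # degraded equipment raises workload, eroding effective training.
        equipment += dt * (-0.03 * equipment + 0.01)
        training += dt * (0.02 * (1.0 - training) - 0.02 * (1.0 - equipment))
        hep.append(1e-3 * np.exp(1.5 * (1.0 - training) + 1.0 * (1.0 - equipment)))

    print(f"HEP at year 0: {hep[0]:.2e}, at year 50: {hep[-1]:.2e}")
    ```

    The point of the SD formulation is visible even in this toy: HEP is not a fixed number but a trajectory shaped by organizational feedback loops.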

  17. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. Builders should consider adding modules for the recognition and classification of head movements to the robot's input channels. Evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  18. Multi-mode sliding mode control for precision linear stage based on fixed or floating stator.

    PubMed

    Fang, Jiwen; Long, Zhili; Wang, Michael Yu; Zhang, Lufan; Dai, Xufei

    2016-02-01

    This paper presents the control performance of a linear motion stage driven by a Voice Coil Motor (VCM). Unlike a conventional VCM, the stator of this VCM is adjustable: it can be configured as a floating stator or a fixed stator. A Multi-Mode Sliding Mode Control (MMSMC) scheme, comprising a conventional Sliding Mode Control (SMC) and an Integral Sliding Mode Control (ISMC), is designed to control the linear motion stage. The control is switched between SMC and ISMC based on an error threshold. To eliminate chattering, a smooth saturation function is adopted in place of the signum function. The experimental results with the floating stator show that the positioning accuracy and tracking performance of the linear motion stage are improved with the MMSMC approach.
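
    The switching logic described above can be sketched on a double-integrator stand-in for the stage; the sliding surfaces, gains, error threshold, and saturation boundary layer are invented values, not the paper's design.

    ```python
    import numpy as np

    def sat(s, width=0.05):
        """Smooth replacement for sign(s) to suppress chattering."""
        return np.clip(s / width, -1.0, 1.0)

    # Double-integrator stand-in for the VCM stage: x'' = u (normalized units).
    dt, T = 1e-4, 2.0
    x, v, ei = 1.0, 0.0, 0.0          # position error, velocity, error integral
    lam, k, ki, threshold = 20.0, 50.0, 100.0, 0.01

    for _ in range(int(T / dt)):
        if abs(x) > threshold:         # conventional SMC far from the target
            s = v + lam * x
            u = -lam * v - k * sat(s)
            ei = 0.0                   # hold the integrator off
        else:                          # integral SMC near the target
            ei += x * dt
            s = v + lam * x + ki * ei
            u = -lam * v - ki * x - k * sat(s)
        v += u * dt
        x += v * dt

    print(f"final position error: {x:.2e}")
    ```

    The sat() boundary layer removes the chattering a raw signum term would cause, at the cost of a small steady-state band that the integral term then closes.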

  19. Assessment of Oco-2 Target Mode Vulnerability Against Horizontal Variability of Surface Reflectivity Neglected in the Operational Forward Model

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Qu, Z.

    2014-12-01

    The main goal of NASA's OCO-2 mission is to perform XCO2 column measurements from space with an unprecedented (~1 ppm) precision and accuracy that will enable modelers to globally map CO2 sources and sinks. To achieve this goal, the mission is critically dependent on XCO2 product validation that, in turn, is highly dependent on successful use of OCO-2's "target mode" data acquisition. In target mode, OCO-2 rotates in such a way that, as long as it is above the horizon, it looks at a Total Carbon Column Observing Network (TCCON) station equipped with a powerful Fourier Transform spectrometer. TCCON stations measure, among other things, XCO2 by looking straight at the Sun. This translates to a far simpler forward model for TCCON than for OCO-2. In the ideal world, OCO-2's spectroscopic signals result from the cumulative gaseous absorption for one direct transmission of sunlight to the ground (like for TCCON), followed by one diffuse reflection and one direct transmission to the instrument, at a variety of viewing angles in target mode. In the real world, all manner of multiple surface reflections and/or scatterings contribute to the signal. In the idealized world of the OCO-2 operational forward model (used in nadir, glint, and target modes), the horizontal variability of the scattering atmosphere and reflecting surface is ignored, leading to the adoption of a 1D vector radiative transfer (vRT) model. This is the source of forward model error that we are investigating, with a focus on target mode. In principle, atmospheric variability in the horizontal plane--largely due to clouds--can be avoided by careful screening. Also, it is straightforward to account for angular variability of the surface reflection model in the 1D vRT framework. But it is not clear how unavoidable horizontal variations of the surface reflectivity affect the OCO-2 signal, even if the reflection were isotropic (Lambertian). To characterize this OCO-2 "adjacency" effect, we use a simple surface variability model with a single spatial frequency in each direction and a single albedo contrast at a time, for realistic aerosol and gaseous profiles. This specific 3D RT error is compared with other documented forward model errors and translated into XCO2 error in ppm, for programmatic consideration and eventual mitigation.

  20. Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2016-02-01

    The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc⁻¹, and can be even ~200 times larger at k ~ 5 Mpc⁻¹. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc⁻¹), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
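
    The ensemble estimate of such an error-covariance matrix is straightforward to sketch; the toy "simulations" below add a shared fluctuation across k bins to mimic the trispectrum-induced coupling, and all amplitudes are invented. For a purely Gaussian field the off-diagonal correlations would vanish on average.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy ensemble of power-spectrum estimates P(k_i) from nreal "simulations".
    nreal, nbins = 500, 8
    gauss = rng.normal(1.0, 0.05, (nreal, nbins))   # independent Gaussian part
    shared = rng.normal(0.0, 0.03, (nreal, 1))      # mimics trispectrum-like
    P = gauss + shared                              # coupling between bins

    dP = P - P.mean(axis=0)
    C = dP.T @ dP / (nreal - 1)        # error-covariance matrix C_ij

    corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
    print("diagonal errors:", np.sqrt(np.diag(C))[:3])
    print("typical off-diagonal correlation:", corr[0, 1])
    ```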

  1. SU-C-BRD-02: A Team Focused Clinical Implementation and Failure Mode and Effects Analysis of HDR Skin Brachytherapy Using Valencia and Leipzig Surface Applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayler, E; Harrison, A; Eldredge-Hindy, H

    Purpose: Valencia and Leipzig applicators (VLAs) are single-channel brachytherapy surface applicators used to treat skin lesions up to 2 cm in diameter. Source dwell times can be calculated and entered manually after clinical set-up or ultrasound. This procedure differs dramatically from CT-based planning; the novelty and unfamiliarity could lead to severe errors. To build layers of safety and ensure quality, a multidisciplinary team created a protocol and applied Failure Modes and Effects Analysis (FMEA) to the clinical procedure for HDR VLA skin treatments. Methods: A team including physicists, physicians, nurses, therapists, residents, and administration developed a clinical procedure for VLA treatment. The procedure was evaluated using FMEA. Failure modes were identified and scored by severity, occurrence, and detection. The clinical procedure was revised to address high-scoring process nodes. Results: Several key components were added to the clinical procedure to minimize risk priority numbers (RPN): (1) treatments are reviewed at weekly QA rounds, where physicians discuss diagnosis, prescription, applicator selection, and set-up; peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinical EMR system to standardize treatment instructions; this reduces the chances of miscommunication between the physician and planning physicist, and increases the detectability of an error during the physics second check. (3) A screen check was implemented during the second check to increase the detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display; this facilitates data entry and verification. (5) VLAs are color-coded and labeled to match the EMR prescriptions, which simplifies in-room selection and verification. Conclusion: Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures.
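
    The FMEA scoring behind "high-scoring process nodes" is typically the risk priority number RPN = severity × occurrence × detectability, each on a 1-10 scale; the failure modes and scores below are invented examples, not the study's actual worksheet.

    ```python
    # FMEA bookkeeping sketch: RPN = severity x occurrence x detectability,
    # each scored 1-10 (invented example failure modes and scores).
    failure_modes = [
        # (description,                                S, O, D)
        ("wrong applicator selected",                  9, 4, 5),
        ("dwell time entered incorrectly",             8, 3, 4),
        ("prescription miscommunicated to physicist",  9, 3, 6),
    ]

    for desc, s, o, d in sorted(failure_modes,
                                key=lambda m: m[1] * m[2] * m[3], reverse=True):
        print(f"RPN {s*o*d:4d}  {desc}")
    # Mitigations such as peer review or a screen check lower O or D,
    # and the table is re-scored until no node exceeds the action threshold.
    ```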

  2. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  3. To Err Is Human; To Structurally Prime from Errors Is Also Human

    ERIC Educational Resources Information Center

    Slevc, L. Robert; Ferreira, Victor S.

    2013-01-01

    Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…

  4. Human factors analysis and classification system-HFACS.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...

  5. The identification of the variation of atherosclerosis plaques by invasive and non-invasive methods

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.

    1982-01-01

    Computer-enhanced visualization of coronary arteries and lesions within them is discussed, comparing invasive and noninvasive methods. Trial design factors in computer lesions assessment are briefly discussed, and the use of the computer edge-tracking technique in that assessment is described. The results of a small pilot study conducted on serial cineangiograms of men with premature atherosclerosis are presented. A canine study to determine the feasibility of quantifying atherosclerosis from intravenous carotid angiograms is discussed. Comparative error for arterial and venous injection in the canines is determined, and the mode of processing the films to achieve better visualization is described. The application of the computer edge-tracking technique to an ultrasound image of the human carotid artery is also shown and briefly discussed.

  6. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.

  7. Creating a Test Validated Structural Dynamic Finite Element Model of the X-56A Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truong, Samson

    2014-01-01

    Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test-bed, X-56A aircraft, is the flight demonstration of active flutter suppression; therefore, in this study, the primary and secondary modes for structural model tuning are identified based on the flutter analysis of the X-56A aircraft. The ground vibration test-validated structural dynamic finite element model of the X-56A aircraft is created in this study. The structural dynamic finite element model of the X-56A aircraft is improved using a model tuning tool. In this study, two different weight configurations of the X-56A aircraft have been improved in a single optimization run. Frequency and the cross-orthogonality (mode shape) matrix were the primary focus for improvement, while other properties such as center of gravity location, total weight, and off-diagonal terms of the mass orthogonality matrix were used as constraints. The end result was an improved and more desirable structural dynamic finite element model configuration for the X-56A aircraft. Improved frequencies and mode shapes in this study increased average flutter speeds of the X-56A aircraft by 7.6% compared to the baseline model.

  8. Creating a Test-Validated Finite-Element Model of the X-56A Aircraft Structure

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truong, Samson

    2014-01-01

    Small modeling errors in a finite-element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the X-56A Multi-Utility Technology Testbed aircraft is the flight demonstration of active flutter suppression; therefore, in this study, the primary and secondary modes for structural model tuning are identified based on the flutter analysis of the X-56A aircraft. The ground-vibration test-validated structural dynamic finite-element model of the X-56A aircraft is created in this study. The structural dynamic finite-element model of the X-56A aircraft is improved using a model-tuning tool. In this study, two different weight configurations of the X-56A aircraft have been improved in a single optimization run. Frequency and the cross-orthogonality (mode shape) matrix were the primary focus for improvement, whereas other properties such as c.g. location, total weight, and off-diagonal terms of the mass orthogonality matrix were used as constraints. The end result was an improved structural dynamic finite-element model configuration for the X-56A aircraft. Improved frequencies and mode shapes in this study increased average flutter speeds of the X-56A aircraft by 7.6% compared to the baseline model.
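
    The cross-orthogonality check named in both abstracts can be sketched as XOR = Φ_test^T · M · Φ_FEM with mass-normalized mode shapes, where values near 1 on the diagonal and near 0 off it indicate test/model agreement; the 4-DOF matrices and noise level below are invented.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(3)

    # Invented 4-DOF example: mass and stiffness matrices for the FEM side.
    M = np.diag([1.0, 1.5, 2.0, 1.2])
    K = np.array([[ 3., -1.,  0.,  0.],
                  [-1.,  3., -1.,  0.],
                  [ 0., -1.,  3., -1.],
                  [ 0.,  0., -1.,  2.]])
    w2, phi_fem = eigh(K, M)           # mass-normalized: phi^T M phi = I

    # "Test" shapes: FEM shapes plus small measurement noise, re-normalized.
    phi_test = phi_fem + 0.02 * rng.normal(size=phi_fem.shape)
    phi_test /= np.sqrt(np.diag(phi_test.T @ M @ phi_test))

    xor = phi_test.T @ M @ phi_fem     # cross-orthogonality matrix
    print(np.round(np.abs(xor), 2))    # want ~1 on diagonal, ~0 off diagonal
    ```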

  9. Experimental validation of A-mode ultrasound acquisition system for computer assisted orthopaedic surgery

    NASA Astrophysics Data System (ADS)

    De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo

    2009-02-01

    Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Acquiring surface points increases the invasiveness of the intervention and can be influenced by the interposition of the soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing, and require less invasive equipment. The system consists of a single optically tracked US transducer probe, a pulser/receiver, an FPGA-based board responsible for logic control command generation and real-time signal processing, and three custom-made boards (signal acquisition, blanking, and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
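
    The A-mode measurement itself reduces to echo time-of-flight: depth = c·t/2. A minimal sketch with a synthetic RF line follows; the sound speed, sampling rate, and echo model are assumptions, and a Hilbert envelope would normally replace the plain absolute value used here.

    ```python
    import numpy as np

    # A-mode essentials: depth = speed_of_sound * time_of_flight / 2.
    c = 1540.0                  # m/s, assumed soft-tissue sound speed
    fs = 100e6                  # Hz, sampling rate of the RF line (assumed)

    # Synthetic RF line: noise plus one echo from a bone interface at 9 mm.
    t = np.arange(4096) / fs
    rng = np.random.default_rng(5)
    rf = 0.05 * rng.normal(size=t.size)
    t_echo = 2 * 9e-3 / c       # round-trip time for a 9 mm depth
    rf += np.exp(-((t - t_echo) / 0.2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

    # Depth estimate from the envelope peak.
    i_peak = np.abs(rf).argmax()
    depth_mm = c * t[i_peak] / 2 * 1e3
    print(f"estimated interface depth: {depth_mm:.2f} mm")
    ```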

  10. W-Band Circularly Polarized TE11 Mode Transducer

    NASA Astrophysics Data System (ADS)

    Zhan, Mingzhou; He, Wangdong; Wang, Lei

    2018-06-01

    This paper presents a balanced sidewall exciting approach to realize a circularly polarized TE11 mode transducer. We used a voltage vector transfer matrix to establish the relationship between input and output vectors, and then analyzed amplitude and phase errors to estimate the isolation of the degenerate modes. A mode transducer with a sidewall exciter was designed based on the results. In the 88-100 GHz frequency range, the simulated axial ratio is less than 1.05 and the isolation of the linearly polarized TE11 modes is higher than 30 dBc. In back-to-back measurements, the return loss is generally greater than 20 dB with a typical insertion loss of 1.2 dB. Back-to-back transmission measurements are in excellent agreement with simulations.

  11. W-Band Circularly Polarized TE11 Mode Transducer

    NASA Astrophysics Data System (ADS)

    Zhan, Mingzhou; He, Wangdong; Wang, Lei

    2018-04-01

    This paper presents a balanced sidewall exciting approach to realize a circularly polarized TE11 mode transducer. We used a voltage vector transfer matrix to establish the relationship between input and output vectors, and then analyzed amplitude and phase errors to estimate the isolation of the degenerate modes. A mode transducer with a sidewall exciter was designed based on the results. In the 88-100 GHz frequency range, the simulated axial ratio is less than 1.05 and the isolation of the linearly polarized TE11 modes is higher than 30 dBc. In back-to-back measurements, the return loss is generally greater than 20 dB with a typical insertion loss of 1.2 dB. Back-to-back transmission measurements are in excellent agreement with simulations.

  12. Fine-particle pH for Beijing winter haze as inferred from different thermodynamic equilibrium models

    NASA Astrophysics Data System (ADS)

    Song, Shaojie; Gao, Meng; Xu, Weiqi; Shao, Jingyuan; Shi, Guoliang; Wang, Shuxiao; Wang, Yuxuan; Sun, Yele; McElroy, Michael B.

    2018-05-01

    pH is an important property of aerosol particles but is difficult to measure directly. Several studies have estimated the pH values for fine particles in northern China winter haze using thermodynamic models (i.e., E-AIM and ISORROPIA) and ambient measurements. The reported pH values differ widely, ranging from close to 0 (highly acidic) to as high as 7 (neutral). In order to understand the reason for this discrepancy, we calculated pH values using these models with different assumptions with regard to model inputs and particle phase states. We find that the large discrepancy is due primarily to differences in the model assumptions adopted in previous studies. Calculations using only aerosol-phase composition as inputs (i.e., reverse mode) are sensitive to the measurement errors of ionic species, and inferred pH values exhibit a bimodal distribution, with peaks between -2 and 2 and between 7 and 10, depending on whether anions or cations are in excess. Calculations using total (gas plus aerosol phase) measurements as inputs (i.e., forward mode) are affected much less by these measurement errors. In future studies, the reverse mode should be avoided whereas the forward mode should be used. Forward-mode calculations in this and previous studies collectively indicate a moderately acidic condition (pH from about 4 to about 5) for fine particles in northern China winter haze, indicating further that ammonia plays an important role in determining this property. The assumed particle phase state, either stable (solid plus liquid) or metastable (only liquid), does not significantly impact pH predictions. The unrealistic pH values of about 7 in a few previous studies (using the standard ISORROPIA model and stable state assumption) resulted from coding errors in the model, which have been identified and fixed in this study.
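
    Why reverse-mode calculations are so fragile can be seen with a toy Monte Carlo that bypasses the thermodynamic models entirely: if H+ is inferred from the anion-cation charge balance, a few percent of measurement noise flips the sign of the inferred excess, splitting the pH estimates into acidic and (here crudely capped) alkaline branches. All loadings, the liquid water content, and the cap are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Invented ionic loadings (nmol per m^3 of air) and liquid water content.
    anions_true, cations_true = 500.0, 495.0   # true H+ excess = 5 nmol m^-3
    lwc_l_per_m3 = 1.0e-7                      # litres of water per m^3 (~100 ug m^-3)

    ph = []
    for _ in range(2000):
        # 2% measurement noise on each ion sum (reverse-mode inputs).
        an = anions_true * (1 + 0.02 * rng.normal())
        cat = cations_true * (1 + 0.02 * rng.normal())
        excess = (an - cat) * 1e-9             # mol m^-3 of inferred H+
        if excess > 0:
            ph.append(-np.log10(excess / lwc_l_per_m3))
        else:                                  # cation excess: model pushes pH high
            ph.append(10.0)                    # crude placeholder for that branch
    ph = np.array(ph)
    print("fraction on alkaline branch:", np.mean(ph == 10.0))
    print("median pH on acidic branch:", np.median(ph[ph < 10.0]))
    ```

    Forward-mode calculations avoid this trap because the gas-aerosol partitioning of ammonia, rather than a noisy ion residual, constrains the acidity.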

  13. Search for Long Period Solar Normal Modes in Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Caton, R.; Pavlis, G. L.

    2016-12-01

    We search for evidence of solar free oscillations (normal modes) in long period seismic data through multitaper spectral analysis of array stacks. This analysis is similar to that of Thomson & Vernon (2015), who used data from the quietest single stations of the global seismic network. Our approach is to use stacks of large arrays of noisier stations to reduce noise. Arrays have the added advantage of permitting the use of nonparametric statistics (jackknife errors) to provide objective error estimates. We used data from the Transportable Array, the broadband borehole array at Pinyon Flat, and the 3D broadband array in the Homestake Mine in Lead, SD. The Homestake Mine array has 15 STS-2 sensors deployed in the mine that are extremely quiet at long periods due to stable temperatures and stable piers anchored to hard rock. The length of the time series used ranged from 50 to 85 days. We processed the data by low-pass filtering with a corner frequency of 10 mHz, followed by an autoregressive prewhitening filter and a median stack. We elected to use the median instead of the mean in order to get a more robust stack. We then used G. Prieto's mtspec library to compute multitaper spectrum estimates on the data. We produce delete-one jackknife error estimates of the uncertainty at each frequency by computing median stacks of all data with one station removed. The results from the TA data show tentative evidence for several lines between 290 μHz and 400 μHz, including a recurring line near 379 μHz. This 379 μHz line is near the Earth mode 0T2 and the solar mode 5g5, suggesting that 5g5 could be coupling into the Earth mode. Current results suggest more statistically significant lines may be present in the Pinyon Flat data, but additional processing of the data is underway to confirm this observation.
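
    The stack-and-jackknife machinery can be sketched with toy station spectra; the station count, noise level, and the weak common line at one bin are invented, and a real analysis would work on prewhitened multitaper spectra rather than white noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Toy setup: nsta station power spectra sharing a weak line at bin 40.
    nsta, nfreq = 15, 128
    spectra = rng.normal(1.0, 0.3, (nsta, nfreq))
    spectra[:, 40] += 0.25                      # weak common spectral line

    stack = np.median(spectra, axis=0)          # robust array stack

    # Delete-one jackknife: recompute the stack with each station removed.
    loo = np.array([np.median(np.delete(spectra, i, axis=0), axis=0)
                    for i in range(nsta)])
    jk_mean = loo.mean(axis=0)
    jk_se = np.sqrt((nsta - 1) / nsta * ((loo - jk_mean) ** 2).sum(axis=0))

    snr = (stack - np.median(stack)) / jk_se
    print(f"bin 40 deviation: {snr[40]:.1f} jackknife standard errors")
    ```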

  14. Orthogonal control of the frequency comb dynamics of a mode-locked laser diode.

    PubMed

    Holman, Kevin W; Jones, David J; Ye, Jun; Ippen, Erich P

    2003-12-01

    We have performed detailed studies on the dynamics of a frequency comb produced by a mode-locked laser diode (MLLD). Orthogonal control of the pulse repetition rate and the pulse-to-pulse carrier-envelope phase slippage is achieved by appropriate combinations of the respective error signals to actuate the diode injection current and the saturable absorber bias voltage. Phase coherence is established between the MLLD at 1550 nm and a 775-nm mode-locked Ti:sapphire laser working as part of an optical atomic clock.

  15. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.

    PubMed

    Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio

    2018-06-01

    Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Feedback attitude sliding mode regulation control of spacecraft using arm motion

    NASA Astrophysics Data System (ADS)

    Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu

    2013-09-01

    The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions to the manipulator's motion tracking problem achieve only asymptotic stabilization performance, so these controllers cannot realize precise attitude regulation because of the existence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with zero transient process. Due to the switching effects of the variable structure controller, once the tracking error reaches the designed hyper-plane, it will be restricted to this plane permanently even in the presence of external disturbances. Thus, precise attitude regulation can be achieved. Furthermore, taking non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used to replace sign functions to smooth the control torques. The relations between the upper bounds of tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted in the end. The results show that the spacecraft's attitude can be regulated to the desired position by using the proposed algorithm; the steady state error is 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion, and improves the precision of spacecraft attitude regulation.

  17. Using integrated models to minimize environmentally induced wavefront error in optomechanical design and analysis

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on the fitting errors of the mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
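
    As a sanity check on this kind of post-processing, note that when a wavefront is expanded in orthonormal polynomials (e.g., Noll-normalized Zernikes), the RMS WFE is simply the root-sum-square of the mode coefficients. A tiny hedged illustration (this is not SigFit's API; the coefficients are invented):

    ```python
    import numpy as np

    # RMS wavefront error from orthonormal mode coefficients: for an
    # orthonormal basis, WFE_rms = sqrt(sum_i c_i^2).
    coeffs_nm = np.array([35.0, -12.0, 8.0, 4.0])   # illustrative coefficients, nm
    wfe_rms = np.sqrt(np.sum(coeffs_nm ** 2))
    print(f"system WFE = {wfe_rms:.1f} nm rms")
    ```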

  18. Bit-error rate for free-space adaptive optics laser communications.

    PubMed

    Tyson, Robert K

    2002-04-01

    An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
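
    The headline claim, that modest signal-quality gains translate into orders of magnitude in bit-error rate, follows from the standard Gaussian-noise relation BER = 0.5*erfc(Q/sqrt(2)) for on-off keying expressed through the usual Q-factor. A small sketch of that textbook relation (not the paper's full scintillation formulation):

    ```python
    from math import erfc, sqrt

    def ber_ook(q_factor):
        """Textbook bit-error rate for on-off keying with additive Gaussian
        noise, via the usual Q-factor: BER = 0.5*erfc(Q/sqrt(2))."""
        return 0.5 * erfc(q_factor / sqrt(2))

    # A modest Q improvement buys orders of magnitude in BER, which is the
    # kind of gain the paper attributes to adaptive optics compensation.
    for q in (3.0, 4.5, 6.0):
        print(f"Q = {q:3.1f}  ->  BER = {ber_ook(q):.2e}")
    ```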

  19. Performance of the Keck Observatory adaptive-optics system.

    PubMed

    van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A

    2004-10-10

    The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: The AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 μm when a bright guide star is used and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wave-front error based on our error budget estimates.
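
    The final sentence relies on the standard link between an rms wavefront-error budget and the delivered Strehl ratio, the extended Marechal approximation S ~ exp[-(2*pi*sigma/lambda)^2]. A hedged illustration with invented budget numbers at the paper's 1.58 μm imaging wavelength:

    ```python
    import numpy as np

    def strehl_marechal(wfe_rms_nm, wavelength_nm):
        """Extended Marechal approximation, S ~ exp(-(2*pi*sigma/lambda)^2):
        a standard way to relate an AO error budget (rms wavefront error)
        to the delivered Strehl ratio."""
        sigma = 2 * np.pi * wfe_rms_nm / wavelength_nm
        return np.exp(-sigma ** 2)

    # Illustrative error-budget values at 1580 nm:
    for wfe in (250, 350, 450):   # rms wavefront error in nm
        print(f"WFE = {wfe} nm rms -> Strehl ~ {strehl_marechal(wfe, 1580):.2f}")
    ```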

  20. Derivation of respiration rate from ambulatory ECG and PPG using Ensemble Empirical Mode Decomposition: Comparison and fusion.

    PubMed

    Orphanidou, Christina

    2017-02-01

    A new method for extracting the respiratory rate from ECG and PPG obtained via wearable sensors is presented. The proposed technique employs Ensemble Empirical Mode Decomposition in order to identify the respiration "mode" from the noise-corrupted Heart Rate Variability/Pulse Rate Variability and Amplitude Modulation signals extracted from ECG and PPG signals. The technique was validated with respect to a Respiratory Impedance Pneumography (RIP) signal using the mean absolute and the average relative errors for a group of ambulatory hospital patients. We compared approaches using single respiration-induced modulations on the ECG and PPG signals with approaches fusing the different modulations. Additionally, we investigated whether the presence of both the simultaneously recorded ECG and PPG signals provided a benefit in the overall system performance. Our method outperformed state-of-the-art ECG- and PPG-based algorithms and gave the best results over the whole database, with a mean error of 1.8 bpm for 1 min estimates when using the fused ECG modulations, corresponding to a relative error of 10.3%. No statistically significant differences were found when comparing the ECG-, PPG- and ECG/PPG-based approaches, indicating that the PPG can be used as a valid alternative to the ECG for applications using wearable sensors. While the presence of both the ECG and PPG signals did not provide an improvement in the estimation error, it increased the proportion of windows for which an estimate was obtained by at least 9%, indicating that the use of two simultaneously recorded signals might be desirable in high-acuity cases where an RR estimate is required more frequently. Copyright © 2016 Elsevier Ltd. All rights reserved.
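
    A minimal sketch of the EEMD-based mode selection on a synthetic HRV-like series, assuming the PyEMD package (pip install EMD-signal) and a simple dominant-frequency rule for picking the respiratory IMF; the real pipeline's preprocessing and fusion steps are omitted.

    ```python
    import numpy as np
    from PyEMD import EEMD   # pip install EMD-signal

    fs = 4.0                                   # Hz, a typical HRV resampling rate
    t = np.arange(0, 60, 1 / fs)
    # Toy surrogate for a noisy heart-rate-variability series carrying a
    # 0.25 Hz (15 breaths/min) respiratory modulation:
    hrv = np.sin(2 * np.pi * 0.25 * t) + 0.5 * np.random.randn(t.size)

    imfs = EEMD().eemd(hrv, t)                 # ensemble empirical mode decomposition

    def dominant_freq(x):
        spec = np.abs(np.fft.rfft(x))
        return np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spec)]

    # Keep IMFs whose dominant frequency lies in a plausible respiratory band
    # (0.1-0.5 Hz), a simple stand-in for the paper's mode identification.
    resp = [imf for imf in imfs if 0.1 <= dominant_freq(imf) <= 0.5]
    if resp:
        print(f"estimated respiratory rate ~ {dominant_freq(resp[0]) * 60:.1f} breaths/min")
    ```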

  1. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

    An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.
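
    The three error-inducing ingredients (perceptual quantization, random distraction, and reaction delay) can be sketched in a toy car-following loop. All gains, thresholds, and probabilities below are invented for illustration and are not the identified model parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, delay_steps = 0.1, 8                 # 0.8 s reaction delay (assumed)
    v, gap, min_gap = 25.0, 30.0, 30.0
    buf = [0.0] * delay_steps                # buffer implementing the time delay

    for k in range(600):                     # one minute of following
        v_lead = 25.0 + 2.0 * np.sin(0.05 * k * dt)        # lead-vehicle speed
        rel_v = v_lead - v
        perceived = np.round(rel_v / 0.5) * 0.5            # perceptual quantization
        distracted = rng.random() < 0.05                   # occasional distraction
        cmd = 0.0 if distracted else 0.3 * (gap - 25.0) + 0.8 * perceived
        buf.append(cmd)
        v += buf.pop(0) * dt                               # delayed response
        gap += (v_lead - v) * dt
        min_gap = min(min_gap, gap)
    print(f"minimum gap over the run: {min_gap:.1f} m "
          f"({'collision' if min_gap <= 0 else 'no collision'})")
    ```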

  2. Measurement of Branching Ratios for Non-leptonic Cabibbo-suppressed Decays of the Charmed-Strange Baryon Ξ c +

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vazquez Jauregui, Eric

    2008-08-01

    We studied several Ξc+ decay modes, most of them with a hyperon in the final state, and determined their branching ratios. The data used in this analysis come from the fixed-target experiment SELEX, a multi-stage spectrometer with high acceptance for forward interactions, which took data during 1996 and 1997 at Fermilab with 600 GeV/c (mainly Σ−, π−) and 540 GeV/c (mainly p) beams incident on copper and carbon targets. The thesis mainly details the first observation of two Cabibbo-suppressed decay modes, Ξc+ → Σ+π−π+ and Ξc+ → Σ−π+π+. The branching ratios of these decays are measured relative to the Cabibbo-favored Ξc+ → Ξ−π+π+; in particular, Γ(Ξc+ → Σ−π+π+)/Γ(Ξc+ → Ξ−π+π+) = 0.184 ± 0.086. Systematic studies were performed to check the stability of the measurements by varying all selection cuts over a wide interval; no trend is observed, so the systematic error is negligible in the final results and the quadrature sum of the total error is unaffected. The branching ratios for the same decay modes of the Λc+ are measured to check the methodology of the analysis. The branching ratio of the decay mode Λc+ → Σ+π−π+ is measured relative to Λc+ → pK−π+, while that of the decay mode Λc+ → Σ−π+π+ is relative to Λc+ → Σ+π−π+, as they have been reported earlier. The results for the control modes are: Γ(Λc+ → Σ+π−π+)/Γ(Λc+ → pK−π+) = 0.716 ± 0.144 and Γ(Λc+ → Σ−π+π+)/Γ(Λc+ → Σ+π−π+) = 0.382 ± 0.104. The branching ratio of the decay mode Ξc+ → pK−π+ relative to Ξc+ → Ξ−π+π+ is considered as another control mode; the measured value is Γ(Ξc+ → pK−π+)/Γ(Ξc+ → Ξ−π+π+) = 0.194 ± 0.054. Systematic studies were also performed for the control modes, and all systematic variations are small compared to the statistical error. We also report the first observation of two more decay modes, the Cabibbo-suppressed decay Ξc+ → Σ−K+π+ and the doubly Cabibbo-suppressed decay Ξc+ → Σ+K+π−, but their branching ratios have not been measured up to now.

  3. Lost in Translation: the Case for Integrated Testing

    NASA Technical Reports Server (NTRS)

    Young, Aaron

    2017-01-01

    The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances of critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during the integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation outlines the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discusses the HRA methods used to identify opportunities for human error, and reviews lessons learned and the challenges over ownership of testing.

  4. Selective excitation of LG 00, LG 01, and LG 02 modes by a solid core PCF based mode selector in MDM-Ro-FSO transmission systems

    NASA Astrophysics Data System (ADS)

    Chaudhary, Sushank; Amphawan, Angela

    2018-07-01

    Radio over free space (Ro-FSO) provides an ambitious platform for seamless integration of radio networks to optical networks. Three independent channels, each carrying 2.5 Gbps–5 GHz data, are successfully transmitted over a free space link of 2.5 km by using mode division multiplexing (MDM) of three modes LG 00, LG 01, and LG 02 modes in conjunction with solid core photonic crystal fibers (SC-PCFs). Moreover, SC-PCFs are used as a mode selector in the proposed MDM-Ro-FSO system. The results are reported in terms of bit error rate, mode spectrum, and spatial profiles. The performance of the proposed Ro-FSO system is also evaluated under the influence of atmospheric turbulence in the form of different levels of fog, namely, light fog, thin fog, and heavy fog.

  5. Human factors in aircraft incidents - Results of a 7-year study (Andre Allard Memorial Lecture)

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Reynard, W. D.

    1984-01-01

    It is pointed out that nearly all fatal aircraft accidents are preventable, and that most such accidents are due to human error. The present discussion is concerned with the results of a seven-year study of the data collected by the NASA Aviation Safety Reporting System (ASRS). The Aviation Safety Reporting System was designed to stimulate as large a flow as possible of information regarding errors and operational problems in the conduct of air operations. It was implemented in April, 1976. In the following 7.5 years, 35,000 reports have been received from pilots, controllers, and the armed forces. Human errors are found in more than 80 percent of these reports. Attention is given to the types of events reported, possible causal factors in incidents, the relationship of incidents and accidents, and sources of error in the data. ASRS reports include sufficient detail to permit authorities to institute changes in the national aviation system designed to minimize the likelihood of human error, and to insulate the system against the effects of errors.

  6. Human factors in surgery: from Three Mile Island to the operating room.

    PubMed

    D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco

    2009-01-01

    Human factors is a discipline that includes the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of a lack of skill or ability, the result of thoughtless actions. Moreover, the 'operating theatre' has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons will have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.

  7. Excitation of transverse dipole and quadrupole modes in a pure ion plasma in a linear Paul trap to study collective processes in intense beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.

    Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore the dependence of the perturbation voltage's effect on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.

  8. Dynamic performance of MEMS deformable mirrors for use in an active/adaptive two-photon microscope

    NASA Astrophysics Data System (ADS)

    Zhang, Christian C.; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.

    2016-03-01

    Active optics can facilitate two-photon microscopic imaging deep in tissue. We are investigating fast focus control mirrors used in concert with an aberration correction mirror to control the axial position of focus and system aberrations dynamically during scanning. With an adaptive training step, sample-induced aberrations may be compensated as well. If sufficiently fast and precise, active optics may be able to compensate under-corrected imaging optics as well as sample aberrations to maintain diffraction-limited performance throughout the field of view. Toward this end we have measured a Boston Micromachines Corporation Multi-DM 140 element deformable mirror, and a Revibro Optics electrostatic 4-zone focus control mirror to characterize dynamic performance. Tests for the Multi-DM included both step response and sinusoidal frequency sweeps of specific Zernike modes. For the step response we measured 10%-90% rise times for the target Zernike amplitude, and wavefront rms error settling times. Frequency sweeps identified the 3 dB bandwidth of the mirror when attempting to follow a sinusoidal amplitude trajectory for a specific Zernike mode. For five tested Zernike modes (defocus, spherical aberration, coma, astigmatism and trefoil) we find error settling times for mode amplitudes up to 400 nm to be less than 52 μs, and 3 dB frequencies range from 6.5 kHz to 10 kHz. The Revibro Optics mirror was tested for step response only, with an error settling time of 80 μs for a large 3 μm defocus step, and a settling time of only 18 μs for a 400 nm spherical aberration step. These response speeds are sufficient for intra-scan correction at scan rates typical of two-photon microscopy.
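
    The two step-response metrics used here are easy to compute from a sampled trace. A small sketch on a synthetic first-order step; the 10%-90% rise-time and tolerance-band settling-time definitions are standard, while the tolerance and time constant below are assumptions.

    ```python
    import numpy as np

    def rise_and_settle(t, y, y_final, settle_tol=0.05):
        """10%-90% rise time and settling time (the last instant the response
        is outside a +/- settle_tol band around the final value), the two
        figures of merit quoted above for the deformable-mirror steps."""
        t10 = t[np.argmax(y >= 0.1 * y_final)]
        t90 = t[np.argmax(y >= 0.9 * y_final)]
        outside = np.nonzero(np.abs(y - y_final) > settle_tol * abs(y_final))[0]
        t_settle = t[outside[-1]] if outside.size else t[0]
        return t90 - t10, t_settle

    # Synthetic first-order step (10 us time constant) standing in for a
    # measured Zernike-amplitude trace:
    t = np.linspace(0, 200e-6, 2000)
    y = 400.0 * (1 - np.exp(-t / 10e-6))        # mode amplitude in nm
    rise, settle = rise_and_settle(t, y, 400.0)
    print(f"rise = {rise * 1e6:.1f} us, settle = {settle * 1e6:.1f} us")
    ```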

  9. Random safety auditing, root cause analysis, failure mode and effects analysis.

    PubMed

    Ursprung, Robert; Gray, James

    2010-03-01

    Improving quality and safety in health care is a major concern for health care providers, the general public, and policy makers. Errors and quality issues are leading causes of morbidity and mortality across the health care industry. There is evidence that patients in the neonatal intensive care unit (NICU) are at high risk for serious medical errors. To facilitate compliance with safe practices, many institutions have established quality-assurance monitoring procedures. Three techniques that have been found useful in the health care setting are failure mode and effects analysis, root cause analysis, and random safety auditing. When used together, these techniques are effective tools for system analysis and redesign focused on providing safe delivery of care in the complex NICU system. Copyright 2010 Elsevier Inc. All rights reserved.

  10. Schrodinger's catapult II: entanglement between stationary and flying fields

    NASA Astrophysics Data System (ADS)

    Pfaff, W.; Axline, C.; Burkhart, L.; Vool, U.; Reinhold, P.; Frunzio, L.; Jiang, L.; Devoret, M.; Schoelkopf, R.

    Entanglement between nodes is an elementary resource in a quantum network. An important step towards its realization is entanglement between stationary and flying states. Here we experimentally demonstrate entanglement generation between a long-lived cavity memory and a traveling mode in circuit QED. A large on/off ratio and fast control over a parametric mixing process allow us to realize conversion with tunable magnitude and duration between a standing and a flying mode. In the case of half-conversion, we observe correlations between the standing and flying states that confirm the generation of entangled states. We show this for both single-photon and multi-photon states, paving the way for error-correctable remote entanglement. Our system could serve as an essential component in a modular architecture for error-protected quantum information processing.

  11. Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy.

    PubMed

    Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E

    2015-06-01

    Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge, its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten, and a risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that has been active for two and a half years. Differences between FMEA-anticipated failure modes and existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA, while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors. Using FMEA in combination with incident learning may provide an improved overview of risks within a process.
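
    For readers unfamiliar with the scoring, the RPN is simply the product of the three ten-point ratings. A toy illustration with invented radiotherapy failure modes and ratings:

    ```python
    # Risk priority number as used in FMEA: severity x occurrence x detectability,
    # each rated 1-10. All names and ratings below are invented for illustration.
    failure_modes = {
        "wrong CT dataset selected":       dict(severity=9, occurrence=2, detectability=4),
        "dose constraint entered wrongly": dict(severity=7, occurrence=4, detectability=6),
        "plan exported to wrong machine":  dict(severity=8, occurrence=1, detectability=2),
    }

    def rpn(r):
        return r["severity"] * r["occurrence"] * r["detectability"]

    # Rank the failure modes from highest to lowest risk:
    for name, r in sorted(failure_modes.items(), key=lambda kv: -rpn(kv[1])):
        print(f"RPN {rpn(r):4d}  {name}")
    ```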

  12. Characterization of near-stoichiometric Ti:LiNbO3 strip waveguides with varied substrate refractive index in the guiding layer.

    PubMed

    Zhang, De-Long; Zhang, Pei; Zhou, Hao-Jiang; Pun, Edwin Yue-Bun

    2008-10-01

    We have demonstrated that near-stoichiometric Ti:LiNbO3 strip waveguides can be fabricated by carrying out vapor transport equilibration at 1060 °C for 12 h on a congruent LiNbO3 substrate with photolithographically patterned 4-8 μm wide, 115 nm thick Ti strips. Optical characterizations show that these waveguides are single mode at 1.5 μm and exhibit a waveguide loss of 1.3 dB/cm for the TM mode and 1.1 dB/cm for the TE mode. In the width/depth direction of the waveguide, the mode field follows the Gauss/Hermite-Gauss function. Secondary-ion-mass spectrometry (SIMS) was used to study Ti-concentration profiles in the depth direction and on the surface of the 6 μm wide waveguide. The result shows that the Ti profile follows a sum of two error functions along the width direction and a complementary error function in the depth direction. The surface Ti concentration, 1/e width and depth, and mean diffusivities along the width and depth directions of the guide are approximately 3.0 × 10²¹ cm⁻³, 3.8 μm, 2.6 μm, and 0.30 and 0.14 μm²/h, respectively. Micro-Raman analysis was carried out on the waveguide endface to characterize the depth profile of the Li composition in the guiding layer. The results show that the depth profile of the Li composition also follows a complementary error function, with a 1/e depth of 3.64 μm. The mean ([Li(Li)]+[Ti(Li)])/([Nb(Nb)]+[Ti(Nb)]) ratio in the waveguide layer is about 0.98. The inhomogeneous Li-composition profile results in a varied substrate index in the guiding layer. A two-dimensional refractive index profile model of the waveguide is proposed by taking into consideration the varied substrate index and assuming linearity between the Ti-induced index change and the Ti concentration. The net waveguide surface index increments at 1545 nm are 0.0114 and 0.0212 for ordinary and extraordinary rays, respectively. Based upon the constructed index model, the fundamental mode field profile was calculated using the beam propagation method, and the mode sizes and effective indices versus the Ti-strip width were calculated for the three lowest TM and TE modes using the variational method. Agreement between theory and experiment is obtained.
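
    The reported concentration profile, a sum of two error functions across the width times a complementary error function in depth, is straightforward to evaluate. A schematic sketch using the paper's 6 μm guide dimensions, with the overall normalization treated loosely:

    ```python
    import numpy as np
    from scipy.special import erf, erfc

    def ti_concentration(x_um, z_um, w_um=6.0, wx_um=3.8, dz_um=2.6, c0=3.0e21):
        """Illustrative 2-D Ti concentration (cm^-3) of the form reported above:
        a sum of two error functions across the width of a strip of width w,
        times a complementary error function in depth. Parameter values echo
        the paper's 6-um guide; the normalization here is schematic."""
        lateral = 0.5 * (erf((w_um / 2 - x_um) / wx_um)
                         + erf((w_um / 2 + x_um) / wx_um))
        return c0 * lateral * erfc(z_um / dz_um)

    # Concentration at the strip centre, at the surface and one 1/e depth down:
    print(f"{ti_concentration(0.0, 0.0):.2e} cm^-3")
    print(f"{ti_concentration(0.0, 2.6):.2e} cm^-3")
    ```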

  13. The Human Factors Analysis and Classification System : HFACS : final report.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...

  14. Projections of Southern Hemisphere atmospheric circulation interannual variability

    NASA Astrophysics Data System (ADS)

    Grainger, Simon; Frederiksen, Carsten S.; Zheng, Xiaogu

    2017-02-01

    An analysis is made of the coherent patterns, or modes, of interannual variability of the Southern Hemisphere 500 hPa geopotential height field under current and projected climate change scenarios. Using three separate multi-model ensembles (MMEs) of Coupled Model Intercomparison Project Phase 5 (CMIP5) models, the interannual variability of the seasonal mean is separated into components related to (1) intraseasonal processes; (2) slowly-varying internal dynamics; and (3) the slowly-varying response to external changes in radiative forcing. In the CMIP5 RCP8.5 and RCP4.5 experiments, there is very little change in the twenty-first century in the intraseasonal component modes, related to the Southern annular mode (SAM) and mid-latitude wave processes. The leading three slowly-varying internal component modes are related to the SAM, the El Niño–Southern Oscillation (ENSO), and the South Pacific wave (SPW). Structural changes in the slow-internal SAM and ENSO modes do not exceed a qualitative estimate of the spatial sampling error, but there is a consistent increase in the ENSO-related variance. Changes in the SPW mode exceed the sampling error threshold, but cannot be further attributed. Changes in the dominant slowly-varying external mode are related to projected changes in radiative forcing. They reflect thermal expansion of the tropical troposphere and associated changes in the Hadley Cell circulation. Changes in the variance associated with the externally forced component in the RCP8.5 experiment are an order of magnitude greater than for the internal components, indicating that the SH seasonal mean circulation will be even more dominated by a SAM-like annular structure. Across the three MMEs, there is convergence in the projected response in the slow-external component.

  15. Differential reliance of chimpanzees and humans on automatic and deliberate control of motor actions.

    PubMed

    Kaneko, Takaaki; Tomonaga, Masaki

    2014-06-01

    Humans are often unaware of how they control their limb motor movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control and when automatic control shifts to deliberate control in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback regardless of whether the errors actually disrupted their reaching their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with their achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Distinct sets of locomotor modules control the speed and modes of human locomotion

    PubMed Central

    Yokoyama, Hikaru; Ogawa, Tetsuya; Kawashima, Noritaka; Shinya, Masahiro; Nakazawa, Kimitaka

    2016-01-01

    Although recent vertebrate studies have revealed that different spinal networks are recruited in locomotor mode- and speed-dependent manners, it is unknown whether humans share similar neural mechanisms. Here, we tested whether speed- and mode-dependence in the recruitment of human locomotor networks exists or not by statistically extracting locomotor networks. From electromyographic activity during walking and running over a wide speed range, locomotor modules generating basic patterns of muscle activities were extracted using non-negative matrix factorization. The results showed that the number of modules changed depending on the modes and speeds. Different combinations of modules were extracted during walking and running, and at different speeds even during the same locomotor mode. These results strongly suggest that, in humans, different spinal locomotor networks are recruited while walking and running, and even in the same locomotor mode different networks are probably recruited at different speeds. PMID:27805015
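
    A minimal sketch of the module-extraction step on synthetic EMG envelopes, using scikit-learn's NMF and the common variance-accounted-for (VAF) criterion for choosing the number of modules; the actual study's EMG processing and statistics are not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Rows = muscles, columns = time samples of rectified, low-pass-filtered
    # EMG envelopes. Synthetic data built from 4 hidden modules:
    rng = np.random.default_rng(1)
    true_W = rng.random((10, 4))                  # 10 muscles, 4 modules
    true_H = np.abs(np.sin(rng.random((4, 1)) + np.linspace(0, 8 * np.pi, 500)))
    emg = true_W @ true_H + 0.01 * rng.random((10, 500))

    for k in range(2, 7):                         # candidate numbers of modules
        model = NMF(n_components=k, init="nndsvda", max_iter=500)
        W = model.fit_transform(emg)              # module weightings (muscles)
        H = model.components_                     # activation patterns (time)
        vaf = 1 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)
        print(f"{k} modules: VAF = {vaf:.3f}")    # pick k where VAF saturates
    ```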

  17. Engineering evaluations and studies. Report for IUS studies

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Reviews, investigations, and analyses of the Inertial Upper Stage (IUS) Spacecraft Tracking and Data Network (STDN) transponder are summarized. Carrier lock detector performance for Tracking and Data Relay Satellite System (TDRSS) dual-mode operation is discussed, as is the problem of predicting the instantaneous frequency error in the carrier loop. Costas loop performance analysis is critiqued, and the static tracking phase error induced by thermal noise biases is discussed.

  18. Assessing Gaussian Assumption of PMU Measurement Error Using Field Data

    DOE PAGES

    Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...

    2017-10-13

    Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory-mode monitoring, and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
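
    The redundant-measurement idea can be illustrated simply: two PMUs metering the same quantity see the same true signal, so their difference isolates the combined measurement error, which can then be tested for normality. A toy sketch (signals and error models invented; this is not the letter's exact procedure):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    true_v = 1.0 + 0.01 * np.sin(np.linspace(0, 20, 3000))    # true magnitude (pu)
    pmu_a = true_v + 0.001 * rng.standard_normal(3000)        # Gaussian errors
    pmu_b = true_v + 0.001 * rng.standard_t(df=3, size=3000)  # heavy-tailed errors

    residual = pmu_a - pmu_b        # true signal cancels, errors remain
    stat, p = stats.normaltest(residual)
    print(f"D'Agostino K^2 p-value = {p:.3g} "
          f"({'consistent with' if p > 0.05 else 'rejects'} the Gaussian assumption)")
    ```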

  19. Influence of model errors in optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placement for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained from Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant correlation function and an exponential correlation function depending on the distance between sensors are first assumed; a formulation depending on both distance and modal vectors is then proposed. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the presence of model errors are tested on 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is provided through independent assessment criteria. Finally, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows better estimation of higher modes when the number of sensors exceeds the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.
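
    The exponential correlation model mentioned above amounts to a prediction-error covariance of the form Sigma_ij = sigma^2 * exp(-d_ij / lambda). A small sketch building that matrix for candidate sensor positions (all parameters illustrative):

    ```python
    import numpy as np

    def prediction_error_covariance(coords, sigma2=1.0, corr_length=2.0):
        """Covariance matrix of the prediction error for candidate sensor
        locations, with the exponential spatial correlation discussed above:
        Sigma_ij = sigma^2 * exp(-d_ij / lambda)."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        return sigma2 * np.exp(-d / corr_length)

    # Five candidate positions along a 20 m beam:
    coords = np.array([[0.0], [4.0], [8.0], [14.0], [20.0]])
    print(np.round(prediction_error_covariance(coords), 3))
    # A vanishing correlation length recovers the constant-diagonal case in
    # which prediction errors at different sensors are treated as independent.
    ```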

  20. Field-Line Localized Destabilization of Ballooning Modes in Three-Dimensional Tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willensdorfer, M.; Cote, T. B.; Hegna, C. C.

    2017-08-25

    Field-line localized ballooning modes have been observed at the edge of high confinement mode plasmas in ASDEX Upgrade with rotating 3D perturbations induced by an externally applied n = 2 error field and during a moderate level of edge localized mode mitigation. The observed ballooning modes are localized to the field lines which experience one of the two zero crossings of the radial flux surface displacement during one rotation period. The localization of the ballooning modes agrees very well with the localization of the largest growth rates from infinite-n ideal ballooning stability calculations using a realistic 3D ideal magnetohydrodynamic equilibrium. This analysis predicts a lower stability with respect to the axisymmetric case. The primary mechanism for the locally lower stability is the 3D distortion of the local magnetic shear.

  1. Robust manipulation of light using topologically protected plasmonic modes.

    PubMed

    Liu, Chenxu; Gurudev Dutt, M V; Pekker, David

    2018-02-05

    We propose using a topological plasmonic crystal structure composed of an array of nearly parallel nanowires with unequal spacing for manipulating light. In the paraxial approximation, the Helmholtz equation that describes the propagation of light along the nanowires maps onto the Schrödinger equation of the Su-Schrieffer-Heeger (SSH) model. Using a full three-dimensional finite difference time domain solution of the Maxwell equations, we verify the existence of topological defect modes, with sub-wavelength localization, bound to domain walls of the plasmonic crystal. We show that by manipulating domain walls we can construct spatial mode filters that couple bulk modes to topological defect modes, and topological beam-splitters that couple two topological defect modes. Finally, we show that the structures are tolerant to fabrication errors with an inverse length-scale smaller than the topological band gap.
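
    The SSH mapping can be made concrete with a few lines of tight-binding algebra: alternate two hopping strengths along a chain, flip the dimerization at a domain wall, and a mid-gap state appears localized at the wall. A minimal numerical sketch (parameters arbitrary):

    ```python
    import numpy as np

    # SSH chain with a domain wall: sites stand in for nanowires in the
    # paraxial mapping, and the mid-gap eigenstate bound to the wall is the
    # topological defect mode.
    N, t_strong, t_weak = 60, 1.0, 0.6
    hop = [t_strong if i % 2 == 0 else t_weak for i in range(N - 1)]
    for i in range(N // 2, N - 1):              # flip dimerization -> domain wall
        hop[i] = t_weak if i % 2 == 0 else t_strong

    H = np.zeros((N, N))
    for i, t in enumerate(hop):
        H[i, i + 1] = H[i + 1, i] = t
    vals, vecs = np.linalg.eigh(H)

    # Pick the in-gap state with the most weight near the wall (an open chain
    # can also host ordinary edge states, which we ignore here):
    near_wall = (vecs[N // 2 - 3 : N // 2 + 4, :] ** 2).sum(axis=0)
    idx = np.argmax(near_wall * (np.abs(vals) < 0.3))
    print(f"defect-mode energy ~ {vals[idx]:+.3e}, "
          f"peak site = {np.argmax(vecs[:, idx] ** 2)} (wall at {N // 2})")
    ```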

  2. On-chip broadband silicon thermo-optic 2×2 four-mode optical switch for optical space and local mode switching.

    PubMed

    Zhou, Ting; Jia, Hao; Ding, Jianfeng; Zhang, Lei; Fu, Xin; Yang, Lin

    2018-04-02

    We present a silicon thermo-optic 2×2 four-mode optical switch optimized for optical space switching plus local optical mode switching. Four asymmetric directional couplers are utilized for mode multiplexing and de-multiplexing. Sixteen 2×2 single-mode optical switches based on balanced thermally tunable Mach-Zehnder interferometers are exploited for the switching function. The measured insertion losses are 8.0~12.2 dB and the optical signal-to-noise ratios are larger than 11.2 dB in the wavelength range of 1525~1565 nm. The optical links in "all-bar" and "all-cross" states exhibit less than 2.0 dB and 1.4 dB power penalties, respectively, below 10⁻⁹ bit error rates for 40 Gbps data transmission.

  3. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an Intensive Care Unit, there are many possible sources of error, and one important source is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design model. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by human intellectual tasks.

  4. Effect of satellite formations and imaging modes on global albedo estimation

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.

    2016-05-01

    We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map: water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary 4-satellite formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging and permits continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m² or a 0.4% error in Earth's outgoing radiation. Estimation errors are found to be independent of the satellites' altitude and inclination, if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ in only right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne, ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down fourfold to 3%. More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy and commands uplinked once a day. The formations can be maintained at less than 1 m/s of monthly ΔV per satellite.

  5. Switching from reaching to navigation: differential cognitive strategies for spatial memory in children and adults.

    PubMed

    Belmonti, Vittorio; Cioni, Giovanni; Berthoz, Alain

    2015-07-01

    Navigational and reaching spaces are known to involve different cognitive strategies and brain networks, whose development in humans is still debated. In fact, high-level spatial processing, including allocentric location encoding, is already available to very young children, but navigational strategies are not mature until late childhood. The Magic Carpet (MC) is a new electronic device translating the traditional Corsi Block-tapping Test (CBT) to navigational space. In this study, the MC and the CBT were used to assess spatial memory for navigation and for reaching, respectively. Our hypothesis was that school-age children would not treat MC stimuli as navigational paths, assimilating them to reaching sequences. Ninety-one healthy children aged 6 to 11 years and 18 adults were enrolled. Overall short-term memory performance (span) on both tests, effects of sequence geometry, and error patterns according to a new classification were studied. Span increased with age on both tests, but relatively more in navigational than in reaching space, particularly in males. Sequence geometry specifically influenced navigation, not reaching. The number of body rotations along the path affected MC performance in children more than in adults, and in women more than in men. Error patterns indicated that navigational sequences were increasingly retained as global paths across development, in contrast to separately stored reaching locations. A sequence of spatial locations can be coded as a navigational path only if a cognitive switch from a reaching mode to a navigation mode occurs. This implies the integration of egocentric and allocentric reference frames, of visual and idiothetic cues, and access to long-term memory. This switch is not yet fulfilled at school age due to immature executive functions. © 2014 John Wiley & Sons Ltd.

  6. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.

  7. Overview of Initial NSTX-U Experimental Operations

    NASA Astrophysics Data System (ADS)

    Battaglia, Devon; the NSTX-U Team

    2016-10-01

    Initial operation of the National Spherical Torus Experiment Upgrade (NSTX-U) has satisfied a number of commissioning milestones, including demonstration of discharges that exceed the field and pulse length of NSTX. ELMy H-mode operation at the no-wall βN limit is obtained with boronized wall conditioning. Peak H-mode parameters include: Ip = 1 MA, BT0 = 0.63 T, WMHD = 330 kJ, βN = 4, βN/li = 6, κ = 2.3, τE,tot > 50 ms. Access to high-performance H-mode scenarios with long MHD-quiescent periods is enabled by the resilient timing of the L-H transition via feedback control of the diverting time and shape, and correction of the dominant n = 1 error fields during the Ip ramp. Stationary L-mode discharges have been realized up to 1 MA with 2 s discharges achieved at Ip = 650 kA. The long-pulse L-mode discharges enabled by the new central solenoid supported initial experiments on error field measurements and correction, plasma shape control, controlled discharge ramp-down, L-mode transport and fast ion physics. Increased off-axis current drive and reduction of fast ion instabilities have been observed with the new, more tangential neutral beamline. The initial results support that access to increased field, current and heating at low aspect ratio expands the regimes available to develop scenarios, diagnostics and predictive models that inform the design and optimization of future burning plasma tokamak devices, including ITER. Work Supported by U.S. DOE Contract No. DE-AC02-09CH11466.

  8. Investigating System Dependability Modeling Using AADL

    NASA Technical Reports Server (NTRS)

    Hall, Brendan; Driscoll, Kevin R.; Madl, Gabor

    2013-01-01

    This report describes Architecture Analysis & Design Language (AADL) models for a diverse set of fault-tolerant, embedded data networks and describes the methods and tools used to created these models. It also includes error models per the AADL Error Annex. Some networks were modeled using Error Detection Isolation Containment Types (EDICT). This report gives a brief description for each of the networks, a description of its modeling, the model itself, and evaluations of the tools used for creating the models. The methodology includes a naming convention that supports a systematic way to enumerate all of the potential failure modes.

  9. An Alternate Method for Estimating Dynamic Height from XBT Profiles Using Empirical Vertical Modes

    NASA Technical Reports Server (NTRS)

    Lagerloef, Gary S. E.

    1994-01-01

    A technique is presented that applies modal decomposition to estimate dynamic height (0-450 db) from Expendable BathyThermograph (XBT) temperature profiles. Salinity-Temperature-Depth (STD) data are used to establish empirical relationships between vertically integrated temperature profiles and empirical dynamic height modes. These are then applied to XBT data to estimate dynamic height. A standard error of 0.028 dynamic meters is obtained for the waters of the Gulf of Alaska, an ocean region subject to substantial freshwater buoyancy forcing and with a T-S relationship that has considerable scatter. The residual error is a substantial improvement relative to the conventional T-S correlation technique when applied to this region. Systematic errors between estimated and true dynamic height were evaluated. The 20-year-long time series at Ocean Station P (50 deg N, 145 deg W) indicated weak variations in the error interannually, but not seasonally. There were no evident systematic alongshore variations in the error in the ocean boundary current regime near the perimeter of the Alaska gyre. The results prove satisfactory for the purpose of this work, which is to generate dynamic height from XBT data for coanalysis with satellite altimeter data, given that the altimeter height precision is likewise on the order of 2-3 cm. While the technique has not been applied to other ocean regions where the T-S relation has less scatter, it is suggested that it could provide some improvement over previously applied methods there as well.
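
    The estimator's overall shape (empirical modes of dynamic height learned from STD casts, plus a linear map from integrated temperature to mode amplitudes) can be sketched on synthetic data. Everything below is schematic; the paper's actual mode definitions and fitting details differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_std, n_xbt, n_layers, n_modes = 150, 40, 10, 3

    # Synthetic "training" STD casts: layer-integrated temperatures and the
    # dynamic-height profiles they imply through some hidden physics.
    int_T = rng.standard_normal((n_std, n_layers))
    hidden = rng.standard_normal(n_layers)
    dyn_h = np.outer(int_T @ hidden, np.linspace(1, 0.2, 5)) \
            + 0.02 * rng.standard_normal((n_std, 5))     # 5 vertical points

    # Empirical modes of dynamic height via SVD, then a least-squares map
    # from integrated temperature to the leading mode amplitudes:
    mean_h = dyn_h.mean(axis=0)
    U, S, Vt = np.linalg.svd(dyn_h - mean_h, full_matrices=False)
    amps = U[:, :n_modes] * S[:n_modes]
    coef, *_ = np.linalg.lstsq(int_T, amps, rcond=None)

    # Apply to temperature-only XBT casts:
    xbt_T = rng.standard_normal((n_xbt, n_layers))
    est_h = mean_h + (xbt_T @ coef) @ Vt[:n_modes]
    print("estimated dynamic-height field shape:", est_h.shape)
    ```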

  10. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.

  11. Coal gasification system with a modulated on/off control system

    DOEpatents

    Fasching, George E.

    1984-01-01

    A modulated control system is provided for improving regulation of the bed level in a fixed-bed coal gasifier into which coal is fed from a rotary coal feeder. A nuclear bed level gauge using a cobalt source and an ion chamber detector is used to detect the coal bed level in the gasifier. The detector signal is compared to a bed level set point signal in a primary controller which operates in proportional/integral modes to produce an error signal. The error signal is modulated by the injection of a triangular wave signal of a frequency of about 0.0004 Hz and an amplitude of about 80% of the primary deadband. The modulated error signal is fed to a triple-deadband secondary controller which jogs the coal feeder speed up or down by on/off control of a feeder speed change driver such that the gasifier bed level is driven toward the set point while preventing excessive cycling (oscillation) common in on/off mode automatic controllers of this type. Regulation of the bed level is achieved without excessive feeder speed control jogging.
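
    The control logic lends itself to a compact simulation: a proportional/integral error signal, a slow triangular dither whose frequency (about 0.0004 Hz) and amplitude (80% of the primary deadband) come from the text, and a deadband stage that jogs the feeder speed up or down. The plant model, gains, and damping term below are invented for illustration.

    ```python
    def triangular(t, freq=0.0004, amplitude=0.8):   # 80% of the primary deadband
        phase = (t * freq) % 1.0
        return amplitude * (4.0 * abs(phase - 0.5) - 1.0)

    dt, setpoint, deadband = 1.0, 1.0, 0.05
    level, speed, integral = 0.6, 1.0, 0.0
    for k in range(20000):
        err = setpoint - level
        integral += err * dt
        # PI error signal plus a speed-feedback term for damping (assumed):
        pi_out = 0.5 * err + 1e-4 * integral - 2.0 * (speed - 1.0)
        dithered = pi_out + deadband * triangular(k * dt)   # modulated error
        if dithered > deadband:
            speed += 0.0005          # jog feeder speed up
        elif dithered < -deadband:
            speed -= 0.0005          # jog feeder speed down; otherwise hold
        level += (speed - 1.0) * 0.01 * dt                  # toy bed response
    print(f"final bed level = {level:.3f} (setpoint = {setpoint})")
    ```

    The dither sweeps the error signal back and forth across the deadband edges, so the controller nudges the feeder in short bursts instead of latching into the sustained on/off cycling common in plain deadband control.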

  12. Spontaneous default mode network phase-locking moderates performance perceptions under stereotype threat

    PubMed Central

    Leitner, Jordan B.; Duran-Jordan, Kelly; Magerman, Adam B.; Schmader, Toni; Allen, John J. B.

    2015-01-01

    This study assessed whether individual differences in self-oriented neural processing were associated with performance perceptions of minority students under stereotype threat. Resting electroencephalographic activity recorded in white and minority participants was used to predict later estimates of task errors and self-doubt on a presumed measure of intelligence. We assessed spontaneous phase-locking between dipole sources in left lateral parietal cortex (LPC), precuneus/posterior cingulate cortex (P/PCC), and medial prefrontal cortex (MPFC); three regions of the default mode network (DMN) that are integral for self-oriented processing. Results revealed that minorities with greater LPC-P/PCC phase-locking in the theta band reported more accurate error estimations. All individuals experienced less self-doubt to the extent they exhibited greater LPC-MPFC phase-locking in the alpha band but this effect was driven by minorities. Minorities also reported more self-doubt to the extent they overestimated errors. Findings reveal novel neural moderators of stereotype threat effects on subjective experience. Spontaneous synchronization between DMN regions may play a role in anticipatory coping mechanisms that buffer individuals from stereotype threat. PMID:25398433
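
    The phase-locking measure used in such studies is typically the phase-locking value: band-pass both signals, extract instantaneous phases with the Hilbert transform, and average the unit phasors of the phase difference. A toy sketch in the theta band (the study's source-space estimation is not reproduced):

    ```python
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def phase_locking(x, y, fs, band):
        """Phase-locking value between two signals in a frequency band."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        px = np.angle(hilbert(filtfilt(b, a, x)))
        py = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (px - py))))

    # Two noisy channels sharing a 6 Hz (theta-band) component:
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(3)
    common = np.sin(2 * np.pi * 6 * t)
    sig_a = common + 0.5 * rng.standard_normal(t.size)
    sig_b = common + 0.5 * rng.standard_normal(t.size)
    print(f"theta PLV ~ {phase_locking(sig_a, sig_b, fs, (4, 8)):.2f}")
    ```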

  13. Peptidic β-sheet binding with Congo Red allows both reduction of error variance and signal amplification for immunoassays.

    PubMed

    Wang, Yunyun; Liu, Ye; Deng, Xinli; Cong, Yulong; Jiang, Xingyu

    2016-12-15

    Although conventional enzyme-linked immunosorbent assays (ELISA) and related assays have been widely applied for the diagnosis of diseases, many of them suffer from large error variance when monitoring the concentration of targets over time and from an insufficient limit of detection (LOD) for assaying dilute targets. We herein report a readout mode of ELISA based on the binding between the peptidic β-sheet structure and Congo Red. The formation of the peptidic β-sheet structure is triggered by alkaline phosphatase (ALP). For the detection of P-Selectin, a crucial indicator for evaluating thrombus diseases in the clinic, the 'β-sheet and Congo Red' mode significantly decreases both the error variance and the LOD of detection (from 9.7 ng/ml to 1.1 ng/ml), compared with commercial ELISA (an existing gold-standard method for detecting P-Selectin in the clinic). Considering the wide range of ALP-based antibodies for immunoassays, this novel method could be applicable to the analysis of many types of targets. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A forward error correction technique using a high-speed, high-rate single chip codec

    NASA Astrophysics Data System (ADS)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low, since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10⁻⁵ bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.

  15. An FMEA evaluation of intensity modulated radiation therapy dose delivery failures at tolerance criteria levels.

    PubMed

    Faught, Jacqueline Tonigan; Balter, Peter A; Johnson, Jennifer L; Kry, Stephen F; Court, Laurence E; Stingo, Francesco C; Followill, David S

    2017-11-01

    The objective of this work was to assess both the perception of failure modes in Intensity Modulated Radiation Therapy (IMRT) when the linac is operated at the edge of the tolerances given in AAPM TG-40 (Kutcher et al.) and TG-142 (Klein et al.), and the application of FMEA to this specific section of the IMRT process. An online survey was distributed to approximately 2000 physicists worldwide who participate in quality services provided by the Imaging and Radiation Oncology Core - Houston (IROC-H). The survey briefly described eleven different failure modes covered by basic quality assurance in step-and-shoot IMRT at or near the TG-40 (Kutcher et al.) and TG-142 (Klein et al.) tolerance criteria levels. Respondents were asked to estimate the worst-case-scenario percent dose error that could be caused by each of these failure modes in a head and neck patient, as well as the FMEA scores: occurrence, detectability, and severity. Risk priority number (RPN) scores were calculated as the product of these scores. Demographic data were also collected. A total of 181 individual and three group responses were submitted; 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondent medical physics experience ranged from 2.5 to 45 yr (average 18 yr). A total of 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems, and linear accelerator manufacturers were represented. All failure modes received widely varying scores, ranging from 1 to 10 for occurrence, at least 1-9 for detectability, and at least 1-7 for severity. Ranking failure modes by RPN scores also resulted in large variability, with each failure mode being ranked both most risky (1st) and least risky (11th) by different respondents. On average, MLC modeling had the highest RPN scores. Individual estimated percent dose errors and severity scores positively correlated (P < 0.01) for each failure mode, as expected. No universal correlations were found between the demographic information collected and the scoring, percent dose errors or ranking. The failure modes investigated were overall evaluated as low to medium risk, with average RPNs less than 110. The ranking of the 11 failure modes was not agreed upon by the community. Large variability in FMEA scoring may be caused by individual interpretation and/or experience, reflecting the subjective nature of the FMEA tool. © 2017 American Association of Physicists in Medicine.

  16. Modeling human tracking error in several different anti-tank systems

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1981-01-01

    An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.

  17. [Using some modern mathematical models of postmortem cooling of the human body for the time of death determination].

    PubMed

    Vavilov, A Iu; Viter, V I

    2007-01-01

    Mathematical aspects of the data errors of modern thermometric models of postmortem cooling of the human body are considered. The main diagnostic areas used for thermometry are analyzed with the aim of minimizing these errors. The authors propose practical recommendations for decreasing the errors in determining the time since death.

  18. The Hinton train disaster.

    PubMed

    Smiley, A M

    1990-10-01

    In February of 1986 a head-on collision occurred between a freight train and a passenger train in western Canada killing 23 people and causing over $30 million of damage. A Commission of Inquiry appointed by the Canadian government concluded that human error was the major reason for the collision. This report discusses the factors contributing to the human error: mainly poor work-rest schedules, the monotonous nature of the train driving task, insufficient information about train movements, and the inadequate backup systems in case of human error.

  19. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  20. The link evaluation terminal for the advanced communications technology satellite experiments program

    NASA Technical Reports Server (NTRS)

    May, Brian D.

    1992-01-01

    The experimental NASA satellite, Advanced Communications Technology Satellite (ACTS), introduces new technology for high-throughput 30/20 GHz satellite services. Contained in a single communications payload are both a regenerative TDMA system and multiple 800 MHz 'bent pipe' channels routed to spot beams by a switch matrix. While only one mode of operation is typical during any experiment, both modes can operate simultaneously with reduced capability due to sharing of the transponder. NASA-Lewis instituted a ground terminal development program in anticipation of the satellite launch to verify the performance of the switch matrix mode of operation. Specific functions are built into the ground terminal to evaluate rain fade compensation with uplink power control and to monitor satellite transponder performance with bit error rate measurements. These functions were the genesis of the ground terminal's name, Link Evaluation Terminal, often referred to as LET. Connectors are included in LET that allow independent experimenters to run unique modulation or network experiments through ACTS using only the RF transmit and receive portions of LET. Test data indicate that LET will be able to verify important parts of ACTS technology and provide independent experimenters with a useful ground terminal. Lab measurements of major subsystems integrated into LET are presented. Bit error rate is measured with LET in an internal loopback mode.

  1. Dual-modal three-dimensional imaging of single cells with isometric high resolution using an optical projection tomography microscope

    NASA Astrophysics Data System (ADS)

    Miao, Qin; Rahn, J. Richard; Tourovskaia, Anna; Meyer, Michael G.; Neumann, Thomas; Nelson, Alan C.; Seibel, Eric J.

    2009-11-01

    The practice of clinical cytology relies on bright-field microscopy using absorption dyes like hematoxylin and eosin in the transmission mode, while the practice of research microscopy relies on fluorescence microscopy in the epi-illumination mode. The optical projection tomography microscope is an optical microscope that can generate 3-D images of single cells with isometric high resolution in both absorption and fluorescence modes. Although the depth of field of the microscope objective is in the submicron range, it can be extended by scanning the objective's focal plane. The extended-depth-of-field image is similar to a projection in conventional x-ray computed tomography. Cells suspended in optical gel flow through a custom-designed microcapillary. Multiple pseudoprojection images are taken by rotating the microcapillary. After these pseudoprojection images are aligned, computed tomography methods are applied to create a 3-D reconstruction. 3-D reconstructed images of single cells are shown in both absorption and fluorescence modes. Fluorescence spatial resolution is measured at 0.35 μm in both axial and lateral dimensions. Since fluorescence and absorption images are taken in two different rotations, mechanical error may cause misalignment of the 3-D images. This mechanical error is estimated to be within the resolution of the system.

  2. Aerosol Extinction Profile Mapping with Lognormal Distribution Based on MPL Data

    NASA Astrophysics Data System (ADS)

    Lin, T. H.; Lee, T. T.; Chang, K. E.; Lien, W. H.; Liu, G. R.; Liu, C. Y.

    2017-12-01

    This study addresses the mapping of the aerosol vertical distribution profile by a mathematical function. Given the similarity in distribution pattern, the lognormal distribution is examined for mapping the aerosol extinction profile based on MPL (Micro Pulse LiDAR) in situ measurements. The variables of the lognormal distribution are the log mean (μ) and log standard deviation (σ), which are correlated with the parameters of aerosol optical depth (AOD) and planetary boundary layer height (PBLH) associated with the altitude of the extinction peak (Mode) defined in this study. On the basis of 10 years of MPL data with a single peak, the mapping results showed that the mean errors of the Mode and σ retrievals are 16.1% and 25.3%, respectively. The mean error of the σ retrieval can be reduced to 16.5% for cases with a larger distance between PBLH and Mode. The proposed method is further applied to the MODIS AOD product for mapping the extinction profile and retrieving PM2.5 from satellite observations. The results indicated good agreement between retrievals and ground measurements when aerosols under 525 meters are well mixed. The feasibility of the proposed method for satellite remote sensing is also supported by the case study. Keywords: Aerosol extinction profile, Lognormal distribution, MPL, Planetary boundary layer height (PBLH), Aerosol optical depth (AOD), Mode
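
    The mapping idea reduces to evaluating a lognormal shape, scaled by the column AOD, on an altitude grid. A sketch under stated assumptions (Python; the regression linking μ and σ to AOD, PBLH, and Mode is the paper's, and the parameter values here are purely illustrative):

        import numpy as np

        def lognormal_extinction(z_km, aod, mu, sigma):
            """Lognormal-shaped extinction profile scaled so the vertical
            integral equals the column AOD; the peak (Mode) lies at
            exp(mu - sigma**2)."""
            pdf = (np.exp(-(np.log(z_km) - mu) ** 2 / (2 * sigma ** 2))
                   / (z_km * sigma * np.sqrt(2 * np.pi)))
            return aod * pdf

        z = np.linspace(0.05, 10.0, 200)   # altitude grid, km
        profile = lognormal_extinction(z, aod=0.4, mu=0.0, sigma=0.5)
        mode_altitude = z[np.argmax(profile)]   # compare with retrieved Mode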

  3. Model validity and frequency band selection in operational modal analysis

    NASA Astrophysics Data System (ADS)

    Au, Siu-Kui

    2016-12-01

    Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios these questions can be addressed using empirical tools such as the singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and the maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.

  4. STAMP-Based HRA Considering Causality Within a Sociotechnical System: A Case of Minuteman III Missile Accident.

    PubMed

    Rong, Hao; Tian, Jin

    2015-05-01

    The study contributes to human reliability analysis (HRA) by proposing a method that focuses on human error causality within a sociotechnical system, illustrating its rationality and feasibility with a case study of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate for analyzing human error within a sociotechnical system. The system-theoretic accident model and processes (STAMP) was used to develop a universal framework for human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors relate to a broad view of sociotechnical systems and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations for both technical and managerial improvements to lower the risk of the accident are proposed. An interdisciplinary approach provides complementary support between system safety and human factors. The integrated method based on STAMP and the SD model contributes to HRA effectively. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.

  5. Experimental studies on the effect of automation on pilot situational awareness in the datalink ATC environment

    NASA Technical Reports Server (NTRS)

    Hahn, Edward C.; Hansman, R. J., Jr.

    1992-01-01

    An experiment to study how automation, when used in conjunction with datalink for the delivery of ATC clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming of datalinked clearances and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.

  6. Finite-time containment control of perturbed multi-agent systems based on sliding-mode control

    NASA Astrophysics Data System (ADS)

    Yu, Di; Ji, Xiang Yang

    2018-01-01

    Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with fast exponential and double exponential reaching laws, so as to make the followers move to small neighbourhoods of their desired locations and keep within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, the faster control scheme is developed by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.

  7. An Experimental Study of the Effects of Automation on Pilot Situational Awareness in the Datalink ATC Environment

    NASA Technical Reports Server (NTRS)

    Hahn, Edward C.; Hansman, R. John, Jr.

    1992-01-01

    An experiment to study how automation, when used in conjunction with datalink for the delivery of air traffic control (ATC) clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.

  8. Based on interval type-2 fuzzy-neural network direct adaptive sliding mode control for SISO nonlinear systems

    NASA Astrophysics Data System (ADS)

    Lin, Tsung-Chih

    2010-12-01

    In this paper, a novel direct adaptive interval type-2 fuzzy-neural tracking control scheme, equipped with a sliding mode and Lyapunov synthesis approach, is proposed to handle training data corrupted by noise or rule uncertainties for SISO nonlinear systems involving external disturbances. By employing adaptive fuzzy-neural control theory, update laws are derived for approximating the uncertain nonlinear dynamical system. In the meantime, the sliding mode control method and the Lyapunov stability criterion are incorporated into the adaptive fuzzy-neural control scheme such that the derived controller is robust with respect to unmodeled dynamics, external disturbances, and approximation errors. In comparison with conventional methods, the advocated approach not only guarantees closed-loop stability but also ensures that the output tracking error of the overall system converges to zero asymptotically without prior knowledge of the upper bound of the lumped uncertainty. Furthermore, the chattering effect of the control input is substantially reduced by the proposed technique. Finally, a simulation example is given to illustrate the performance of the proposed method.

  9. Single-Event Effect Performance of a Conductive-Bridge Memory EEPROM

    NASA Technical Reports Server (NTRS)

    Chen, Dakai; Wilcox, Edward; Berg, Melanie; Kim, Hak; Phan, Anthony; Figueiredo, Marco; Seidleck, Christina; LaBel, Kenneth

    2015-01-01

    We investigated the heavy ion single-event effect (SEE) susceptibility of the industry's first stand-alone memory based on conductive-bridge memory (CBRAM) technology. The device is available as an electrically erasable programmable read-only memory (EEPROM). We found that single-event functional interrupt (SEFI) is the dominant SEE type for each operational mode (standby, dynamic read, and dynamic write/read). SEFIs occurred even while the device was statically biased in standby mode. Worst-case SEFIs resulted in errors that filled the entire memory space. Power cycling did not always clear the errors, so the corrupted cells had to be reprogrammed in some cases. The device is also vulnerable to bit upsets during dynamic write/read tests, although the frequency of the upsets is relatively low. The linear energy transfer threshold for cell upset is between 10 and 20 MeV·cm^2/mg, with an upper-limit cross section of 1.6 × 10^-11 cm^2 per bit (95% confidence level) at 10 MeV·cm^2/mg. In standby mode, the CBRAM array appears invulnerable to bit upsets.
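
    The per-bit cross section quoted above is the standard ratio of observed upsets to delivered fluence times bit count. A toy calculation in that form (Python; the run values are hypothetical, only the formula matches how the reported 1.6 × 10^-11 cm^2/bit limit is defined):

        # sigma_bit = upsets / (fluence * bits), fluence in ions/cm^2.
        def cross_section_per_bit(upsets, fluence_per_cm2, n_bits):
            return upsets / (fluence_per_cm2 * n_bits)

        sigma = cross_section_per_bit(upsets=3, fluence_per_cm2=1e7,
                                      n_bits=4 * 2**20)  # hypothetical 4-Mbit part
        print(f"{sigma:.2e} cm^2/bit")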

  10. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  12. Software fault-tolerance by design diversity DEDIX: A tool for experiments

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
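
    The cross-check step DEDIX performs at prespecified points amounts to a vote over the versions' outputs. A minimal sketch of that decision (Python; DEDIX's actual decision algorithm is richer, this shows only simple majority masking):

        from collections import Counter

        def vote(results):
            """Majority vote over N independently developed versions;
            absence of a majority is a possible symptom of a common
            mode error."""
            value, count = Counter(results).most_common(1)[0]
            if count <= len(results) // 2:
                raise RuntimeError("no majority -- possible correlated fault")
            return value

        print(vote([42, 42, 41]))  # the single faulty output is masked -> 42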

  13. Limited Transfer of Newly Acquired Movement Patterns across Walking and Running in Humans

    PubMed Central

    Ogawa, Tetsuya; Kawashima, Noritaka; Ogata, Toru; Nakazawa, Kimitaka

    2012-01-01

    The two major modes of locomotion in humans, walking and running, may be regarded as functions of speed (walking as the slower, running as the faster). Recent results using motor learning tasks in humans, as well as more direct evidence from animal models, argue for independence of the neural control mechanisms underlying different locomotion tasks. In the current study, we investigated the possible independence of the neural mechanisms underlying human walking and running. Subjects were tested on a split-belt treadmill and adapted to walking or running on an asymmetrically driven treadmill surface. Despite the acquisition of asymmetrical movement patterns in the respective modes, the emergence of asymmetrical movement patterns in subsequent trials was evident only within the same mode (walking after learning to walk and running after learning to run) and only partial in the opposite mode (walking after learning to run and running after learning to walk); the adaptation thus transferred across the modes only to a limited extent. Further, the storage of the acquired movement pattern in each mode was maintained independently of the opposite mode. Combined, these results provide indirect evidence for independence of the neural control mechanisms underlying the two locomotive modes. PMID:23029490

  14. Concept of a Fast and Simple Atmospheric Radiative Transfer Model for Aerosol Retrieval

    NASA Astrophysics Data System (ADS)

    Seidel, Felix; Kokhanovsky, Alexander A.

    2010-05-01

    Radiative transfer modelling (RTM) is an indispensable tool for a number of applications, including astrophysics, climate studies, and quantitative remote sensing. It simulates the attenuation of light through a translucent medium. Here, we look at the scattering and absorption of solar light on its way to the Earth's surface and back to space or back into a remote sensing instrument. RTM is regularly used in the framework of so-called atmospheric correction to find properties of the surface. Further, RTM can be inverted to retrieve features of the atmosphere, such as the aerosol optical depth (AOD). Present-day RTM codes, such as 6S, MODTRAN, SHARM, RT3, SCIATRAN, and RTMOM, have errors of only a few percent; however, they are rather slow and often not easy to use. We present here a concept for a fast and simple RTM in the visible spectral range. It uses a blend of different existing RTM approaches with a special emphasis on fast approximative analytical equations and parametrizations. This concept may be helpful for efficient retrieval algorithms that do not have to rely on the classic look-up-table (LUT) approach. For example, it can be used to retrieve AOD without complex inversion procedures involving multiple iterations. Naturally, there is always a trade-off between speed and modelling accuracy, so the code can be run in two different modes. The regular mode provides a reasonable ratio between speed and accuracy, while the optional mode is very fast but less accurate. The normal mode approximates the diffuse scattered light by calculating the first (single scattering) and second orders of scattering according to the classical method of successive orders of scattering. The very fast mode calculates only the single-scattering approximation, which does not need any slow numerical integration procedure, and uses a simple correction factor to account for multiple scattering. This factor is a parametrization of MODTRAN results, which provide a typical ratio between singly and multiply scattered light. A comparison of the presented RTM concept to the widely accepted 6S RTM reveals errors of up to 10% in standard mode, which is acceptable for certain applications. The very fast mode may lead to errors of up to 30%, but it is still able to reproduce the results of 6S qualitatively. An experimental implementation of this RTM concept is written in the common IDL language. It is therefore very flexible and straightforward to implement in custom retrieval algorithms of the remote sensing community. The code might also be used to add an atmosphere on top of an existing vegetation-canopy or water RTM. Due to the ease of use of the RTM code and the comprehensibility of the internal equations, the concept might be useful for educational purposes as well. The very fast mode could be of interest for real-time applications, such as an in-flight instrument performance check for airborne optical sensors. In the future, the concept can be extended to account for scattering according to Mie theory, polarization, and gaseous absorption. It is expected that this would reduce the model error to 5% or less.
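
    The two modes differ only in how the diffuse light is approximated. A schematic contrast (Python; the function names and the correction value are ours, not the authors' code):

        def normal_mode(first_order, second_order):
            """Regular mode: first two orders of successive scattering."""
            return first_order + second_order

        def very_fast_mode(single_scattering, ms_factor=1.3):
            """Very fast mode: single scattering times a multiple-scattering
            correction factor parametrized from MODTRAN-style ratios; the
            default value here is illustrative only."""
            return single_scattering * ms_factor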

  15. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    NASA Astrophysics Data System (ADS)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
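
    The ringdown model being tested is a finite superposition of damped sinusoids. A sketch of such a waveform (Python; the mode parameters below are illustrative, and the power-law tail discussed in the paper is deliberately omitted):

        import numpy as np

        def ringdown(t, modes):
            """Sum of quasinormal modes, each given as
            (amplitude, angular frequency, damping time, phase)."""
            h = np.zeros_like(t)
            for a, omega, tau, phi in modes:
                h += a * np.exp(-t / tau) * np.cos(omega * t + phi)
            return h

        t = np.linspace(0.0, 0.05, 2000)
        h = ringdown(t, [(1.0, 1500.0, 4e-3, 0.0),     # fundamental
                         (0.4, 1400.0, 1.5e-3, 0.3)])  # first overtone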

  16. Active control of fan-generated plane wave noise

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Nuckolls, William E.; Santamaria, Odillyn L.; Martinson, Scott D.

    1993-01-01

    Subsonic propulsion systems for future aircraft may incorporate ultra-high bypass ratio ducted fan engines whose dominant noise source is the fan with blade passage frequency less than 1000 Hz. This low frequency combines with the requirement of a short nacelle to diminish the effectiveness of passive duct liners. Active noise control is seen as a viable method to augment the conventional passive treatments. An experiment to control ducted fan noise using a time domain active adaptive system is reported. The control sound source consists of loudspeakers arrayed around the fan duct. The error sensor location is in the fan duct. The purpose of this experiment is to demonstrate that the in-duct error sensor reduces the mode spillover in the far field, thereby increasing the efficiency of the control system. In this first series of tests, the fan is configured so that predominantly zero order circumferential waves are generated. The control system is found to reduce the blade passage frequency tone significantly in the acoustic far field when the mode orders of the noise source and of the control source are the same. The noise reduction is not as great when the mode orders are not the same even though the noise source modes are evanescent, but the control system converges stably and global noise reduction is demonstrated in the far field. Further experimentation is planned in which the performance of the system will be evaluated when higher order radial and spinning modes are generated.

  17. Determination of elastomeric foam parameters for simulations of complex loading.

    PubMed

    Petre, M T; Erdemir, A; Cavanagh, P R

    2006-08-01

    Finite element (FE) analysis has shown promise for the evaluation of elastomeric foam personal protection devices. Although appropriate representation of foam materials is necessary in order to obtain realistic simulation results, material definitions used in the literature vary widely and often fail to account for the multi-mode loading experienced by these devices. This study aims to provide a library of elastomeric foam material parameters that can be used in FE simulations of complex loading scenarios. Twelve foam materials used in footwear were tested in uni-axial compression, simple shear and volumetric compression. For each material, parameters for a common compressible hyperelastic material model used in FE analysis were determined using: (a) compression; (b) compression and shear data; and (c) data from all three tests. Material parameters and Drucker stability limits for the best fits are provided with their associated errors. The material model was able to reproduce deformation modes for which data was provided during parameter determination but was unable to predict behavior in other deformation modes. Simulation results were found to be highly dependent on the extent of the test data used to determine the parameters in the material definition. This finding calls into question the many published results of simulations of complex loading that use foam material parameters obtained from a single mode of testing. The library of foam parameters developed here presents associated errors in three deformation modes that should provide for a more informed selection of material parameters.

  18. Remote beating of parallel or orthogonally polarized dual-wavelength optical carriers for 5G millimeter-wave radio-over-fiber link.

    PubMed

    Wang, Huai-Yung; Chi, Yu-Chieh; Lin, Gong-Ru

    2016-08-08

    A novel millimeter-wave radio-over-fiber (MMW-RoF) link at a carrier frequency of 35 GHz is proposed, using remotely beat MMW generation from reference master and injected slave colorless laser diode (LD) carriers under orthogonally polarized dual-wavelength injection locking. The slave colorless LD supports lasing one of the dual-wavelength master modes with orthogonal polarization, which facilitates single-mode direct modulation of the quadrature amplitude modulation (QAM) orthogonal frequency division multiplexing (OFDM) data. Such injected single-carrier encoding and coupled dual-carrier transmission with orthogonal polarization effectively suppresses the cross-heterodyne mode-beating intensity noise and the nonlinear modulation (NLM) and four-wave mixing (FWM) sidemodes during injection locking and fiber transmission. In a 25-km single-mode fiber (SMF) based wireline system, the dual carrier under single-mode encoding provides baseband 24-Gbit/s 64-QAM OFDM transmission with an error vector magnitude (EVM) of 8.8%, a bit error rate (BER) of 3.7 × 10^-3, and a power penalty of <1.5 dB. After remote self-beating for wireless transmission, the beat MMW carrier at 35 GHz can deliver passband 16-QAM OFDM at 4 Gbit/s, with corresponding EVM and BER of 15.5% and 1.4 × 10^-3, respectively, after 25-km SMF and 1.6-m free-space transmission.
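
    The EVM figures quoted are RMS error-vector magnitudes over the QAM constellation. A generic computation (Python; normalization conventions vary across standards, RMS reference power is used here):

        import numpy as np

        def evm_percent(received, ideal):
            """RMS EVM in percent of received symbols vs. ideal QAM points."""
            received = np.asarray(received, dtype=complex)
            ideal = np.asarray(ideal, dtype=complex)
            err = np.mean(np.abs(received - ideal) ** 2)
            ref = np.mean(np.abs(ideal) ** 2)
            return 100.0 * np.sqrt(err / ref)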

  19. The cerebellum for jocks and nerds alike.

    PubMed

    Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J

    2014-01-01

    Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.

  20. The cerebellum for jocks and nerds alike

    PubMed Central

    Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.

    2014-01-01

    Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains. PMID:24987338

  1. Switchable in-line monitor for multi-dimensional multiplexed photonic integrated circuit.

    PubMed

    Chen, Guanyu; Yu, Yu; Ye, Mengyuan; Zhang, Xinliang

    2016-06-27

    A flexible monitor suitable for the discrimination of on-chip transmitted mode-division multiplexed (MDM) and wavelength-division multiplexed (WDM) signals is proposed and fabricated. By selectively extracting part of the incoming signals through a tunable wavelength- and mode-dependent drop filter, the in-line and switchable monitor can discriminate the wavelength, mode, and power information of the transmitted signals. Unlike a conventional mode and wavelength demultiplexer, the monitor is specifically designed to ensure flexible in-line monitoring. For demonstration, three-mode and three-wavelength multiplexed signals are successfully processed. Assisted by the integrated photodetectors (PDs), both the measured photocurrents and eye diagrams validate the performance of the proposed device. The bit error ratio (BER) measurement results show less than 0.4 dB power penalty between different modes and ~2 dB power penalty between the single-wavelength and WDM cases at the 10^-9 BER level.

  2. On-chip WDM mode-division multiplexing interconnection with optional demodulation function.

    PubMed

    Ye, Mengyuan; Yu, Yu; Chen, Guanyu; Luo, Yuchan; Zhang, Xinliang

    2015-12-14

    We propose and fabricate a wavelength-division-multiplexing (WDM) compatible and multi-functional mode-division-multiplexing (MDM) integrated circuit, which can perform mode conversion and multiplexing for the incoming multipath WDM signals, avoiding wavelength conflict. A phase-to-intensity demodulation function can optionally be applied within the circuit while performing the mode multiplexing. For demonstration, 4 × 10 Gb/s non-return-to-zero differential phase shift keying (NRZ-DPSK) signals are successfully processed, with open and clear eye diagrams. Measured bit error ratio (BER) results show less than 1 dB receive sensitivity variation for three modes and four wavelengths with demodulation. In the case without demodulation, the average power penalties at the 4 wavelengths are -1.5, -3, and -3.5 dB for the TE₀-TE₀, TE₀-TE₁, and TE₀-TE₂ mode conversions, respectively. The proposed flexible scheme can be used at the interface of long-haul and on-chip communication systems.

  3. Mode selecting switch using multimode interference for on-chip optical interconnects.

    PubMed

    Priti, Rubana B; Pishvai Bazargani, Hamed; Xiong, Yule; Liboiron-Ladouceur, Odile

    2017-10-15

    A novel mode selecting switch (MSS) is experimentally demonstrated for on-chip mode-division multiplexing (MDM) optical interconnects. The MSS consists of a Mach-Zehnder interferometer with tapered multi-mode interference couplers and TiN thermo-optic phase shifters for conversion and switching between the optical data encoded on the fundamental and first-order quasi-transverse electric (TE) modes. The C-band MSS exhibits a >25 dB switching extinction ratio and <-12 dB crosstalk. We validate the dynamic switching with a 25.8 kHz gating signal, measuring switching times of <10.9 μs for both the TE0 and TE1 modes. All channels exhibit less than 1.7 dB power penalty at a 10^-12 bit error rate while switching non-return-to-zero PRBS-31 data signals at 10 Gb/s.

  4. MO-G-BRE-09: Validating FMEA Against Incident Learning Data: A Study in Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F; Cao, N; Young, L

    2014-06-15

    Purpose: Though FMEA (Failure Mode and Effects Analysis) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge it has never been validated against actual incident learning data. The objective of this study was to perform an FMEA analysis of an SBRT (Stereotactic Body Radiation Therapy) treatment planning process and validate this against data recorded within an incident learning system. Methods: FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, and dosimetrists. Potential failure modes were identified through a systematic review of the workflow process. Failure modes were rated for severity, occurrence, and detectability on a scale of 1 to 10 and RPN (Risk Priority Number) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two years. Differences were identified. Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. FMEA failed to anticipate 13 of these events, among which 3 were registered with severity ratings of severe or critical in the incident learning system. Combining both methods yielded a total of 76 failure modes, and when scored for RPN the 13 events missed by FMEA ranked within the middle half of all failure modes. Conclusion: FMEA, though valuable, is subject to certain limitations, among them the limited ability to anticipate all potential errors for a given process. This FMEA exercise failed to identify a significant number of possible errors (17%). Integration of FMEA with retrospective incident data may be able to render an improved overview of risks within a process.

  5. Speed tracking and synchronization of multiple motors using ring coupling control and adaptive sliding mode control.

    PubMed

    Li, Le-Bao; Sun, Ling-Ling; Zhang, Sheng-Zhou; Yang, Qing-Quan

    2015-09-01

    A new control approach for speed tracking and synchronization of multiple motors is developed, by incorporating an adaptive sliding mode control (ASMC) technique into a ring coupling synchronization control structure. This control approach can stabilize speed tracking of each motor and synchronize its motion with other motors' motion so that speed tracking errors and synchronization errors converge to zero. Moreover, an adaptive law is exploited to estimate the unknown bound of uncertainty, which is obtained in the sense of Lyapunov stability theorem to minimize the control effort and attenuate chattering. Performance comparisons with parallel control, relative coupling control and conventional PI control are investigated on a four-motor synchronization control system. Extensive simulation results show the effectiveness of the proposed control scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
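
    In a ring coupling structure each motor is penalized both for its deviation from the reference speed and from its ring neighbour. A sketch of the two error signals (Python; the ASMC law, gains, and adaptive bound estimate are omitted):

        import numpy as np

        def ring_coupling_errors(speeds, reference):
            """Tracking errors e_i = w_i - w_ref and ring synchronization
            errors s_i = w_i - w_{i+1} (indices wrap around the ring)."""
            speeds = np.asarray(speeds, dtype=float)
            tracking = speeds - reference
            sync = speeds - np.roll(speeds, -1)
            return tracking, sync

        tracking, sync = ring_coupling_errors([99.2, 100.4, 100.1, 99.8], 100.0)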

  6. Parameter dependence of the MCNP electron transport in determining dose distributions.

    PubMed

    Reynaert, N; Palmans, H; Thierens, H; Jeraj, R

    2002-10-01

    In this paper, a detailed study of the electron transport in MCNP is performed, separating the effects of the energy binning technique on the energy loss rate, the scattering angles, and the sub-step length as a function of energy. As this problem is already well known, we focus here on explaining why the default mode of MCNP can lead to large deviations. The resolution dependence was investigated as well. An error in the MCNP code in the energy binning technique in the default mode (DBCN 18 card = 0) was revealed, more specifically in the updating of cross sections when a sub-step corresponding to a high energy loss is performed. This updating error is not present in the ITS mode (DBCN 18 card = 1) and leads to a systematically lower dose deposition rate in the default mode. The effect is present for all energies studied (0.5-10 MeV) and depends on the geometrical resolution of the scoring regions and the energy grid resolution. The effect of the energy binning technique is of the same order as that of the updating error for energies below 2 MeV, and becomes less important for higher energies. For a 1 MeV point source surrounded by homogeneous water, the deviation of the default MCNP results at short distances reaches 9% and remains approximately the same for all energies. This effect could be corrected by no longer completing an energy step each time an electron changes energy bins during a sub-step. Another solution consists of performing all calculations in the ITS mode. A further problem is the resolution dependence, which is present even in the ITS mode: the higher the chosen resolution (the smaller the scoring regions), the faster the energy is deposited along the electron track. It is shown that this is caused by starting a new energy step when crossing a surface. The resolution effect should be investigated for every specific case when calculating dose distributions around beta sources. The resolution should not be higher than 0.85*(1-EFAC)*CSDA, where EFAC is the energy loss per energy step and CSDA the continuous-slowing-down-approximation range. This effect could likewise be removed by determining the cross sections for energy loss and multiple scattering at the average energy of an energy step and by sampling the cross sections for each sub-step. Overall, we conclude that MCNP cannot be used without caution due to possible errors in the electron transport. When care is taken, it is possible to obtain correct results that are in agreement with other Monte Carlo codes.
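
    The recommended ceiling on scoring resolution is a one-line formula. A worked instance (Python; here EFAC is taken as the fraction of energy retained per step, matching the (1 - EFAC) form of the bound, and the CSDA range is a rough figure for a ~1 MeV electron in water, used only for illustration):

        # Scoring-region size should not exceed 0.85 * (1 - EFAC) * CSDA.
        def max_scoring_resolution(efac, csda_range_cm):
            return 0.85 * (1.0 - efac) * csda_range_cm

        print(f"{max_scoring_resolution(0.917, 0.44):.3f} cm")  # ~0.031 cm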

  7. Comparison of mode estimation methods and application in molecular clock analysis

    NASA Technical Reports Server (NTRS)

    Hedges, S. Blair; Shah, Prachi

    2003-01-01

    BACKGROUND: Distributions of time estimates in molecular clock studies are sometimes skewed or contain outliers. In those cases, the mode is a better estimator of the overall time of divergence than the mean or median. However, different methods are available for estimating the mode. We compared these methods in simulations to determine their strengths and weaknesses and further assessed their performance when applied to real data sets from a molecular clock study. RESULTS: We found that the half-range mode and robust parametric mode methods have a lower bias than other mode methods under a diversity of conditions. However, the half-range mode suffers from a relatively high variance and the robust parametric mode is more susceptible to bias by outliers. We determined that bootstrapping reduces the variance of both mode estimators. Application of the different methods to real data sets yielded results that were concordant with the simulations. CONCLUSION: Because the half-range mode is a simple and fast method, and produced less bias overall in our simulations, we recommend the bootstrapped version of it as a general-purpose mode estimator and suggest a bootstrap method for obtaining the standard error and 95% confidence interval of the mode.
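
    A compact version of the recommended estimator and its bootstrap (Python; this is one common variant of the half-range mode, and implementation details differ slightly across the literature):

        import numpy as np

        def half_range_mode(x):
            """Repeatedly keep the half-range window containing the most
            points; average the final <= 2 points."""
            x = np.sort(np.asarray(x, dtype=float))
            while x.size > 2:
                w = (x[-1] - x[0]) / 2.0
                if w == 0.0:
                    break
                counts = np.searchsorted(x, x + w, side="right") - np.arange(x.size)
                i = int(np.argmax(counts))
                x = x[i:i + counts[i]]
            return float(x.mean())

        def bootstrap_mode(x, n_boot=2000, seed=0):
            """Bootstrap standard error and 95% CI of the half-range mode."""
            rng = np.random.default_rng(seed)
            est = np.array([half_range_mode(rng.choice(x, len(x), replace=True))
                            for _ in range(n_boot)])
            return est.std(ddof=1), np.percentile(est, [2.5, 97.5])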

  8. Human Error as an Emergent Property of Action Selection and Task Place-Holding.

    PubMed

    Tamborello, Franklin P; Trafton, J Gregory

    2017-05-01

    A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types. With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people's error rates, as does the presence of other information to maintain in an active state. The model executed as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as a deliberate cognitive operation to aid retrospective memory, and decay. The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton's Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair's Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task. Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory. The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.

  9. Capillary bridge stability and dynamics: Active electrostatic stress control and acoustic radiation pressure

    NASA Astrophysics Data System (ADS)

    Wei, Wei

    2005-11-01

    In low gravity, the stability of liquid bridges and other systems having free surfaces is affected by the ambient vibration of the spacecraft. Such vibrations are expected to excite capillary modes. The lowest unstable mode of cylindrical liquid bridges, the (2,0) mode, is particularly sensitive to the vibration when the ratio of the bridge length to the diameter approaches pi. In this work, a Plateau tank has been used to simulate the weightless condition. An optical system has been used to detect the (2,0) mode oscillation amplitude and generate an error signal which is determined by the oscillation amplitude. This error signal is used by the feedback system to produce proper voltages on the electrodes which are concentric with the electrically conducting, grounded bridge. A mode-coupled electrostatic stress is thus generated on the surface of the bridge. The feedback system is designed such that the modal force applied by the Maxwell stress can be proportional to the modal amplitude or modal velocity, which is the derivative of the modal amplitude. Experiments done in the Plateau tank demonstrate that the damping of the capillary oscillation can be enhanced by using the electrostatic stress in proportion to the modal velocity. On the other hand, using the electrostatic stress in proportion to the modal amplitude can raise the natural frequency of the bridge oscillation. If a spacecraft vibration frequency is close to a capillary mode frequency, the amplitude gain can be used to shift the mode frequency away from that of the spacecraft and simultaneously add some artificial damping to further reduce the effect of g-jitter. It is found that the decay of a bridge (2,0) mode oscillation is well modeled by a Duffing equation with a small cubic soft-spring term. The nonlinearity of the bridge (3,0) mode is also studied. The experiments reveal the hysteresis of (3,0) mode bridge oscillations, and this behavior is a property of the soft nonlinearity of the bridge. Relevant to acoustical bridge stabilization, the theoretical radiation force on a compressible cylinder in an acoustic standing wave is also investigated.
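
    The decay model identified for the (2,0) mode is a damped Duffing oscillator with a weak soft-spring cubic term. A numerical sketch (Python; the coefficients are illustrative, not fitted values from the experiment):

        import numpy as np
        from scipy.integrate import solve_ivp

        # a'' + 2*g*a' + w0**2 * a - eps * a**3 = 0 (soft spring for eps > 0)
        g, w0, eps = 0.15, 2 * np.pi, 0.5

        def rhs(t, y):
            a, v = y
            return [v, -2 * g * v - w0**2 * a + eps * a**3]

        sol = solve_ivp(rhs, (0.0, 20.0), [0.3, 0.0], max_step=0.01)
        # sol.y[0]: the amplitude decays; larger amplitudes oscillate slightly
        # slower, the signature of the soft nonlinearity.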

  10. Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods

    NASA Astrophysics Data System (ADS)

    Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.

    2017-04-01

    In this paper we propose a new method for removing noise and physiological artifacts from human EEG recordings, based on empirical mode decomposition (the Hilbert-Huang transform). As physiological artifacts we consider specific oscillatory patterns that cause problems during EEG analysis and can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm of the proposed method comprises the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of these empirical modes, and reconstruction of the initial EEG signal. We show the efficiency of the method on the example of filtering eye-movement artifacts out of a human EEG signal.
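
    The four steps listed above map directly onto a few lines of code. A sketch (Python, assuming the third-party PyEMD package; the hard part, deciding which modes carry artifacts, e.g., by correlating IMFs with a simultaneous EOG channel, is left out):

        import numpy as np
        from PyEMD import EMD  # assumed package providing Hilbert-Huang EMD

        def remove_artifact_modes(eeg, artifact_mode_indices):
            """Decompose the EEG into intrinsic mode functions, drop the
            modes flagged as artifacts, reconstruct from the rest."""
            imfs = EMD().emd(np.asarray(eeg, dtype=float))
            drop = set(artifact_mode_indices)
            keep = [i for i in range(imfs.shape[0]) if i not in drop]
            return imfs[keep].sum(axis=0)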

  11. The Study of Pre-sight for the Air Transportation Systems

    NASA Astrophysics Data System (ADS)

    Zhao, Wenzhi; Guo, Zhi

    The internal causes of deceptive air crashes were studied in order to explain their nature and thereby reduce accidents and injuries. A synthetic comparison approach was deployed. The Concorde accident, in which all the passengers were killed, was analyzed in detail to identify the real human error. Other similar accidents involving the DC-10, Tu-154, and Tu-144 were discussed with respect to the risk perception of the pilots and the system; a notably higher survival rate was achieved in those cases. The conclusion is that the real problem is a lack of pre-sight regarding risk in operation and system management, on the part of both the pilots and the organizations. This suggests that changes should be made in complex-system management regulations and in modes of thinking, for general application to safety.

  12. Long wavelength optical mode frequencies and the Anderson-Gruneisen parameter for alkali halide crystals

    NASA Astrophysics Data System (ADS)

    Gupta, A. P.; Shanker, Jai

    1980-02-01

    The relation between long-wavelength optical mode frequencies and the Anderson-Gruneisen parameter δ for alkali halides studied by Madan suffers from a mathematical error, which is rectified in the present communication. A theoretical analysis of δ is presented adopting six potential functions for the short-range repulsion energy. Values of δ and γTO calculated from the Varshni-Shukla potential are found to be in closest agreement with experimental data.

  13. Dealing with Beam Structure in PIXIE

    NASA Technical Reports Server (NTRS)

    Fixsen, D. J.; Kogut, Alan; Hill, Robert S.; Nagler, Peter C.; Seals, Lenward T., III; Howard, Joseph M.

    2016-01-01

    Measuring the B-mode polarization of the CMB radiation requires a detailed understanding of the projection of the detector onto the sky. We show how the combination of scan strategy and processing generates a cylindrical beam for the spectrum measurement. Both the instrumental design and the scan strategy reduce the cross coupling between the temperature variations and the B-modes. As with other polarization measurements some post processing may be required to eliminate residual errors.

  14. How Many Grid Points are Required for Time Accurate Simulations Scheme Selection and Scale-Discriminant Stabilization

    DTIC Science & Technology

    2015-11-24

    spatial concerns: ¤ how well are gradients captured? (resolution requirement) spatial/temporal concerns: ¤ dispersion and dissipation error...distribution is unlimited. Gradient Capture vs. Resolution: Single Mode FFT: Solution/Derivative: Convergence: f x( )= sin(x) with x∈[0,2π ] df dx...distribution is unlimited. Gradient Capture vs. Resolution: 
 Multiple Modes FFT: Solution/Derivative: Convergence: 6 __ CD02 __ CD04 __ CD06

  15. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to 'cleaner'-looking brain maps and operational superiority (lower average error rates). Finally, we include a case study on cognitive-control-related activation in the prefrontal cortex of the human brain. PMID:26272730
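
    At its core the proposal replaces a voxelwise p-value with a voxelwise likelihood ratio. A toy Gaussian illustration (Python; real fMRI analysis must model temporal autocorrelation and the hemodynamic response, all ignored here):

        import numpy as np
        from scipy import stats

        def voxel_likelihood_ratio(y, effect, sigma):
            """L(H1)/L(H0) for one voxel under i.i.d. Gaussian noise:
            H0 = no activation, H1 = the stated effect size."""
            l0 = stats.norm.pdf(y, loc=0.0, scale=sigma).prod()
            l1 = stats.norm.pdf(y, loc=effect, scale=sigma).prod()
            return l1 / l0

        rng = np.random.default_rng(1)
        y = rng.normal(0.4, 1.0, size=50)   # hypothetical voxel time course
        print(voxel_likelihood_ratio(y, effect=0.4, sigma=1.0))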

  16. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame, which diverts attention from learning from the mistakes, from seeing the limitations of current engineering methodology, and from improving the discourse of design.

  17. Time-scaling based sliding mode control for Neuromuscular Electrical Stimulation under uncertain relative degrees.

    PubMed

    Oliveira, Tiago Roux; Costa, Luiz Rennó; Catunda, João Marcos Yamasaki; Pino, Alexandre Visintainer; Barbosa, William; Souza, Márcio Nogueira de

    2017-06-01

This paper addresses the application of the sliding mode approach to control arm movements by artificial recruitment of muscles using Neuromuscular Electrical Stimulation (NMES). Such a technique allows the activation of motor nerves using surface electrodes. The goal of the proposed control system is to move the upper limbs of subjects through electrical stimulation to achieve a desired elbow angular displacement. Since the human neuro-motor system has individual characteristics, being time-varying, nonlinear and subject to uncertainties, the use of advanced robust control schemes may represent a better solution than classical Proportional-Integral (PI) controllers and model-based approaches, while being simpler than more sophisticated strategies using fuzzy logic or neural networks usually applied to this control problem. The objective is the introduction of a new time-scaling based sliding mode control (SMC) strategy for NMES and its experimental evaluation. The main qualitative advantages of the proposed controller via the time-scaling procedure are its independence of knowledge of the plant relative degree and its design/tuning simplicity. The developed sliding mode strategy allows for chattering alleviation due to the impact of the integrator in smoothing the control signal. In addition, no differentiator is applied to construct the sliding surface. The stability analysis of the closed-loop system is also carried out by using singular perturbation methods. Experimental results are conducted with healthy volunteers as well as stroke patients. Quantitative results show a reduction of 45% in terms of root mean square (RMS) error (from 5.9° to [Formula: see text]) in comparison with the PI control scheme, which is similar to that obtained in the literature. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
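For readers unfamiliar with this control family, below is a generic first-order sliding mode regulator with a smoothed switching term (a textbook sketch; it does not implement the paper's time-scaling scheme, and the plant model, gains, and disturbance are invented for illustration):

```python
import numpy as np

# Toy first-order plant: x' = a*x + b*u + d(t), with bounded disturbance d
a, b, dt = -1.0, 2.0, 1e-3
K, phi = 5.0, 0.05           # switching gain and boundary-layer width

x, x_ref = 0.0, 1.0          # state and constant reference (e.g., elbow angle)
for k in range(5000):
    t = k * dt
    d = 0.5 * np.sin(3 * t)                 # unknown bounded disturbance
    s = x - x_ref                           # sliding variable (tracking error)
    # tanh(s/phi) replaces sign(s) to alleviate chattering
    u = (-a * x - K * np.tanh(s / phi)) / b
    x += dt * (a * x + b * u + d)           # Euler integration of the plant

print(f"final |error| = {abs(x - x_ref):.4f}")  # error confined near s = 0
```

With K larger than the disturbance bound, the closed loop drives s into the boundary layer and keeps it there; the paper's contribution is a time-scaling construction that removes the need to know the plant's relative degree, which this sketch does not attempt.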

  18. Human factors analysis for a 2D enroute moving map application

    NASA Astrophysics Data System (ADS)

    Pschierer, Christian; Wipplinger, Patrick; Schiefele, Jens; Cromer, Scot; Laurin, John; Haffner, Skip

    2005-05-01

The paper describes flight trials performed in Centennial, CO with a Piper Cheyenne from Marinvent. Six pilots flew the Cheyenne in twelve enroute segments between Denver Centennial and Colorado Springs. Two different settings (paper chart, enroute moving map) were evaluated in randomized order. The flight trial goal was to evaluate the objective performance of pilots compared among the different settings. As dependent variables, positional accuracy and situational awareness probe (SAP) were measured. Analysis was conducted by an ANOVA test. In parallel, all pilots answered subjective Cooper-Harper, NASA TLX, situation awareness rating technique (SART), Display Readability Rating, and debriefing questionnaires. The tested enroute moving map application has Jeppesen chart compliant symbologies for high-enroute and low-enroute. It has a briefing mode where all information found on today's enroute paper chart, together with a loaded flight plan, is displayed in a north-up orientation. The execution mode displays a loaded flight plan routing together with only pertinent flight route relevant information in either a track-up or north-up orientation. Depiction of an own-ship symbol is possible in both modes. All text and symbols are deconflicted. Additional information can be obtained by clicking on symbols. Terrain and obstacle data can be displayed for enhanced situation awareness. The result shows that pilots flying the 2D enroute moving map display perform no worse than pilots with conventional systems. Flight technical error and workload are equivalent or lower, and situational awareness is higher than with conventional paper charts.

  19. Gravitational wave spectroscopy of binary neutron star merger remnants with mode stacking

    NASA Astrophysics Data System (ADS)

    Yang, Huan; Paschalidis, Vasileios; Yagi, Kent; Lehner, Luis; Pretorius, Frans; Yunes, Nicolás

    2018-01-01

A binary neutron star coalescence event has recently been observed for the first time in gravitational waves, and many more detections are expected once current ground-based detectors begin operating at design sensitivity. As in the case of binary black holes, gravitational waves generated by binary neutron stars consist of inspiral, merger, and postmerger components. Detecting the latter is important because it encodes information about the nuclear equation of state in a regime that cannot be probed prior to merger. The postmerger signal, however, can only be expected to be measurable by current detectors for events closer than roughly ten megaparsecs, which given merger rate estimates implies a low probability of observation within the expected lifetime of these detectors. We carry out Monte Carlo simulations showing that the dominant postmerger signal (the ℓ = m = 2 mode) from individual binary neutron star mergers may not have a good chance of observation even with the most sensitive future ground-based gravitational wave detectors proposed so far (the Einstein Telescope and Cosmic Explorer, for certain equations of state, assuming a full year of operation, the latest merger rates, and a detection threshold corresponding to a signal-to-noise ratio of 5). For this reason, we propose two methods that stack the postmerger signal from multiple binary neutron star observations to boost the postmerger detection probability. The first method follows a commonly used practice of multiplying the Bayes factors of individual events. The second method relies on an assumption that the mode phase can be determined from the inspiral waveform, so that coherent mode stacking of the data from different events becomes possible. We find that both methods significantly improve the chances of detecting the dominant postmerger signal, making a detection very likely after a year of observation with Cosmic Explorer for certain equations of state. We also show that in terms of detection, coherent stacking is more efficient in accumulating confidence for the presence of postmerger oscillations in a signal than the first method. Moreover, assuming the postmerger signal is detected with Cosmic Explorer via stacking, we estimate through a Fisher analysis that the peak frequency can be measured to a statistical error of ~4-20 Hz for certain equations of state. Such an error corresponds to a neutron star radius measurement to within ~15-56 m, a fractional relative error of ~4%, suggesting that systematic errors from theoretical modeling (≳100 m) may dominate the error budget.
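The two stacking strategies can be contrasted with a toy calculation: multiplying per-event Bayes factors adds log-evidence, while coherent stacking adds signal amplitudes so the combined SNR is the quadrature sum of the individual SNRs (illustrative numbers only; the event SNRs and the idealized Gaussian Bayes-factor form below are assumptions, not values from the paper):

```python
import numpy as np

snr = np.array([2.0, 2.5, 1.5, 3.0])     # postmerger SNRs of four mock events

# Method 1: multiply per-event Bayes factors (i.e., sum the log-BFs).
# For a known template in Gaussian noise, ln BF ~ snr**2 / 2 (idealized).
ln_bf_total = np.sum(snr**2 / 2.0)

# Method 2: coherent stacking -- if the mode phases can be aligned using
# the inspiral, amplitudes add and the combined SNR is the quadrature sum.
snr_coherent = np.sqrt(np.sum(snr**2))

print(f"summed log Bayes factor : {ln_bf_total:.2f}")
print(f"coherent stacked SNR    : {snr_coherent:.2f} (vs best single {snr.max()})")
```

The quadrature growth of the coherent SNR is why, in the paper's framing, coherent stacking accumulates confidence faster than multiplying Bayes factors, at the cost of requiring the mode phase from the inspiral.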

  20. Why do adult dogs (Canis familiaris) commit the A-not-B search error?

    PubMed

    Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József

    2014-02-01

It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use human ostensive referential communication as a source of information, or whether the phenomenon can be accounted for by "more simple" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error, and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that the perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.

  1. Crowned spur gears - Methods for generation and Tooth Contact Analysis. II - Generation of the pinion tooth surface by a surface of revolution

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Handschuh, R. F.; Zhang, J.

    1988-01-01

A method for generation of crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission error when the gears operate in the aligned mode. When the gears are misaligned, the transmission error remains parabolic, with the maximum level still remaining very small (less than 0.34 arc second for the numerical examples). Tooth Contact Analysis (TCA) is used to simulate the conditions of meshing and to determine the transmission error and the bearing contact.

  2. ATM QoS Experiments Using TCP Applications: Performance of TCP/IP Over ATM in a Variety of Errored Links

    NASA Technical Reports Server (NTRS)

    Frantz, Brian D.; Ivancic, William D.

    2001-01-01

Asynchronous Transfer Mode (ATM) Quality of Service (QoS) experiments using the Transmission Control Protocol/Internet Protocol (TCP/IP) were performed for various link delays. The link delay was set to emulate a Wide Area Network (WAN) and a satellite link. The purpose of these experiments was to evaluate the ATM QoS requirements for applications that utilize advanced TCP/IP protocols implemented with large windows and Selective ACKnowledgements (SACK). The effects of cell error, cell loss, and random bit errors on throughput were reported. The detailed test plan and test results are presented herein.
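As context for why random errors matter so much for TCP throughput over long-delay links, a common back-of-envelope model (the Mathis et al. approximation, not a formula from this report) bounds steady-state throughput by the segment size, round-trip time, and loss probability:

```python
def mathis_throughput(mss_bytes: int, rtt_s: float, loss_prob: float) -> float:
    """Approximate TCP throughput bound: BW <= C * MSS / (RTT * sqrt(p)), C ~ 1.22."""
    C = 1.22
    return C * mss_bytes * 8 / (rtt_s * loss_prob ** 0.5)   # bits per second

# Same loss rate, WAN-like vs GEO-satellite-like RTTs (illustrative values)
for rtt in (0.02, 0.5):
    bw = mathis_throughput(mss_bytes=1460, rtt_s=rtt, loss_prob=1e-5)
    print(f"RTT {rtt * 1000:5.0f} ms -> ~{bw / 1e6:7.2f} Mbit/s")
```

The 1/RTT scaling shows why cell errors and losses that are tolerable on a WAN can be crippling on a satellite link, motivating the large-window and SACK extensions tested here.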

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, M A; Mignant, D L; Macintosh, B A

In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 µm using a bright guide star and 0.19 for a magnitude 12 star.
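AO error budgets of this kind are typically combined in quadrature and converted to a Strehl ratio with the Maréchal approximation; the sketch below illustrates that standard bookkeeping with invented error terms, not the actual Keck budget:

```python
import numpy as np

# Hypothetical residual wavefront error terms, in nm RMS
terms = {"measurement noise": 120.0, "bandwidth": 130.0,
         "fitting": 110.0, "calibration": 60.0}

sigma = np.sqrt(sum(v**2 for v in terms.values()))   # quadrature sum, nm RMS
lam = 1580.0                                         # imaging wavelength, nm

strehl = np.exp(-(2 * np.pi * sigma / lam) ** 2)     # Marechal approximation
print(f"total WFE {sigma:.0f} nm RMS -> Strehl ~ {strehl:.2f} at {lam / 1000} um")
```

Consistency between such a budget and the measured Strehl ratios is exactly the check the abstract reports.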

  4. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M D; Cole, S; Frenk, C S

    2011-02-14

We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ~8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
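A heavily simplified picture of the resampling idea: swap the large-scale Fourier modes of a periodic box for fresh random realizations of the same power. The sketch below does only this naive swap (randomizing phases at fixed amplitude) on a toy Gaussian field; it deliberately omits the paper's key ingredient, propagating the new large-scale modes into the nonlinear small-scale structure:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k_cut = 64, 4.0                      # grid size; large-scale cutoff (grid units)

delta = rng.normal(size=(n, n, n))      # toy "simulation" density field
dk = np.fft.rfftn(delta)

kx = np.fft.fftfreq(n) * n
ky = np.fft.fftfreq(n) * n
kz = np.fft.rfftfreq(n) * n
kmag = np.sqrt(kx[:, None, None]**2 + ky[None, :, None]**2 + kz[None, None, :]**2)

large = kmag < k_cut                    # the modes to resample
amp = np.abs(dk[large])                 # keep each mode's measured amplitude
phase = rng.uniform(0, 2 * np.pi, amp.shape)
dk[large] = amp * np.exp(1j * phase)    # new random phases at fixed power

delta_new = np.fft.irfftn(dk, s=delta.shape)
print("resampled field rms:", delta_new.std())
```

Repeating this with different random seeds yields cheap pseudo-realizations of the large-scale modes, which is the mechanism behind the covariance-matrix savings the abstract quantifies.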

  5. Common medial frontal mechanisms of adaptive control in humans and rodents

    PubMed Central

    Frank, Michael J.; Laubach, Mark

    2013-01-01

    In this report, we describe how common brain networks within the medial frontal cortex facilitate adaptive behavioral control in rodents and humans. We demonstrate that low frequency oscillations below 12 Hz are dramatically modulated after errors in humans over mid-frontal cortex and in rats within prelimbic and anterior cingulate regions of medial frontal cortex. These oscillations were phase-locked between medial frontal cortex and motor areas in both rats and humans. In rats, single neurons that encoded prior behavioral outcomes were phase-coherent with low-frequency field oscillations particularly after errors. Inactivating medial frontal regions in rats led to impaired behavioral adjustments after errors, eliminated the differential expression of low frequency oscillations after errors, and increased low-frequency spike-field coupling within motor cortex. Our results describe a novel mechanism for behavioral adaptation via low-frequency oscillations and elucidate how medial frontal networks synchronize brain activity to guide performance. PMID:24141310
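Phase-locking between two field recordings, as described here, is commonly quantified with a phase-locking value computed from band-passed analytic signals; below is a generic sketch of that calculation (the filter band, sampling rate, and synthetic data are invented, and this is not necessarily the authors' pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                       # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

# Two synthetic channels sharing a 6 Hz rhythm plus independent noise
common = np.sin(2 * np.pi * 6 * t)
x = common + 0.8 * rng.standard_normal(t.size)   # "medial frontal" channel
y = common + 0.8 * rng.standard_normal(t.size)   # "motor cortex" channel

b, a = butter(4, [4 / (fs / 2), 12 / (fs / 2)], btype="band")  # 4-12 Hz band
phx = np.angle(hilbert(filtfilt(b, a, x)))       # instantaneous phases
phy = np.angle(hilbert(filtfilt(b, a, y)))

plv = np.abs(np.mean(np.exp(1j * (phx - phy))))  # 1 = perfect phase locking
print(f"phase-locking value: {plv:.2f}")
```

Comparing such a value between post-error and post-correct trials is one standard way to test the kind of error-related low-frequency synchronization the abstract reports.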

  6. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

Background: (1) differing degrees of exposure error across pollutants; (2) previous focus on quantifying and accounting for exposure error in single-pollutant models; (3) examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between, and characterize processes that link, source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
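Classical exposure measurement error attenuates health-effect estimates toward the null by the reliability ratio λ = var(x)/(var(x)+var(u)); the simulation below demonstrates this textbook attenuation in a single-pollutant regression (synthetic data, not EPA's analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
beta = 0.5                               # true health effect per unit exposure

x = rng.normal(0, 1, n)                  # true personal exposure
u = rng.normal(0, 1, n)                  # classical measurement error
w = x + u                                # measured (error-prone) exposure
y = beta * x + rng.normal(0, 1, n)       # health outcome

slope_true = np.polyfit(x, y, 1)[0]
slope_obs = np.polyfit(w, y, 1)[0]
lam = x.var() / (x.var() + u.var())      # reliability (attenuation) ratio
print(f"true {slope_true:.3f}, observed {slope_obs:.3f}, predicted {beta * lam:.3f}")
```

In bi-pollutant models the same mechanism can additionally transfer effect between correlated pollutants with different error magnitudes, which is the concern the abstract raises.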

  7. Use of failure mode and effects analysis for proactive identification of communication and handoff failures from organ procurement to transplantation.

    PubMed

    Steinberger, Dina M; Douglas, Stephen V; Kirschbaum, Mark S

    2009-09-01

    A multidisciplinary team from the University of Wisconsin Hospital and Clinics transplant program used failure mode and effects analysis to proactively examine opportunities for communication and handoff failures across the continuum of care from organ procurement to transplantation. The team performed a modified failure mode and effects analysis that isolated the multiple linked, serial, and complex information exchanges occurring during the transplantation of one solid organ. Failure mode and effects analysis proved effective for engaging a diverse group of persons who had an investment in the outcome in analysis and discussion of opportunities to improve the system's resilience for avoiding errors during a time-pressured and complex process.

  8. Standard solar model. II - g-modes

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.

    1992-01-01

The paper presents the g-mode oscillations for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is estimated, and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).

  9. Advanced Interval Type-2 Fuzzy Sliding Mode Control for Robot Manipulator.

    PubMed

    Hwang, Ji-Hwan; Kang, Young-Chang; Park, Jong-Wook; Kim, Dong W

    2017-01-01

In this paper, advanced interval type-2 fuzzy sliding mode control (AIT2FSMC) for a robot manipulator is proposed. The proposed AIT2FSMC is a combination of an interval type-2 fuzzy system and sliding mode control. The interval type-2 fuzzy system is designed to resemble a feedback linearization (FL) control law, and a sliding mode controller is designed to compensate for the approximation error between the FL control law and the interval type-2 fuzzy system. The tuning algorithms are derived in the sense of the Lyapunov stability theorem. A two-link rigid robot manipulator with nonlinearity is used as the test system, and simulation results are presented to show the effectiveness of the proposed method, which can control the unknown system well.

  10. Reusable Launch Vehicle Control in Multiple Time Scale Sliding Modes

    NASA Technical Reports Server (NTRS)

    Shtessel, Yuri

    1999-01-01

A reusable launch vehicle control problem during ascent is addressed via multiple-time-scaled continuous sliding mode control. The proposed sliding mode controller utilizes a two-loop structure and provides robust, de-coupled tracking of both orientation angle command profiles and angular rate command profiles in the presence of bounded external disturbances and plant uncertainties. Sliding mode control causes the angular rate and orientation angle tracking error dynamics to be constrained to linear, de-coupled, homogeneous, vector-valued differential equations with desired eigenvalue placement. The dual-time-scale sliding mode controller was designed for the X-33 technology demonstration sub-orbital launch vehicle in the launch mode. 6DOF simulation results show that the designed controller provides robust, accurate, de-coupled tracking of the orientation angle command profiles in the presence of external disturbances and vehicle inertia uncertainties. This creates the possibility of operating the X-33 vehicle in an aircraft-like mode with reduced pre-launch adjustment of the control system.

  11. Laser Doppler, velocimeter system for turbine stator cascade studies and analysis of statistical biasing errors

    NASA Technical Reports Server (NTRS)

    Seasholtz, R. G.

    1977-01-01

    A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.

  12. Attitude Control Subsystem for the Advanced Communications Technology Satellite

    NASA Technical Reports Server (NTRS)

    Hewston, Alan W.; Mitchell, Kent A.; Sawicki, Jerzy T.

    1996-01-01

    This paper provides an overview of the on-orbit operation of the Attitude Control Subsystem (ACS) for the Advanced Communications Technology Satellite (ACTS). The three ACTS control axes are defined, including the means for sensing attitude and determining the pointing errors. The desired pointing requirements for various modes of control as well as the disturbance torques that oppose the control are identified. Finally, the hardware actuators and control loops utilized to reduce the attitude error are described.

  13. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
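To make the calibration choice concrete, here is a toy fit of a rate-versus-density curve using a simple Padé form R(L) = aL/(1 + bL), done both by nonlinear least squares and by the transformed linear fit 1/R = (1/a)(1/L) + b/a (the functional form, data, and noise model are illustrative; the paper's Padé equation and error treatment are more involved):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
L = np.linspace(5, 60, 12)                    # 235U linear density (illustrative)
a_true, b_true = 2.0, 0.02
R = a_true * L / (1 + b_true * L)             # "measured" coincidence rate
R = R + rng.normal(0, 1.0, R.size)            # additive measurement error

pade = lambda L, a, b: a * L / (1 + b * L)
(a_nl, b_nl), _ = curve_fit(pade, L, R, p0=(1.0, 0.01))

# Linearized alternative: 1/R = (1/a)*(1/L) + b/a, fit by ordinary least squares
c1, c0 = np.polyfit(1 / L, 1 / R, 1)          # slope = 1/a, intercept = b/a
a_lin, b_lin = 1 / c1, c0 / c1

print(f"nonlinear:  a={a_nl:.3f} b={b_nl:.4f}")
print(f"linearized: a={a_lin:.3f} b={b_lin:.4f}")
```

Because the reciprocal transform distorts the error distribution (small rates get huge weight in 1/R), the linearized estimates degrade as the measurement errors grow, which mirrors the paper's conclusion that it is preferable not to transform to a linear model in this context.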

  14. [Failure mode and effects analysis on computerized drug prescriptions].

    PubMed

    Paredes-Atenciano, J A; Roldán-Aviña, J P; González-García, Mercedes; Blanco-Sánchez, M C; Pinto-Melero, M A; Pérez-Ramírez, C; Calvo Rubio-Burgos, Miguel; Osuna-Navarro, F J; Jurado-Carmona, A M

    2015-01-01

To identify and analyze errors in the drug prescriptions of patients treated in a "high resolution" hospital by applying a failure mode and effects analysis (FMEA). Material and methods: A multidisciplinary group of medical specialties and nursing analyzed medical records where drug prescriptions were held in free text format. An FMEA was developed in which the risk priority index (RPI) was obtained from a cross-sectional observational study using an audit of the medical records, carried out in two phases: (1) pre-intervention testing, and (2) evaluation of improvement actions after the first analysis. An audit sample size of 679 medical records from a total of 2,096 patients was calculated using stratified sampling and random selection of clinical events. Prescription errors decreased by 22.2% in the second phase. FMEA showed a greater RPI for "unspecified route of administration" and "dosage unspecified", with no significant decreases observed in the second phase, although it did detect "incorrect dosing time", "contraindication due to drug allergy", "wrong patient" and "duplicate prescription", which resulted in the improvement of prescriptions. Drug prescription errors have been identified and analyzed by the FMEA methodology, improving the clinical safety of these prescriptions. This tool allows updates of electronic prescribing to be monitored. Avoiding such errors would require the mandatory completion of all sections of a prescription. Copyright © 2014 SECA. Published by Elsevier Espana. All rights reserved.

  15. SU-E-T-192: FMEA Severity Scores - Do We Really Know?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonigan, J; Johnson, J; Kry, S

    2014-06-01

Purpose: Failure modes and effects analysis (FMEA) is a subjective risk mitigation technique that has not been applied to physics-specific quality management practices. There is a need for quantitative FMEA data, as called for in the literature. This work focuses specifically on quantifying FMEA severity scores for physics components of IMRT delivery and comparing them to subjective scores. Methods: Eleven physical failure modes (FMs) for head and neck IMRT dose calculation and delivery are examined near commonly accepted tolerance criteria levels. Phantom treatment planning studies and dosimetry measurements (requiring decommissioning in several cases) are performed to determine the magnitude of dose delivery errors for the FMs (i.e., the severity of the FM). The resultant quantitative severity scores are compared to FMEA scores obtained through an international survey and focus group studies. Results: Physical measurements for six FMs have resulted in significant PTV dose errors up to 4.3%, as well as close to 1 mm of significant distance-to-agreement error between PTV and OAR. Of the 129 survey responses, the vast majority of responders used Varian machines with Pinnacle and Eclipse planning systems. The average experience was 17 years, yet familiarity with FMEA was less than expected. The survey shows that the perceived magnitude of dose delivery errors varies widely, with in some cases a 50% difference in expected dose delivery error amongst respondents. Substantial variance is also seen for all FMs in the occurrence, detectability, and severity scores assigned, with average variance values of 5.5, 4.6, and 2.2, respectively. For the MLC positional FM (2 mm), the survey shows an average expected dose error of 7.6% (range 0-50%), compared to the 2% error seen in measurement. Analysis of the survey rankings, the treatment planning studies, and the quantitative value comparison will be presented. Conclusion: The resultant quantitative severity scores will expand the utility of FMEA for radiotherapy and verify the accuracy of FMEA results compared to highly variable subjective scores.

  16. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit induces the aliasing of high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which directly alias into the recovered gravity field. The GRACE satellites are in a non-repeat orbit, which precludes alias error spectral estimation based on the repeat period. Moreover, the gravity field recovery is conducted at non-strictly monthly intervals and has occasional gaps, which results in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.
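One simple data-driven way to estimate a tide alias in an unevenly sampled, gappy solution series is a least-squares fit of a trend plus a sinusoid at the known alias period (a generic sketch assuming an S2 alias period of ~161 days often quoted for GRACE; this is not necessarily the authors' method):

```python
import numpy as np

rng = np.random.default_rng(5)
P_alias = 161.0                                  # assumed S2 alias period, days

# Unevenly sampled, gappy "monthly" epochs
t = np.sort(rng.choice(np.arange(0, 3000, 30), size=80, replace=False)).astype(float)
t += rng.uniform(-5, 5, t.size)

signal = 0.002 * t + 1.5 * np.sin(2 * np.pi * t / P_alias + 0.7)
y = signal + rng.normal(0, 0.3, t.size)          # series with alias + noise

# Design matrix: bias, trend, and in-phase/quadrature terms at the alias period
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / P_alias),
                     np.cos(2 * np.pi * t / P_alias)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
amp = np.hypot(coef[2], coef[3])
print(f"recovered alias amplitude: {amp:.2f} (true 1.5)")
```

Unlike FFT-based spectral estimation, this least-squares formulation handles the uneven sampling and gaps directly, which is why such approaches suit the GRACE time series described here.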

  17. Accuracy and Spatial Variability in GPS Surveying for Landslide Mapping on Road Inventories at a Semi-Detailed Scale: the Case in Colombia

    NASA Astrophysics Data System (ADS)

    Murillo Feo, C. A.; Martnez Martinez, L. J.; Correa Muñoz, N. A.

    2016-06-01

The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluate the accuracy and spatial distribution of the horizontal error in GPS positioning in the tertiary road network of six municipalities located in mountainous areas in the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian distribution errors. The evaluation of the internal validity of the data provides metrics with a confidence level of 95% between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20000, a level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.

  18. Mode Transitions in Glass Cockpit Aircraft: Results of a Field Study

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Kirlik, Alex; Shafto, Michael (Technical Monitor)

    1995-01-01

One consequence of increased levels of automation in complex control systems is the presence of modes. A mode is a particular configuration of a control system that defines how human command inputs are interpreted. In complex systems, modes also often determine a specific allocation of control authority between the human and automated systems. Even in simple static devices (e.g., electronic watches, word processors), the presence of modes has been found to cause problems in either the acquisition or the production of skilled performance. Many of these problems arise due to the fact that the selection of a mode causes device behavior to be mediated by hidden internal state information. For these simple systems, many of these interaction problems can be solved by the design of appropriate feedback to communicate internal state information to the human operator. In complex dynamic systems, however, the design issues associated with modes seem to transcend the problem of merely communicating internal state information via displayed feedback. In complex supervisory control systems (e.g., aircraft, spacecraft, military command and control), a key function of modes is the selection of a particular configuration of control authority between the human operator and automated control systems. One mode may result in full manual control, another may result in a mix of manual and automatic control, while a third may result in full automatic control over the entire system. The human operator selects an appropriate mode as a function of current goals, operating conditions, and operating procedures. Thus, the operator is put in a position of essentially trying to control two coupled dynamic systems: the target system itself, and also a highly complex suite of automation controlling the target system. From a historical perspective, it should probably not come as a surprise that very little information is available to guide the design of mode-oriented control systems. The topic of function allocation (i.e., the proper division of control authority between human and computer) has a long history in human-machine systems research. Although this research has produced some relevant guidelines, a design approach capable of defining appropriate allocations of control function between the human and automation is not yet available. As a result, the function allocation decision itself has been allocated to the operator, to be performed in real time, in the operation of mode-oriented control systems. A variety of documented aircraft accidents and incidents suggest that the real-time selection and monitoring of control modes is a weak link in the effective operation of complex supervisory control systems. Research in human-machine systems and human-computer interaction has barely scraped the surface of the problem of understanding how operators manage this task. The purpose of this paper is to present the results of a field study which examined how operators manage mode selection in a complex supervisory control system. Data on mode engagements using the Boeing B757/767 auto-flight system were collected during approach and descent into four major airports on the East Coast of the United States. Protocols documenting mode selection, automatic mode changes, pilot actions, quantitative records of flight-path variables, and verbal reports during and after mode engagements were collected by an observer from the jumpseat. Observations were conducted on two typical trips between three airports. Each trip was replicated 11 times, which yielded a total of 22 trips and 66 legs on which data were collected. All data collected concerned the same flight numbers and, therefore, the same time of day, the same type of aircraft, and identical operational environments (e.g., ATC facilities, weather patterns, traffic flow, etc.).

  19. New spectrophotometric/chemometric assisted methods for the simultaneous determination of imatinib, gemifloxacin, nalbuphine and naproxen in pharmaceutical formulations and human urine

    NASA Astrophysics Data System (ADS)

    Belal, F.; Ibrahim, F.; Sheribah, Z. A.; Alaa, H.

    2018-06-01

In this paper, novel univariate and multivariate regression methods along with a model-updating technique were developed and validated for the simultaneous determination of a quaternary mixture of imatinib (IMB), gemifloxacin (GMI), nalbuphine (NLP) and naproxen (NAP). The univariate method is extended derivative ratio (EDR), which depends on measuring every drug in the quaternary mixture by using a ternary mixture of the other three drugs as divisor. Peak amplitudes were measured at 294 nm, 250 nm, 283 nm and 239 nm within linear concentration ranges of 4.0-17.0, 3.0-15.0, 4.0-80.0 and 1.0-6.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively. The multivariate methods adopted are partial least squares (PLS) in original and derivative mode. These models were constructed for simultaneous determination of the studied drugs in the ranges of 4.0-8.0, 3.0-11.0, 10.0-18.0 and 1.0-3.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively, by using eighteen mixtures as a calibration set and seven mixtures as a validation set. The root mean square errors of prediction (RMSEP) were 0.09 and 0.06 for IMB, 0.14 and 0.13 for GMI, 0.07 and 0.02 for NLP, and 0.64 and 0.27 for NAP by PLS in original and derivative mode, respectively. Both models were successfully applied for analysis of IMB, GMI, NLP and NAP in their dosage forms. Updated PLS in derivative mode and EDR were applied for determination of the studied drugs in spiked human urine. The obtained results were statistically compared with those obtained by the reported methods, giving the conclusion that there is no significant difference regarding accuracy and precision.
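The multivariate part of such methods is routinely prototyped with PLS regression; here is a generic sketch computing RMSEP on a validation set with scikit-learn (the synthetic spectra, component count, and set sizes are invented, not the paper's calibration design):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n_train, n_val, n_wl = 18, 7, 200            # mixtures and spectral points
pure = np.abs(np.random.default_rng(7).standard_normal((4, n_wl)))  # pure spectra

def spectra(conc):
    """Toy linear mixing: absorbance = concentrations @ pure spectra + noise."""
    return conc @ pure + 0.01 * rng.standard_normal((conc.shape[0], n_wl))

C_train = rng.uniform(1, 10, (n_train, 4))   # 4 analytes, e.g. IMB/GMI/NLP/NAP
C_val = rng.uniform(1, 10, (n_val, 4))
X_train, X_val = spectra(C_train), spectra(C_val)

pls = PLSRegression(n_components=4).fit(X_train, C_train)
pred = pls.predict(X_val)
rmsep = np.sqrt(np.mean((pred - C_val) ** 2, axis=0))   # per-analyte RMSEP
print("RMSEP per analyte:", np.round(rmsep, 3))
```

The per-analyte RMSEP on a held-out validation set is exactly the figure of merit the abstract reports for its 18-mixture calibration / 7-mixture validation design.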

  20. New spectrophotometric/chemometric assisted methods for the simultaneous determination of imatinib, gemifloxacin, nalbuphine and naproxen in pharmaceutical formulations and human urine.

    PubMed

    Belal, F; Ibrahim, F; Sheribah, Z A; Alaa, H

    2018-06-05

In this paper, novel univariate and multivariate regression methods along with a model-updating technique were developed and validated for the simultaneous determination of a quaternary mixture of imatinib (IMB), gemifloxacin (GMI), nalbuphine (NLP) and naproxen (NAP). The univariate method is extended derivative ratio (EDR), which depends on measuring every drug in the quaternary mixture by using a ternary mixture of the other three drugs as divisor. Peak amplitudes were measured at 294 nm, 250 nm, 283 nm and 239 nm within linear concentration ranges of 4.0-17.0, 3.0-15.0, 4.0-80.0 and 1.0-6.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively. The multivariate methods adopted are partial least squares (PLS) in original and derivative mode. These models were constructed for simultaneous determination of the studied drugs in the ranges of 4.0-8.0, 3.0-11.0, 10.0-18.0 and 1.0-3.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively, by using eighteen mixtures as a calibration set and seven mixtures as a validation set. The root mean square errors of prediction (RMSEP) were 0.09 and 0.06 for IMB, 0.14 and 0.13 for GMI, 0.07 and 0.02 for NLP, and 0.64 and 0.27 for NAP by PLS in original and derivative mode, respectively. Both models were successfully applied for analysis of IMB, GMI, NLP and NAP in their dosage forms. Updated PLS in derivative mode and EDR were applied for determination of the studied drugs in spiked human urine. The obtained results were statistically compared with those obtained by the reported methods, giving the conclusion that there is no significant difference regarding accuracy and precision. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

As a high resolution imaging sensor, synthetic aperture ladar (SAL) data contain phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and a hybrid method is built from them that can recover both the image and the PSF without any a priori information on the PSF, speeding up the convergence rate through the choice of initialization. When integrated into the spotlight-mode SAL imaging model, all three methods can effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error, and CPU time are computed, from which we can see that the convergence rate of the hybrid method is improved because blind deconvolution is given a more efficient initialization. Moreover, in a further discussion of the hybrid method, the weight distribution between ROPE and IBD is found to be an important factor that affects the final result of the whole compensation process.
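The evaluation metrics named here are straightforward to compute; below is a small sketch of the SNR, RMSE, and CPU-time bookkeeping for a restored image against a reference (the array names and the trivial "restoration" stand-in are placeholders, not the paper's algorithms):

```python
import time
import numpy as np

def restoration_metrics(restored: np.ndarray, reference: np.ndarray) -> dict:
    """RMSE and SNR (dB) of a restored image relative to a reference."""
    err = restored - reference
    rmse = float(np.sqrt(np.mean(err**2)))
    snr_db = float(10 * np.log10(np.sum(reference**2) / np.sum(err**2)))
    return {"rmse": rmse, "snr_db": snr_db}

reference = np.random.default_rng(8).random((128, 128))
t0 = time.perf_counter()
restored = reference + 0.05 * np.random.default_rng(9).standard_normal((128, 128))
cpu_s = time.perf_counter() - t0     # stand-in for the deconvolution runtime

print(restoration_metrics(restored, reference), f"cpu {cpu_s * 1e3:.2f} ms")
```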

  2. Mode division multiplexing using an orbital angular momentum mode sorter and MIMO-DSP over a graded-index few-mode optical fibre

    PubMed Central

    Huang, Hao; Milione, Giovanni; Lavery, Martin P. J.; Xie, Guodong; Ren, Yongxiong; Cao, Yinwen; Ahmed, Nisar; An Nguyen, Thien; Nolan, Daniel A.; Li, Ming-Jun; Tur, Moshe; Alfano, Robert R.; Willner, Alan E.

    2015-01-01

Mode division multiplexing (MDM) – using a multimode optical fiber’s N spatial modes as data channels to transmit N independent data streams – has received interest as it can potentially increase optical fiber data transmission capacity N-times with respect to single mode optical fibers. Two challenges of MDM are (1) designing mode (de)multiplexers with high mode selectivity and (2) designing mode (de)multiplexers without cascaded beam splitting’s 1/N insertion loss. One spatial mode basis that has received interest is that of orbital angular momentum (OAM) modes. In this paper, using a device referred to as an OAM mode sorter, we show that OAM modes can be (de)multiplexed over a multimode optical fiber with higher than −15 dB mode selectivity and without cascaded beam splitting’s 1/N insertion loss. As a proof of concept, the OAM modes of the LP11 mode group (OAM−1,0 and OAM+1,0), each carrying 20-Gbit/s polarization division multiplexed and quadrature phase shift keyed data streams, are transmitted 5 km over a graded-index, few-mode optical fibre. Channel crosstalk is mitigated using 4 × 4 multiple-input-multiple-output digital-signal-processing with <1.5 dB power penalties at a bit-error-rate of 2 × 10−3. PMID:26450398

  3. Mode division multiplexing using an orbital angular momentum mode sorter and MIMO-DSP over a graded-index few-mode optical fibre.

    PubMed

    Huang, Hao; Milione, Giovanni; Lavery, Martin P J; Xie, Guodong; Ren, Yongxiong; Cao, Yinwen; Ahmed, Nisar; An Nguyen, Thien; Nolan, Daniel A; Li, Ming-Jun; Tur, Moshe; Alfano, Robert R; Willner, Alan E

    2015-10-09

Mode division multiplexing (MDM) - using a multimode optical fiber's N spatial modes as data channels to transmit N independent data streams - has received interest as it can potentially increase optical fiber data transmission capacity N-times with respect to single mode optical fibers. Two challenges of MDM are (1) designing mode (de)multiplexers with high mode selectivity and (2) designing mode (de)multiplexers without cascaded beam splitting's 1/N insertion loss. One spatial mode basis that has received interest is that of orbital angular momentum (OAM) modes. In this paper, using a device referred to as an OAM mode sorter, we show that OAM modes can be (de)multiplexed over a multimode optical fiber with higher than -15 dB mode selectivity and without cascaded beam splitting's 1/N insertion loss. As a proof of concept, the OAM modes of the LP11 mode group (OAM-1,0 and OAM+1,0), each carrying 20-Gbit/s polarization division multiplexed and quadrature phase shift keyed data streams, are transmitted 5 km over a graded-index, few-mode optical fibre. Channel crosstalk is mitigated using 4 × 4 multiple-input-multiple-output digital-signal-processing with <1.5 dB power penalties at a bit-error-rate of 2 × 10(-3).

  4. Mode division multiplexing using an orbital angular momentum mode sorter and MIMO-DSP over a graded-index few-mode optical fibre

    NASA Astrophysics Data System (ADS)

    Huang, Hao; Milione, Giovanni; Lavery, Martin P. J.; Xie, Guodong; Ren, Yongxiong; Cao, Yinwen; Ahmed, Nisar; An Nguyen, Thien; Nolan, Daniel A.; Li, Ming-Jun; Tur, Moshe; Alfano, Robert R.; Willner, Alan E.

    2015-10-01

Mode division multiplexing (MDM) - using a multimode optical fiber’s N spatial modes as data channels to transmit N independent data streams - has received interest as it can potentially increase optical fiber data transmission capacity N-times with respect to single mode optical fibers. Two challenges of MDM are (1) designing mode (de)multiplexers with high mode selectivity and (2) designing mode (de)multiplexers without cascaded beam splitting’s 1/N insertion loss. One spatial mode basis that has received interest is that of orbital angular momentum (OAM) modes. In this paper, using a device referred to as an OAM mode sorter, we show that OAM modes can be (de)multiplexed over a multimode optical fiber with higher than -15 dB mode selectivity and without cascaded beam splitting’s 1/N insertion loss. As a proof of concept, the OAM modes of the LP11 mode group (OAM-1,0 and OAM+1,0), each carrying 20-Gbit/s polarization division multiplexed and quadrature phase shift keyed data streams, are transmitted 5 km over a graded-index, few-mode optical fibre. Channel crosstalk is mitigated using 4 × 4 multiple-input-multiple-output digital-signal-processing with <1.5 dB power penalties at a bit-error-rate of 2 × 10-3.

  5. Cutting the Cord: Discrimination and Command Responsibility in Autonomous Lethal Weapons

    DTIC Science & Technology

    2014-02-13

…machine responses to identical stimuli, and it was the job of a third-party human "witness" to determine which participant was man and which was… …machines may be error free, but there are potential benefits to be gained through autonomy if machines can meet or exceed human performance in… …lieu of human operators and reap the benefits that autonomy provides. Human and Machine Error: It would be foolish to assert that either humans…

  6. Effects of vibration on inertial wind-tunnel model attitude measurement devices

    NASA Technical Reports Server (NTRS)

    Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen

    1994-01-01

Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this situation results in a model attitude measurement bias error. Significant bias error in model attitude measurement was found for the model system tested. The model attitude bias error was found to be vibration mode and amplitude dependent. A first-order correction model was developed and used for estimating attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.

  7. Managing human error in aviation.

    PubMed

    Helmreich, R L

    1997-05-01

Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through teamwork. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits, CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  8. Empirical Analysis of Systematic Communication Errors.

    DTIC Science & Technology

    1981-09-01

…human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links.) …phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission… …communication flow. (2) Exchange: the next phase in human communication, entailing a concerted effort on the part of the sender and receiver to share…

  9. HRA Aerospace Challenges

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2013-01-01

Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues, including stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information, or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the contexts that can lead to human errors (taking the wrong action, taking no action, or making bad decisions) provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.

  10. Identifying Human Factors Issues in Aircraft Maintenance Operations

    NASA Technical Reports Server (NTRS)

    Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)

    1995-01-01

Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not), as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.

  11. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.

    1999-02-01

Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  12. The Swiss cheese model of adverse event occurrence--Closing the holes.

    PubMed

    Stein, James E; Heiss, Kurt

    2015-12-01

    Traditional surgical attitude regarding error and complications has focused on individual failings. Human factors research has brought new and significant insights into the occurrence of error in healthcare, helping us identify systemic problems that injure patients while enhancing individual accountability and teamwork. This article introduces human factors science and its applicability to teamwork, surgical culture, medical error, and individual accountability. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Behind Human Error: Cognitive Systems, Computers and Hindsight

    DTIC Science & Technology

    1994-12-01

…evaluations; organize and/or conduct workshops and conferences. CSERIAC is a Department of Defense Information Analysis Center sponsored by the Defense… Table-of-contents fragments: …Process; Neutral Observer Criteria; Error Analysis as Causal Judgment; Error as Information; A Fundamental Surprise; What is Human… …Kahneman, 1974), and in risk analysis (Dougherty and Fragola, 1990). The discussions have continued in a wide variety of forums, including…

  14. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of the distributions of the error quantification scores (scores of likelihood and severity, and scores of the effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of expert behavior: confident, reasonably doubting, and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for the three pmfs of expert behavior were compared. Variability of the scores, expressed as the standard deviation of the simulated score values about the distribution mean, was used for assessment of score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that the robustness of the scores obtained in the case study can be assessed as satisfactory for quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
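The simulation idea can be sketched as sampling expert scores from different probability mass functions and comparing the spread; the three pmf shapes below (a peaked "confident" expert, a spread "reasonably doubting" one, and a uniform "irresolute" one) are invented stand-ins for the paper's elicitation models:

```python
import numpy as np

rng = np.random.default_rng(10)
scores = np.arange(1, 6)                      # 1-5 likelihood/severity scale

pmfs = {
    "confident":  np.array([0.02, 0.06, 0.12, 0.60, 0.20]),  # peaked
    "doubting":   np.array([0.10, 0.20, 0.30, 0.25, 0.15]),  # spread out
    "irresolute": np.full(5, 0.20),                          # uniform
}

n_draws = 100_000
for name, p in pmfs.items():
    draws = rng.choice(scores, size=n_draws, p=p)
    # Robustness read-out: a smaller std means a more robust elicited score
    print(f"{name:10s} mean {draws.mean():.2f}  std {draws.std():.2f}")
```

Comparing the standard deviations across the three behaviors is the same robustness assessment the abstract describes, just on made-up inputs.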

  15. Application of Failure Mode and Effects Analysis to Intraoperative Radiation Therapy Using Mobile Electron Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciocca, Mario, E-mail: mario.ciocca@cnao.it; Cantone, Marie-Claire; Veronese, Ivan

    2012-02-01

Purpose: Failure mode and effects analysis (FMEA) represents a prospective approach for risk assessment. A multidisciplinary working group of the Italian Association for Medical Physics applied FMEA to electron beam intraoperative radiation therapy (IORT) delivered using mobile linear accelerators, aiming at preventing accidental exposures to the patient. Methods and Materials: FMEA was applied to the IORT process, for the stages of treatment delivery and verification, and consisted of three steps: 1) identification of the involved subprocesses; 2) identification and ranking of the potential failure modes, together with their causes and effects, using the risk probability number (RPN) scoring system, based on the product of three parameters (severity, frequency of occurrence, and detectability, each ranging from 1 to 10); 3) identification of additional safety measures to be proposed for process quality and safety improvement. The RPN upper threshold for little concern of risk was set at 125. Results: Twenty-four subprocesses were identified. Ten potential failure modes were found and scored, in terms of RPN, in the range of 42-216. The most critical failure modes consisted of internal shield misalignment, wrong Monitor Unit calculation, and incorrect data entry at the treatment console. Potential causes of failure included shield displacement, human errors such as underestimation of CTV extension (mainly because of lack of adequate training and time pressures), failure in the communication between operators, and machine malfunctioning. The main effects of failure were represented by CTV underdose, wrong dose distribution and/or delivery, and unintended normal tissue irradiation. As additional safety measures, the utilization of dedicated staff for IORT, double-checking of MU calculation and data entry, and implementation of in vivo dosimetry were suggested. Conclusions: FMEA appeared to be a useful tool for prospective evaluation of patient safety in radiotherapy. The application of this method to IORT led to the identification of three safety measures for risk mitigation.
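The RPN bookkeeping used here is simple to automate: multiply severity, occurrence, and detectability (each 1-10) and compare with the concern threshold of 125. The failure-mode list below is a made-up illustration, not the working group's actual scoring table:

```python
from dataclasses import dataclass

RPN_THRESHOLD = 125          # upper threshold for "little concern of risk"

@dataclass
class FailureMode:
    name: str
    severity: int            # 1-10
    occurrence: int          # 1-10
    detectability: int       # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detectability

modes = [                    # hypothetical scores for illustration only
    FailureMode("internal shield misalignment", 8, 3, 9),
    FailureMode("wrong MU calculation", 9, 2, 6),
    FailureMode("incorrect console data entry", 7, 3, 4),
]

for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    flag = "REVIEW" if fm.rpn > RPN_THRESHOLD else "ok"
    print(f"{fm.name:32s} RPN={fm.rpn:3d}  {flag}")
```

Ranking failure modes by RPN and flagging those above the threshold reproduces the prioritization step that led the group to its three proposed safety measures.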

  16. NSLS-II BPM System Protection from Rogue Mode Coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blednykh, A.; Bach, B.; Borrelli, A.

    2011-03-28

    Rogue mode RF shielding has been successfully designed and implemented in the production multipole vacuum chambers. To avoid systematic errors in the NSLS-II BPM system, we introduced a frequency shift of the higher-order modes (HOMs) by means of RF metal shielding located in the antechamber slot of each multipole vacuum chamber. To satisfy the pumping requirement, the face of the shielding has been perforated with roughly 50 percent transparency. The shielding stays clear of synchrotron radiation in each chamber.

  17. DeMi Payload Progress Update and Adaptive Optics (AO) Control Comparisons – Meeting Space AO Requirements on a CubeSat

    NASA Astrophysics Data System (ADS)

    Grunwald, Warren; Holden, Bobby; Barnes, Derek; Allan, Gregory; Mehrle, Nicholas; Douglas, Ewan S.; Cahoy, Kerri

    2018-01-01

    The Deformable Mirror (DeMi) CubeSat mission uses an Adaptive Optics (AO) control loop to correct incoming wavefronts as a technology demonstration for space-based imaging missions, such as high-contrast observations (Earthlike exoplanets) and steering light into single-mode fiber cores for amplification. While AO has been used extensively in ground-based systems to correct for atmospheric aberrations, operating an AO system on board a small satellite presents different challenges. The DeMi payload's 140-actuator MEMS deformable mirror (DM) corrects the incoming wavefront in four different control modes: 1) internal observation with a Shack-Hartmann Wavefront Sensor (SHWFS), 2) internal observation with an image plane sensor, 3) external observation with a SHWFS, and 4) external observation with an image plane sensor. All modes have wavefront aberration from two main sources: time-invariant launch disturbances that have changed the optical path from the path expected when the system was calibrated in the lab, and very low temporal frequency thermal variations as DeMi orbits the Earth. The external observation modes have additional errors from the pointing precision of the attitude control system and from reaction wheel jitter. Updates on DeMi's mechanical, thermal, electrical, and mission design are also presented. The analysis from the DeMi payload simulations and testing provides information on design options for developing space-based AO systems.
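
    The abstract does not describe DeMi's controller, so the following is only a generic sketch of a leaky-integrator AO loop of the kind such a payload might run. The response matrix, loop gain, leak factor, and sensed mode count are all assumptions; only the 140-actuator DM size comes from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_MODES, N_ACT = 20, 140  # sensed wavefront modes (assumed); DeMi-class 140-actuator DM
    R = rng.standard_normal((N_MODES, N_ACT)) * 0.1  # hypothetical DM response matrix
    R_pinv = np.linalg.pinv(R)                       # reconstructor: least-squares inverse

    gain, leak = 0.3, 0.99    # integrator gain and leak, typical AO values (assumed)
    dm_cmd = np.zeros(N_ACT)
    aberration = rng.standard_normal(N_MODES)  # quasi-static launch/thermal aberration

    for _ in range(50):
        residual = aberration + R @ dm_cmd               # sensor measurement (noise omitted)
        dm_cmd = leak * dm_cmd - gain * (R_pinv @ residual)  # integrate the correction

    # The leak leaves a small steady-state residual but keeps the loop well behaved.
    print(f"residual RMS after loop: {np.linalg.norm(aberration + R @ dm_cmd):.3e}")
    ```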

  18. Performance Evaluation of Dual-axis Tracking System of Parabolic Trough Solar Collector

    NASA Astrophysics Data System (ADS)

    Ullah, Fahim; Min, Kang

    2018-01-01

    A parabolic trough solar collector with a concentration ratio of 24 was developed in the College of Engineering, Nanjing Agricultural University, China, and an optical model was built using the TracePro software. The effects of single-axis and dual-axis tracking modes, and of azimuth and elevation angle tracking errors, on the optical performance were investigated, and the thermal performance of the solar collector was experimentally measured. The results showed that the optical efficiency of dual-axis tracking was 0.813, and its yearly average value was 14.3% and 40.9% higher than that of the east-west tracking mode and the north-south tracking mode, respectively. Further, from the results of the experiment, it was concluded that the optical efficiency was affected significantly by elevation angle tracking errors, which should be kept below 0.6°. High optical efficiency could be attained using the dual-axis tracking mode even when the tracking precision of one axis was degraded. The real-time instantaneous thermal efficiency of the collector reached 0.775. In addition, the linearity of the normalized efficiency was favorable. The curve of the calculated thermal efficiency agreed well with the normalized instantaneous efficiency curve derived from the experimental data, and the maximum difference between them was 10.3%. This type of solar collector should be applied in middle-scale thermal collection systems.
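
    As a rough illustration of why elevation tracking error degrades optical efficiency so quickly, the toy model below shifts the focal line laterally by f·tan(ε) and approximates the flux profile across the receiver tube as Gaussian. This is not the paper's TracePro model; every number (focal length, tube radius, beam spread) is hypothetical.

    ```python
    from math import erf, sqrt, tan, radians

    F = 1.0          # focal length [m] (hypothetical)
    R_TUBE = 0.035   # receiver tube radius [m] (hypothetical)
    SIGMA = 0.012    # Gaussian beam spread at focus [m], sun size + slope errors (hypothetical)

    def phi(x):
        """Standard normal CDF."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def intercept_factor(eps_deg):
        """Fraction of reflected flux hitting the tube for tracking error eps."""
        s = F * tan(radians(eps_deg))  # lateral shift of the focal line
        return phi((R_TUBE - s) / SIGMA) - phi((-R_TUBE - s) / SIGMA)

    for eps in (0.0, 0.3, 0.6, 1.0, 2.0):
        print(f"tracking error {eps:.1f} deg -> intercept {intercept_factor(eps):.3f}")
    ```

    Even in this crude model the intercept factor falls off steeply once the focal-line shift approaches the tube radius, which is qualitatively consistent with the sub-0.6° requirement reported above.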

  19. An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients

    NASA Astrophysics Data System (ADS)

    Moser, Steven; Lee, Peter; Podoleanu, Adrian

    2015-04-01

    Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them and the large inverse matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor, and its real-time implementation on an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of the internal word length requirements of the processor elements shows that 24-bit precision in pre-calculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of <1%. The design has been synthesized on a Xilinx Spartan-6 XC6SLX45 FPGA. The resource utilisation on this device is <3% of slice registers, <15% of slice LUTs, and approximately 48% of available DSP blocks, independent of the Shack-Hartmann grid size. Block RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
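
    The heart of the method, precomputing the (pseudo)inverse of the Zernike-gradient matrix so that the per-frame work reduces to a matrix-vector product, can be prototyped off-FPGA in a few lines. The sketch below uses only three low-order Noll modes (tip, tilt, defocus) on an assumed 8×8 grid; the paper's symmetry exploitation and fixed-point word-length analysis are not reproduced here.

    ```python
    import numpy as np

    # Subaperture centers of a hypothetical 8x8 Shack-Hartmann grid on the unit pupil.
    n = 8
    c = (np.arange(n) + 0.5) / n * 2 - 1  # centers in [-1, 1]
    X, Y = np.meshgrid(c, c)
    x, y = X.ravel(), Y.ravel()

    # Analytic x/y partial derivatives of three Noll Zernike modes:
    # Z2 (tip, 2x), Z3 (tilt, 2y), Z4 (defocus, sqrt(3)*(2r^2 - 1)).
    dZdx = np.column_stack([2 * np.ones_like(x), np.zeros_like(x), 4 * np.sqrt(3) * x])
    dZdy = np.column_stack([np.zeros_like(y), 2 * np.ones_like(y), 4 * np.sqrt(3) * y])
    G = np.vstack([dZdx, dZdy])   # (2N x 3) gradient matrix

    G_pinv = np.linalg.pinv(G)    # precomputed once; the FPGA stores subsections of this

    # Per frame: stack measured x- and y-slopes and do one matrix-vector product.
    true_coeffs = np.array([0.5, -0.2, 0.1])
    slopes = G @ true_coeffs + 0.01 * np.random.default_rng(1).standard_normal(2 * n * n)
    est = G_pinv @ slopes
    print("estimated Zernike coefficients:", np.round(est, 3))
    ```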

  20. Correlation of anomalous write error rates and ferromagnetic resonance spectrum in spin-transfer-torque-magnetic-random-access-memory devices containing in-plane free layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evarts, Eric R.; Rippard, William H.; Pufall, Matthew R.

    In a small fraction of magnetic-tunnel-junction-based magnetic random-access memory devices with in-plane free layers, the write-error rates (WERs) are higher than expected on the basis of the macrospin or quasi-uniform magnetization reversal models. In devices with increased WERs, the product of effective resistance and area, tunneling magnetoresistance, and coercivity do not deviate from typical device properties. However, the field-swept, spin-torque, ferromagnetic resonance (FS-ST-FMR) spectra with an applied DC bias current deviate significantly for such devices. With a DC bias of 300 mV (producing 9.9 × 10⁶ A/cm²) or greater, these anomalous devices show an increase in the fraction of the power present in FS-ST-FMR modes corresponding to higher-order excitations of the free-layer magnetization. As much as 70% of the power is contained in higher-order modes, compared to ≈20% in typical devices. Additionally, a shift in the uniform-mode resonant field that is correlated with the magnitude of the WER anomaly is detected at DC biases greater than 300 mV. These differences in the anomalous devices indicate a change in the micromagnetic resonant mode structure at high applied bias.
