Sample records for reduce operator error

  1. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  2. Visual feedback system to reduce errors while operating roof bolting machines

    PubMed Central

    Steiner, Lisa J.; Burgess-Limerick, Robin; Eiter, Brianna; Porter, William; Matty, Tim

    2015-01-01

    Problem: Operators of roof bolting machines in underground coal mines work in confined spaces and in very close proximity to the moving equipment. Errors in the operation of these machines can have serious consequences, and the design of the equipment interface has a critical role in reducing the probability of such errors. Methods: An experiment was conducted to explore coding and directional compatibility on actual roof bolting equipment and to determine the feasibility of a visual feedback system to alert operators of critical movements, and also to alert other workers in close proximity to the equipment to the pending movement of the machine. Results: The quantitative results of the study confirmed the potential for both selection errors and direction errors to be made, particularly during training. Subjective data confirmed a potential benefit of providing visual feedback of the intended operations and movements of the equipment. Impact: This research may influence the design of these and other similar control systems and provides evidence for the use of warning systems to improve operator situational awareness. PMID:23398703

  3. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze differences in pre- and post-intervention incidence rates of nursing errors between the two groups. The incidence rate of nursing errors in the experimental group decreased significantly from the pre-test level, from 28.4% to 15.7%. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "management of documents", and "environmental management") in the experimental group, while it decreased, though not significantly, in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors openly and to learn from them. ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be combined with risk-management efforts across the whole health care system.
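
    The group comparison described above can be reproduced with a standard rank-based test. Below is a minimal sketch using scipy; the per-nurse incidence changes are hypothetical placeholders, not the study's raw data.

```python
# Minimal sketch of the study's statistical comparison: a Mann-Whitney U test
# on the change in nursing-error incidence between the experimental (ERPP)
# and control groups. All values are hypothetical placeholders.
from scipy.stats import mannwhitneyu

# Per-nurse change in error incidence rate (post - pre), percentage points
experimental = [-12.7, -10.3, -14.1, -9.8, -13.5, -11.2]   # ERPP group
control      = [-2.1, -0.5, -3.4, -1.8, 0.2, -1.1]         # ordinary reporting

u_stat, p_value = mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```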

  4. Comprehensive Anti-error Study on Power Grid Dispatching Based on Regional Regulation and Integration

    NASA Astrophysics Data System (ADS)

    Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang

    2018-01-01

    With the growth of power system capacity and the trend toward larger generating units and higher voltages, dispatching operations are becoming more frequent and complicated, and the probability of operating errors increases. To address the lack of anti-error functions, the single scheduling function, and the low working efficiency of the technical support systems used in regional regulation and integration, this paper proposes an integrated, cloud-computing-based architecture for anti-error checking in power grid dispatching. An integrated error-prevention system spanning the Energy Management System (EMS) and the Operation Management System (OMS) is also constructed. The system architecture has good scalability and adaptability; it can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the capability for regional regulation and anti-error checking, with broad development prospects.

  5. Data mining of air traffic control operational errors

    DOT National Transportation Integrated Search

    2006-01-01

    In this paper we present the results of applying data mining techniques to identify patterns and anomalies in air traffic control operational errors (OEs). Reducing the OE rate is of high importance and remains a challenge in the aviation saf...

  6. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.

  7. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  8. Reducing major rule violations in commuter rail operations : the role of distraction and attentional errors

    DOT National Transportation Integrated Search

    2012-10-22

    Recent accidents in commuter rail operations and analyses of rule violations have highlighted the need for better understanding of the contributory role of distraction and attentional errors. Distracted driving has been thoroughly studied in rece...

  9. Servo control booster system for minimizing following error

    DOEpatents

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
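
    A minimal sketch of the by-exception second loop described above, using a proportional main loop and a toy first-order plant; the gains, plant model, and resolution increment are hypothetical, not the patent's values.

```python
# Hedged sketch of the "by exception" dual-loop idea: a conventional
# proportional servo loop runs continuously, and a second position-correction
# loop engages only while the command-to-response error reaches the feedback
# resolution increment DELTA_S_R. Plant model and gains are hypothetical.
DELTA_S_R = 0.01          # position feedback resolution least increment
KP_MAIN, KP_BOOST = 2.0, 8.0
DT = 0.001                # clocking interval (s)

def servo_step(command: float, position: float) -> float:
    """Return actuator drive for one clock tick."""
    error = command - position
    drive = KP_MAIN * error                 # conventional control loop
    if abs(error) >= DELTA_S_R:             # exception: engage second loop
        drive += KP_BOOST * error           # precise position correction
    return drive                            # second loop transparent otherwise

# Toy first-order plant integration
position = 0.0
for tick in range(2000):
    position += servo_step(1.0, position) * DT
print(f"final error = {1.0 - position:.5f}")
```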

  10. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

    Background: Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) adaptive BMI decoding algorithms can be updated to make fewer errors in the future. Methodology/Principal Findings: Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance: The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315

  11. Analysis of a range estimator which uses MLS angle measurements

    NASA Technical Reports Server (NTRS)

    Downing, David R.; Linse, Dennis

    1987-01-01

    A concept that uses the azimuth signal from a microwave landing system (MLS) combined with onboard airspeed and heading data to estimate the horizontal range to the runway threshold is investigated. The absolute range error is evaluated for trajectories typical of General Aviation (GA) and commercial airline operations (CAO). These include constant intercept angles for GA and CAO, and complex curved trajectories for CAO. It is found that, for GA operations, range errors of 4000 to 6000 feet at entry into MLS coverage, reducing to 1000 feet at runway centerline intercept, are possible. For CAO, errors of 2000 feet at entry into MLS coverage, reducing to 300 feet at runway centerline interception, are possible.

  12. Servo control booster system for minimizing following error

    DOEpatents

    Wise, William L.

    1985-01-01

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error ≥ ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.

  13. Operator Variability in Scan Positioning is a Major Component of HR-pQCT Precision Error and is Reduced by Standardized Training

    PubMed Central

    Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.

    2017-01-01

    Introduction: HR-pQCT is increasingly used to assess bone quality, fracture risk, and anti-fracture interventions. The contribution of the operator has not been adequately accounted for in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested whether custom training software led to superior reproducibility in new operators compared to experienced operators. Methods: To evaluate the operator contribution to in vivo measurement precision, we compared precision errors calculated in 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner’s software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results: In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). Newly trained operators achieved intra-operator reproducibility comparable to experienced operators, and lower inter-operator reproducibility (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion: Operator reference line positioning contributes significantly to in vivo measurement precision and is significantly greater for multi-operator datasets. Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931

  14. Reducing patient identification errors related to glucose point-of-care testing.

    PubMed

    Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron

    2011-01-01

    Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
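
    The matching step described above is straightforward to sketch. The snippet below is an illustrative reconstruction, not the vendor's actual interface software; the registry contents, field names, and identifiers are hypothetical.

```python
# Illustrative sketch of the described workflow: a glucose result is released
# to the EMR only when the patient ID scanned at the meter matches an active
# registration from the ADT (admission/discharge/transfer) feed.
from dataclasses import dataclass

@dataclass
class GlucoseResult:
    patient_id: str      # nine-digit account number from wristband barcode
    glucose_mg_dl: float

# Hypothetical snapshot of the ADT feed
adt_registry = {"123456789": "ward 5A", "987654321": "ICU 2"}

def release_result(result: GlucoseResult) -> bool:
    """Forward to the EMR only on an ID match; otherwise hold for resolution."""
    if result.patient_id in adt_registry:
        print(f"EMR <- {result.glucose_mg_dl} mg/dL for {result.patient_id}")
        return True
    print(f"HOLD: unmatched ID {result.patient_id}; resolve before testing")
    return False

release_result(GlucoseResult("123456789", 104.0))  # matched, released
release_result(GlucoseResult("111111111", 98.0))   # unmatched, held
```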

  15. Reducing patient identification errors related to glucose point-of-care testing

    PubMed Central

    Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron

    2011-01-01

    Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT. PMID:21633490

  16. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISAs) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISAs) is presented here. The resulting clusters described the underlying relationships among the ISAs. Initial models of human error in flight mission operations are presented. Next, the Voyager ISAs will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
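
    The Pareto step amounts to ranking cause categories by their share of reports. In the sketch below the totals are chosen so that human error accounts for 38% of 1580 ISAs, as in the abstract; the remaining category names and counts are hypothetical.

```python
# Minimal sketch of a Pareto analysis: tally ISA reports by cause category
# and rank them by share of the total. Only the 1580 total and the 38%
# human-error share come from the abstract; the rest is made up.
from collections import Counter

isa_causes = Counter({"human error": 600, "software fault": 450,
                      "hardware fault": 330, "procedure gap": 200})
total = sum(isa_causes.values())   # 1580

cumulative = 0.0
for cause, count in isa_causes.most_common():
    share = 100.0 * count / total
    cumulative += share
    print(f"{cause:15s} {count:4d}  {share:5.1f}%  cum {cumulative:5.1f}%")
```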

  17. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)

  18. Rotational wind indicator enhances control of rotated displays

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Pavel, Misha

    1991-01-01

    Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90 deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108 deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do when flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent when compared to a no-enhancement control condition. Moreover, it produces an adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.

  19. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
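
    The scheme described above is a PD-type iterative learning control law: after each repetition, the command is corrected using that cycle's tracking error and error rate. Below is a minimal sketch on a toy first-order servo model; the plant, gains, and trajectory are hypothetical, not the paper's PUMA 560 model.

```python
# Hedged sketch of off-line iterative learning control: between repetitions,
# the command is updated with both the error and its rate, as the abstract
# describes. Plant, gains, and reference are illustrative only.
import numpy as np

DT, N = 0.01, 200
t = np.arange(N) * DT
reference = np.sin(np.pi * t)            # desired trajectory

def run_cycle(u):
    """Toy first-order servo: one repetition of the task with command u."""
    y = np.zeros(N)
    for k in range(N - 1):
        y[k + 1] = y[k] + DT * (u[k] - y[k]) / 0.05   # time constant 50 ms
    return y

u = reference.copy()                      # initial command = reference
KP, KD = 0.5, 0.02                        # learning gains (hypothetical)
for cycle in range(10):
    y = run_cycle(u)
    e = reference - y
    u = u + KP * e + KD * np.gradient(e, DT)   # include error rate
    print(f"cycle {cycle}: max |error| = {np.max(np.abs(e)):.4f}")
```

    Consistent with the abstract, dropping the rate term (KD = 0) slows convergence in this toy model, while the rate term speeds it up.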

  20. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibration is found to be feasible only through the use of altimeter passes at very high elevation as seen from a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  1. Permanent-File-Validation Utility Computer Program

    NASA Technical Reports Server (NTRS)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.

  2. Using wide area differential GPS to improve total system error for precision flight operations

    NASA Astrophysics Data System (ADS)

    Alter, Keith Warren

    Total System Error (TSE) refers to an aircraft's total deviation from the desired flight path. TSE can be divided into Navigational System Error (NSE), the error attributable to the aircraft's navigation system, and Flight Technical Error (FTE), the error attributable to pilot or autopilot control. Improvement in either NSE or FTE reduces TSE and leads to the capability to fly more precise flight trajectories. The Federal Aviation Administration's Wide Area Augmentation System (WAAS) became operational for non-safety critical applications in 2000 and will become operational for safety critical applications in 2002. This navigation service will provide precise 3-D positioning (demonstrated to better than 5 meters horizontal and vertical accuracy) for civil aircraft in the United States. Perhaps more importantly, this navigation system, which provides continuous operation across large regions, enables new flight instrumentation concepts which allow pilots to fly aircraft significantly more precisely, both for straight and curved flight paths. This research investigates the capabilities of some of these new concepts, including the Highway-In-The-Sky (HITS) display, which not only improves FTE but also reduces pilot workload when compared to conventional flight instrumentation. Augmentation to the HITS display, including perspective terrain and terrain alerting, improves pilot situational awareness. Flight test results from demonstrations in Juneau, AK, and Lake Tahoe, CA, provide evidence of the overall feasibility of integrated, low-cost flight navigation systems based on these concepts. These systems, requiring no more computational power than current-generation low-end desktop computers, have immediate applicability to general aviation flight from Cessnas to business jets and can support safer and ultimately more economical flight operations. Commercial airlines may also, over time, benefit from these new technologies.
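
    The abstract decomposes TSE into navigational and flight technical components. In RNP-style analyses these components are commonly treated as independent and combined by root-sum-square; the relation below is that conventional model, stated as an assumption rather than a formula from the abstract.

```latex
% Conventional RSS combination of independent error components (assumed model)
\mathrm{TSE} = \sqrt{\mathrm{NSE}^2 + \mathrm{FTE}^2}
```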

  3. Risk-Aware Planetary Rover Operation: Autonomous Terrain Classification and Path Planning

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Fuchs, Thomas J.; Steffy, Amanda; Maimone, Mark; Yen, Jeng

    2015-01-01

    Identifying and avoiding terrain hazards (e.g., soft soil and pointy embedded rocks) are crucial for the safety of planetary rovers. This paper presents a newly developed ground-based Mars rover operation tool that mitigates terrain risks by automatically identifying hazards on the terrain, evaluating their risks, and suggesting to operators safe path options that avoid potential risks while achieving specified goals. The tool will bring benefits to rover operations by reducing operation cost, reducing the cognitive load of rover operators, preventing human errors, and, most importantly, significantly reducing the risk of the loss of rovers.

  4. Reduced error signalling in medication-naive children with ADHD: associations with behavioural variability and post-error adaptations

    PubMed Central

    Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom

    2016-01-01

    Background: We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods: We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results: We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences in interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations: Our study was limited by the modest sample size and imperfect matching across groups. Conclusion: Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptations. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332

  5. Performance Data Errors in Air Carrier Operations: Causes and Countermeasures

    NASA Technical Reports Server (NTRS)

    Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.

    2012-01-01

    Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider four recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight operations, or the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.

  6. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows for the stochastic optimization problem to be solved directly, without using sampling-based approaches, and for sizing the storage to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.
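
    The analytic form of the probabilistic constraints is not given in the abstract, but for a Gaussian forecast-error model the standard chance-constraint reformulation is deterministic and closed-form. A minimal sketch under that assumption, with placeholder numbers:

```python
# Hedged sketch of the analytic chance-constraint idea: if the wind forecast
# error is Gaussian with standard deviation sigma, requiring
#   P(gen + wind_forecast + err >= demand) >= 1 - eps
# reduces to a deterministic margin via the inverse normal CDF.
# All numbers below are hypothetical placeholders.
from scipy.stats import norm

def required_generation(demand, wind_forecast, sigma, eps=0.05):
    """Deterministic equivalent of the probabilistic adequacy constraint."""
    margin = sigma * norm.ppf(1.0 - eps)      # z-score for confidence 1-eps
    return demand - wind_forecast + margin

print(required_generation(demand=100.0, wind_forecast=30.0, sigma=5.0))
# -> 78.22: generation plus storage must cover this despite forecast error
```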

  7. Spacecraft methods and structures with enhanced attitude control that facilitates gyroscope substitutions

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)

    2004-01-01

    Methods and structures are provided that enhance attitude control during gyroscope substitutions by ensuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R to reduce transients during the gyroscope substitutions.
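
    A minimal sketch of the covariance-substitution idea, assuming a linear Kalman filter with identity dynamics; the dimensions, noise values, and switch-back schedule are hypothetical, not the patent's.

```python
# Hedged sketch: temporarily inflating the Kalman filter's process-noise
# covariance Q after a gyroscope substitution up-weights recent attitude
# measurements and speeds re-convergence of the attitude and bias estimates.
import numpy as np

Q_OPERATIONAL = np.diag([1e-10, 1e-10, 1e-10])   # nominal process noise
Q_INTERIM     = np.diag([1e-6, 1e-6, 1e-6])      # inflated during the swap
R_MEAS        = np.diag([1e-6, 1e-6, 1e-6])      # attitude-sensor noise

def kalman_update(x, P, z, H, Q, R):
    """One predict+update step of a linear Kalman filter (F = I here)."""
    P = P + Q                                    # predict (identity dynamics)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # gain grows with larger Q
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P, H = np.zeros(3), np.eye(3) * 1e-8, np.eye(3)
z = np.array([1e-3, -2e-3, 5e-4])                # post-swap attitude residual
for step in range(5):
    Q = Q_INTERIM if step < 3 else Q_OPERATIONAL  # revert after the transient
    x, P = kalman_update(x, P, z, H, Q, R_MEAS)
print(x)  # estimate pulled quickly toward the measured attitude
```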

  8. Cost-effectiveness of the streamflow-gaging program in Wyoming

    USGS Publications Warehouse

    Druse, S.A.; Wahl, K.L.

    1988-01-01

    This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error/station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)

  9. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  10. Method of evaluating, expanding, and collapsing connectivity regions within dynamic systems

    DOEpatents

    Bailey, David A [Schenectady, NY

    2004-11-16

    An automated process defines and maintains connectivity regions within a dynamic network. The automated process requires an initial input of a network component around which a connectivity region will be defined. The process automatically and autonomously generates a region around the initial input, stores the region's definition, and monitors the network for a change. Upon detecting a change in the network, the process evaluates its effect and, if necessary, adjusts and redefines the regions to accommodate the change. Only those regions of the network affected by the change will be updated. This process eliminates the need for an operator to manually evaluate connectivity regions within a network. Since the automated process maintains the network, the reliance on an operator is minimized, reducing the potential for operator error. This combination of automated region maintenance and reduced operator reliance results in a reduction of overall error.

  11. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  12. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  13. Lessons from aviation - the role of checklists in minimally invasive cardiac surgery.

    PubMed

    Hussain, S; Adams, C; Cleland, A; Jones, P M; Walsh, G; Kiaii, B

    2016-01-01

    We describe an adverse event during minimally invasive cardiac surgery that resulted in a multi-disciplinary review of intra-operative errors and the creation of a procedural checklist. This checklist aims to prevent errors of omission and communication failures that result in increased morbidity and mortality. We discuss the application of the aviation-led "threats and errors model" to medical practice and the role of checklists and other strategies aimed at reducing medical errors. © The Author(s) 2015.

  14. Task Decomposition Model for Dispatchers in Dynamic Scheduling of Demand Responsive Transit Systems

    DOT National Transportation Integrated Search

    2000-06-01

    Since the passage of the ADA, the demand for paratransit service has been steadily increasing. Paratransit companies are relying on computer automation to streamline dispatch operations, increase productivity, and reduce operator stress and error. Little resear...

  15. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors and still achieve high performance. A variant of a blocked sparse matrix algebra to achieve strict error control with good performance is proposed. The presented idea is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices that are dropped. The decision to remove a certain submatrix is based on the contribution the removal would cause to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox allows for achieving optimal performance by performing only necessary operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
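
    A minimal sketch of the accumulated-error criterion described above, assuming Frobenius-norm error control over a block partitioning; the block sizes, decay pattern, and threshold are hypothetical.

```python
# Hedged sketch of the proposed criterion: instead of dropping every submatrix
# below a fixed magnitude, drop the set of smallest blocks whose *accumulated*
# contribution to the Frobenius norm of the error stays within the threshold.
import numpy as np

def truncate_blocks(blocks, tau):
    """blocks: dict {(i, j): ndarray}; drop blocks while ||error||_F <= tau."""
    order = sorted(blocks, key=lambda ij: np.linalg.norm(blocks[ij]))
    dropped, err_sq = [], 0.0
    for ij in order:
        contribution = np.linalg.norm(blocks[ij]) ** 2
        if err_sq + contribution > tau ** 2:     # would exceed error budget
            break
        err_sq += contribution                   # accumulate dropped norm
        dropped.append(ij)
    return {ij: b for ij, b in blocks.items() if ij not in dropped}

# Toy block matrix whose blocks decay away from the diagonal
rng = np.random.default_rng(0)
blocks = {(i, j): rng.normal(scale=10.0 ** -abs(i - j), size=(4, 4))
          for i in range(6) for j in range(6)}
kept = truncate_blocks(blocks, tau=1e-3)
print(f"kept {len(kept)} of {len(blocks)} blocks")
```

    Because the dropped blocks occupy disjoint positions, the squared Frobenius norm of the total error is exactly the sum of the squared block norms, which is what the running accumulator tracks.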

  16. Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Erzberger, Hainz

    2003-01-01

    An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analyzed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
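
    The dead-reckoning check is easy to sketch: project each track along its current velocity and test separation over a look-ahead window. The parameters below are illustrative, not the Host Computer System's actual values.

```python
# Hedged sketch of a dead-reckoning conflict check: project each aircraft
# forward along its current velocity and flag a conflict if the horizontal
# separation falls below the standard. Window and standard are illustrative.
import numpy as np

SEPARATION_NM = 5.0        # en-route horizontal separation standard
LOOKAHEAD_S = 300          # projection window (seconds)

def dead_reckon_conflict(p1, v1, p2, v2, dt=10):
    """Positions in NM, velocities in NM/s; check every dt seconds."""
    for t in range(0, LOOKAHEAD_S + 1, dt):
        d = (np.asarray(p1) + t * np.asarray(v1)) - \
            (np.asarray(p2) + t * np.asarray(v2))
        if np.hypot(d[0], d[1]) < SEPARATION_NM:
            return t                  # first predicted loss of separation
    return None

# Two aircraft converging head-on along the x-axis at 480 kt (~0.133 NM/s)
t_conflict = dead_reckon_conflict((0, 0), (0.133, 0), (60, 0), (-0.133, 0))
print(f"predicted loss of separation in {t_conflict} s")
```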

  17. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  18. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors in GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced, from 0.66 K to 0.44 K, and the MAPE is 1.3%. PMID:22164030
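
    A minimal sketch of the correction idea with a small multilayer perceptron (a modern stand-in for the back-propagation network); the data below are synthetic placeholders for the GOES/buoy match-ups, and all coefficients are fabricated for illustration.

```python
# Hedged sketch: learn the retrieval error of a biased satellite SST from the
# factors the study identified (air temperature, relative humidity, wind-speed
# variation), then apply the learned correction. Entirely synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 2000
air_t = rng.normal(300.0, 2.0, n)          # air temperature (K)
rel_h = rng.uniform(60.0, 95.0, n)         # relative humidity (%)
wind_var = rng.exponential(1.0, n)         # wind-speed variation (m/s)
true_sst = rng.normal(302.0, 1.0, n)       # "buoy" ground truth (K)
raw_sst = (true_sst + 0.1 * (air_t - 300.0) - 0.01 * (rel_h - 75.0)
           + 0.2 * wind_var + rng.normal(0.0, 0.2, n))   # biased retrieval

X = np.column_stack([raw_sst, air_t, rel_h, wind_var])
y = true_sst - raw_sst                      # retrieval error to be learned
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X[:1500], y[:1500])

corrected = raw_sst[1500:] + model.predict(X[1500:])
rmse_raw = np.sqrt(mean_squared_error(true_sst[1500:], raw_sst[1500:]))
rmse_cor = np.sqrt(mean_squared_error(true_sst[1500:], corrected))
print(f"RMSE: raw {rmse_raw:.2f} K -> corrected {rmse_cor:.2f} K")
```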

  19. Implementation of Testing Equipment for Asphalt Materials : Tech Summary

    DOT National Transportation Integrated Search

    2009-05-01

    Three new automated methods for related asphalt material and mixture testing were evaluated under this study. Each of these devices is designed to reduce testing time considerably and reduce operator error by automating the testing process. The Thery...

  20. Implementation of testing equipment for asphalt materials : tech summary.

    DOT National Transportation Integrated Search

    2009-05-01

    Three new automated methods for related asphalt material and mixture testing were evaluated under this study. Each of these devices is designed to reduce testing time considerably and reduce operator error by automating the testing process. The T...

  1. Teamwork and error in the operating room: analysis of skills and roles.

    PubMed

    Catchpole, K; Mishra, A; Handa, A; McCulloch, P

    2008-04-01

    To analyze the effects of surgical, anesthetic, and nursing teamwork skills on technical outcomes. The value of team skills in reducing adverse events in the operating room is presently receiving considerable attention. Current work has not yet identified in detail how the teamwork and communication skills of surgeons, anesthetists, and nurses affect the course of an operation. Twenty-six laparoscopic cholecystectomies and 22 carotid endarterectomies were studied using direct observation methods. For each operation, teams' skills were scored for the whole team, and for nursing, surgical, and anesthetic subteams on 4 dimensions (leadership and management [LM]; teamwork and cooperation; problem solving and decision making; and situation awareness). Operating time, errors in surgical technique, and other procedural problems and errors were measured as outcome parameters for each operation. The relationships between teamwork scores and these outcome parameters within each operation were examined using analysis of variance and linear regression. Surgical (F(2,42) = 3.32, P = 0.046) and anesthetic (F(2,42) = 3.26, P = 0.048) LM had significant but opposite relationships with operating time in each operation: operating time increased significantly with higher anesthetic but decreased with higher surgical LM scores. Errors in surgical technique had a strong association with surgical situation awareness (F(2,42) = 7.93, P < 0.001) in each operation. Other procedural problems and errors were related to the intraoperative LM skills of the nurses (F(5,1) = 3.96, P = 0.027). Detailed analysis of team interactions and dimensions is feasible and valuable, yielding important insights into relationships between nontechnical skills, technical performance, and operative duration. These results support the concept that interventions designed to improve teamwork and communication may have beneficial effects on technical performance and patient outcome.

  2. Fatigue proofing: The role of protective behaviours in mediating fatigue-related risk in a defence aviation environment.

    PubMed

    Dawson, Drew; Cleggett, Courtney; Thompson, Kirrilly; Thomas, Matthew J W

    2017-02-01

    In the military or emergency services, operational requirements and/or community expectations often preclude formal prescriptive working time arrangements as a practical means of reducing fatigue-related risk. In these environments, workers sometimes employ adaptive or protective behaviours informally to reduce the risk (i.e. likelihood or consequence) associated with a fatigue-related error. These informal behaviours enable employees to reduce risk while continuing to work when fatigued. In this study, we documented the use of informal protective behaviours in a group of defence aviation personnel including flight crews. Semi-structured interviews were conducted to determine whether and which protective behaviours were used to mitigate fatigue-related error. The 18 participants were from aviation-specific trades and included aircrew (pilots and aircrewmen) and aviation maintenance personnel (aeronautical engineers and maintenance personnel). Participants identified 147 ways in which they and/or others act to reduce the likelihood or consequence of a fatigue-related error. These formed seven categories of fatigue-reduction strategies. The two most novel categories are discussed in this paper: task-related and behaviour-based strategies. Broadly speaking, these results indicate that fatigued military flight and maintenance crews use protective 'fatigue-proofing' behaviours to reduce the likelihood and/or consequence of fatigue-related error and are aware of the potential benefits. It is also important to note that these behaviours are not typically part of the formal safety management system. Rather, they have evolved spontaneously as part of the culture around protecting team performance under adverse operating conditions. When compared with previous similar studies, aviation personnel were more readily able to understand the idea of fatigue proofing than those from a fire-fighting background. These differences were thought to reflect different cultural attitudes toward error and formal training using principles of Crew Resource Management and Threat and Error Management. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Error-Tolerant Quasi-Paraboloidal Solar Concentrator

    NASA Technical Reports Server (NTRS)

    Wagner, Howard A.

    1988-01-01

    Scalloping reflector surface reduces sensitivity to manufacturing and aiming errors. Contrary to intuition, most effective shape of concentrating reflector for solar heat engine is not perfect paraboloid. According to design studies for Space Station solar concentrator, scalloped, nonimaging approximation to perfect paraboloid offers better overall performance in view of finite apparent size of Sun, imperfections of real equipment, and cost of accommodating these complexities. Scalloped-reflector concept also applied to improve performance while reducing cost of manufacturing and operation of terrestrial solar concentrator.

  4. Accuracy of image-guided surgical navigation using near infrared (NIR) optical tracking

    NASA Astrophysics Data System (ADS)

    Jakubovic, Raphael; Farooq, Hamza; Alarcon, Joseph; Yang, Victor X. D.

    2015-03-01

    Spinal surgery is particularly challenging for surgeons, requiring a high level of expertise and precision without being able to see beyond the surface of the bone. Accurate insertion of pedicle screws is critical considering perforation of the pedicle can result in profound clinical consequences including spinal cord, nerve root, or arterial injury, neurological deficits, chronic pain, and/or failed back syndrome. Various navigation systems have been designed to guide pedicle screw fixation. Computed tomography (CT)-based image guided navigation systems increase the accuracy of screw placement by allowing for 3-dimensional visualization of the spinal anatomy. Current localization techniques require extensive preparation and introduce spatial deviations. Use of near infrared (NIR) optical tracking allows for real-time navigation of the surgery by utilizing spectral domain multiplexing of light, greatly enhancing the surgeon's situation awareness in the operating room. While the incidence of pedicle screw perforation and complications has been significantly reduced with the introduction of modern navigational technologies, some error exists. Several parameters have been suggested, including fiducial localization and registration error, target registration error, and angular deviation. However, many of these techniques quantify error using the pre-operative CT and an intra-operative screenshot without assessing the true screw trajectory. In this study we quantified in-vivo error by comparing the true screw trajectory to the intra-operative trajectory. Pre- and post-operative CT as well as intra-operative screenshots were obtained for a cohort of patients undergoing spinal surgery. We quantified entry point error and angular deviation in the axial and sagittal planes.
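
    A minimal sketch of the two error metrics named above: entry-point offset and angular deviation between a planned and an actual pedicle-screw trajectory, each given as an entry point plus a direction vector. All values are hypothetical illustrations, not the study's data.

```python
# Hedged sketch: entry-point error and angular deviation between planned
# and actual screw trajectories.
import numpy as np

def trajectory_error(entry_plan, dir_plan, entry_true, dir_true):
    """Return (entry-point error [mm], angular deviation [deg])."""
    entry_err = np.linalg.norm(np.asarray(entry_true) - np.asarray(entry_plan))
    u = np.asarray(dir_plan, float) / np.linalg.norm(dir_plan)
    v = np.asarray(dir_true, float) / np.linalg.norm(dir_true)
    angle = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return entry_err, angle

e, a = trajectory_error([0, 0, 0], [0, 0.2, 1.0],
                        [1.2, 0.4, 0.3], [0.05, 0.25, 1.0])
print(f"entry-point error {e:.1f} mm, angular deviation {a:.1f} deg")
```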

  5. MPI Runtime Error Detection with MUST: Advances in Deadlock Detection

    DOE PAGES

    Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...

    2013-01-01

    The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
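
    For illustration, a classic head-to-head deadlock of the kind graph-based runtime tools detect, written with mpi4py (an assumption for this sketch; MUST itself targets C/C++/Fortran MPI codes). Both ranks block in a synchronous send, so the wait-for graph contains a cycle. Note the script hangs by design.

```python
# Hedged sketch: a deliberately deadlocking MPI program.
# Run with: mpiexec -n 2 python deadlock.py  (requires mpi4py)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # assumes exactly 2 ranks

buf = bytearray(b"x" * 1024)
# Ssend blocks until the matching receive starts; with both ranks sending
# first, the wait-for graph contains the cycle 0 -> 1 -> 0: a deadlock
# that a tool like MUST reports instead of letting the job hang.
comm.Ssend([buf, MPI.CHAR], dest=peer)                # never completes
comm.Recv([bytearray(1024), MPI.CHAR], source=peer)   # never reached
```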

  6. Cost-effectiveness of the Federal stream-gaging program in Virginia

    USGS Publications Warehouse

    Carpenter, D.H.

    1985-01-01

    Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  7. Threat and error management for anesthesiologists: a predictive risk taxonomy

    PubMed Central

    Ruskin, Keith J.; Stiegler, Marjorie P.; Park, Kellie; Guffey, Patrick; Kurup, Viji; Chidester, Thomas

    2015-01-01

    Purpose of review Patient care in the operating room is a dynamic interaction that requires cooperation among team members and reliance upon sophisticated technology. Most human factors research in medicine has been focused on analyzing errors and implementing system-wide changes to prevent them from recurring. We describe a set of techniques that has been used successfully by the aviation industry to analyze errors and adverse events and explain how these techniques can be applied to patient care. Recent findings Threat and error management (TEM) describes adverse events in terms of risks or challenges that are present in an operational environment (threats) and the actions of specific personnel that potentiate or exacerbate those threats (errors). TEM is a technique widely used in aviation, and can be adapted for use in a medical setting to predict high-risk situations and prevent errors in the perioperative period. A threat taxonomy is a novel way of classifying and predicting the hazards that can occur in the operating room. TEM can be used to identify error-producing situations, analyze adverse events, and design training scenarios. Summary TEM offers a multifaceted strategy for identifying hazards, reducing errors, and training physicians. A threat taxonomy may improve analysis of critical events with subsequent development of specific interventions, and may also serve as a framework for training programs in risk mitigation. PMID:24113268

  8. 78 FR 20423 - Oil and Gas and Sulphur Operations in the Outer Continental Shelf-Revisions to Safety and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-05

    ... to reduce human error and organizational failure. On September 14, 2011, the Bureau of Ocean Energy... system based on API RP 75, owners and operators would be required to formulate policy and objectives... safety and environmental records, encourage the use of performance-based operating practices, and...

  9. Measuring Seebeck Coefficient

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor)

    2015-01-01

    A high temperature Seebeck coefficient measurement apparatus and method with various features to minimize typical sources of error is described. Common sources of temperature and voltage measurement errors which may impact accurate measurement are identified and reduced. Applying the identified principles, a high temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry is described that operates from room temperature up to 1300 K. These techniques for non-destructive Seebeck coefficient measurements are simple to operate, and are suitable for bulk samples with a broad range of physical types and shapes.
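
    A sketch of the usual data reduction behind such measurements (not the patented apparatus itself): the Seebeck coefficient is the slope of thermoelectric voltage versus temperature difference, fit over many (ΔT, ΔV) pairs so that a constant offset voltage, a common error source, drops out of the estimate. The numbers are hypothetical.

```python
# Hedged sketch: Seebeck coefficient from a linear fit of dV vs dT,
# using the convention S = -dV/dT.
import numpy as np

dT = np.array([1.0, 2.0, 3.0, 4.0, 5.0])               # K, measured gradients
dV = np.array([-205, -418, -612, -829, -1020]) * 1e-6  # V, hypothetical readings

slope, offset = np.polyfit(dT, dV, 1)  # offset absorbs stray thermal EMFs
S = -slope                             # V/K
print(f"Seebeck coefficient: {S * 1e6:.0f} uV/K (offset {offset * 1e6:.0f} uV)")
```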

  10. System safety management: A new discipline

    NASA Technical Reports Server (NTRS)

    Pope, W. C.

    1971-01-01

    The systems theory is discussed in relation to safety management. It is suggested that systems safety management, as a new discipline, holds great promise for reducing operating errors, conserving labor resources, avoiding operating costs due to mistakes, and for improving managerial techniques. It is pointed out that managerial failures or system breakdowns are the basic reasons for human errors and condition defects. In this respect, a recommendation is made that safety engineers stop visualizing the problem only with the individual (supervisor or employee) and see the problem from the systems point of view.

  11. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    PubMed

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
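
    The paper's exact algorithm is not reproduced here, but the generic building block behind such registration schemes is the least-squares estimate of a rotation matrix and translation vector from matched 3-D point pairs (the Kabsch method). A sketch under that assumption:

```python
# Hedged sketch: least-squares rigid registration (Kabsch method),
# the standard way to estimate a rotation matrix R and translation t.
import numpy as np

def rigid_transform(P, Q):
    """Find R, t minimising sum ||R @ P_i + t - Q_i||^2 over matched rows."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

P = np.random.default_rng(2).normal(size=(10, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))  # True [1. 2. 3.]
```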

  12. Estimation of clear-sky insolation using satellite and ground meteorological data

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.; Darnell, W. L.; Gupta, S. K.

    1983-01-01

    Ground based pyranometer measurements were combined with meteorological data from the Tiros N satellite in order to estimate clear-sky insolations at five U.S. sites for five weeks during the spring of 1979. The estimates were used to develop a semi-empirical model of clear-sky insolation for the interpretation of input data from the Tiros Operational Vertical Sounder (TOVS). Using only satellite data, the estimated standard errors in the model were about 2 percent. The introduction of ground based data reduced errors to around 1 percent. It is shown that although the errors in the model were reduced by only 1 percent, TOVS data products are still adequate for estimating clear-sky insolation.

  13. Multiplate Radiation Shields: Investigating Radiational Heating Errors

    NASA Astrophysics Data System (ADS)

    Richardson, Scott James

    1995-01-01

    Multiplate radiation shield errors are examined using the following techniques: (1) analytic heat transfer analysis, (2) optical ray tracing, (3) numerical fluid flow modeling, (4) laboratory testing, (5) wind tunnel testing, and (6) field testing. Guidelines for reducing radiational heating errors are given that are based on knowledge of the temperature sensor to be used, with the shield being chosen to match the sensor design. Small, reflective sensors that are exposed directly to the air stream (not inside a filter as is the case for many temperature and relative humidity probes) should be housed in a shield that provides ample mechanical and rain protection while impeding the air flow as little as possible; protection from radiation sources is of secondary importance. If a sensor does not meet the above criteria (i.e., is large or absorbing), then a standard Gill shield performs reasonably well. A new class of shields, called part-time aspirated multiplate radiation shields, is introduced. This type of shield consists of a multiplate design usually operated in a passive manner but equipped with a fan-forced aspiration capability to be used when necessary (e.g., low wind speed). The fans used here are 12 V DC units that can be operated with a small dedicated solar panel. This feature allows the fan to operate when global solar radiation is high, which is when the largest radiational heating errors usually occur. A prototype shield was constructed and field tested and an example is given in which radiational heating errors were reduced from 2°C to 1.2°C. The fan was run continuously to investigate night-time low wind speed errors and the prototype shield reduced errors from 1.6°C to 0.3°C. Part-time aspirated shields are an inexpensive alternative to fully aspirated shields and represent a good compromise between cost, power consumption, reliability (because they should be no worse than a standard multiplate shield if the fan fails), and accuracy. In addition, it is possible to modify existing passive shields to incorporate part-time aspiration, thus making them even more cost-effective. Finally, a new shield is described that incorporates a large diameter top plate that is designed to shade the lower portion of the shield. This shield increases flow through it by 60%, compared to the Gill design, and it is likely to reduce radiational heating errors, although it has not been tested.

  14. A simulator investigation of the use of digital data link for pilot/ATC communications in a single pilot operation

    NASA Technical Reports Server (NTRS)

    Hinton, David A.; Lohr, Gary W.

    1988-01-01

    Studies have shown that radio communications between pilots and air traffic control contribute to high pilot workload and are subject to various errors. These errors result from congestion on the voice radio channel, and missed and misunderstood messages. The use of digital data link has been proposed as a means of reducing this workload and error rate. A critical factor, however, in determining the potential benefit of data link will be the interface between future data link systems and the operator of those systems, both in the air and on the ground. The purpose of this effort was to evaluate the pilot interface with various levels of data link capability, in simulated general aviation, single-pilot instrument flight rule operations. Results show that the data link reduced demands on pilots' short-term memory, reduced the number of communication transmissions, and permitted the pilots to more easily allocate time to critical cockpit tasks while receiving air traffic control messages. The pilots who participated unanimously indicated a preference for data link communications over voice-only communications. There were, however, situations in which the pilot preferred the use of voice communications, and the ability for pilots to delay processing the data link messages, during high workload events, caused delays in the acknowledgement of messages to air traffic control.

  15. Effects of Cloud on Goddard Lidar Observatory for Wind (GLOW) Performance and Analysis of Associated Errors

    NASA Astrophysics Data System (ADS)

    Bacha, Tulu

    The Goddard Lidar Observatory for Wind (GLOW), a mobile direct-detection Doppler lidar based on molecular backscattering for measurement of wind in the troposphere and lower stratosphere, is operated and its errors characterized. It was operated at the Howard University Beltsville Center for Climate Observation System (BCCOS) side by side with other instruments: the NASA/Langley Research Center Validation Lidar (VALIDAR), the Leosphere WLS70, and other standard wind-sensing instruments. The performance of GLOW is presented for various cloud optical thicknesses, and it is compared to VALIDAR under both clear and cloudy sky conditions. The performance degradation due to the presence of cirrus clouds is quantified by comparing the wind speed error to cloud thickness, expressed in terms of aerosol backscatter ratio (ASR) and cloud optical depth (COD), both determined from the Howard University Raman Lidar (HURL) operating at the same station as GLOW. The wind speed error of GLOW was correlated with COD and ASR; the correlation revealed a weak linear relationship. Finally, the wind speed measurements of GLOW were corrected using this quantitative relation. Using ASR reduced the GLOW wind error from 19% to 8% in a thin cirrus cloud and from 58% to 28% in a relatively thick cloud. After correcting for cloud-induced error, the remaining error is due to shot noise and atmospheric variability. Shot-noise error, the statistical random error of backscattered photons detected by the photomultiplier tube (PMT), can only be minimized by averaging a large number of recorded profiles. The atmospheric backscatter measured by GLOW along its line of sight is also used to analyze error due to atmospheric variability within the measurement volume. GLOW scans in five directions (vertical and at elevation angles of 45° to the north, south, east, and west) to generate wind profiles. The non-uniformity of the atmosphere across these scanning directions contributes to the measurement error of GLOW; it leads to differences in backscattered signal intensity between directions. Taking the ratio of the north (east) to south (west) signals and comparing the statistical differences leads to a weak linear relation between atmospheric variability and line-of-sight wind speed differences. This relation was used to make a correction that reduced the error by about 50%.
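
    A sketch of the correction step described above, with hypothetical numbers: fit a linear relation between wind-speed error and the cloud-thickness proxy (ASR), then subtract the predicted bias from new measurements.

```python
# Hedged sketch: linear bias correction of lidar wind speed against ASR.
import numpy as np

asr = np.array([1.5, 2.0, 3.0, 4.5, 6.0, 8.0])  # aerosol backscatter ratio
err = np.array([0.8, 1.1, 1.9, 2.6, 3.8, 4.9])  # m/s, wind-speed error (toy)
a, b = np.polyfit(asr, err, 1)                  # the "weak linear relation"

def corrected_wind(w_measured, asr_now):
    """Remove the cloud-induced bias predicted by the linear fit."""
    return w_measured - (a * asr_now + b)

print(f"fit: err = {a:.2f}*ASR + {b:.2f}; "
      f"corrected 12.0 m/s at ASR=5 -> {corrected_wind(12.0, 5.0):.1f} m/s")
```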

  16. [From aviation to surgery: the challenge of safety].

    PubMed

    Suva, D; Haller, G; Lübbeke-Wolff, A; Macheret, F; Kindler, V; Hoffmeyer, P

    2011-03-23

    Medical errors result in 44,000 to 98,000 deaths per year in the United States of America. Within the surgical specialties, half of these errors occur in the operating room. The origin of these errors is multifactorial and is generally associated with problems in communication and teamwork. In order to improve safety in the operating room, many hospitals now propose to the medical staff "crew resource management" (CRM) training programs inspired by the aviation industry. This approach favors a better utilization of surgical checklists, improves efficiency during surgical interventions, and reduces patient mortality. In October 2009 we introduced a CRM course within the department of surgery at the Geneva University Hospitals. We present this program as well as the first results following its application.

  17. SAR image formation with azimuth interpolation after azimuth transform

    DOEpatents

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
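
    A toy sketch of the ordering the patent describes: transform the azimuth dimension first, then interpolate the transformed samples onto a uniform rectangular grid, so a simple 1-D interpolator per range line suffices. Real SAR processors are far more involved; the warped axis below is an arbitrary illustration.

```python
# Hedged sketch: azimuth FFT first, then re-gridding by 1-D interpolation.
import numpy as np

phist = np.random.default_rng(3).normal(size=(128, 256))   # range x azimuth
spec = np.fft.fftshift(np.fft.fft(phist, axis=1), axes=1)  # azimuth transform

# Nonuniform post-transform sample positions (hypothetical geometry) are
# re-gridded onto uniform coordinates after the transform.
k_nonuni = np.linspace(-1, 1, spec.shape[1]) ** 3  # warped axis (monotonic)
k_uni = np.linspace(-1, 1, spec.shape[1])
regridded = np.empty_like(spec)
for i, row in enumerate(spec):
    regridded[i] = (np.interp(k_uni, k_nonuni, row.real)
                    + 1j * np.interp(k_uni, k_nonuni, row.imag))
print(regridded.shape)  # rectangular-grid spectrum, ready for 2-D inversion
```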

  18. The Automation-by-Expertise-by-Training Interaction.

    PubMed

    Strauch, Barry

    2017-03-01

    I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance. Transportation accidents that, across a 30-year interval demonstrated identical automation-related operator errors, suggest a need to reexamine traditional views of automation. I review accident investigation reports, regulator studies, and literature on human computer interaction, expertise, and training and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors. Automated systems continue to provide capabilities exceeding operators' need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire necessary automation-related system expertise. Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors. Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.

  19. Long Term Mean Local Time of the Ascending Node Prediction

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2007-01-01

    Significant error has been observed in the long-term prediction of the Mean Local Time of the Ascending Node (MLTAN) for the Aqua spacecraft. This error of approximately 90 seconds over a two-year prediction complicates the planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. It was determined that the source of the prediction error was the lack of a solid Earth tide model in the operational force models. The Love model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. Additionally, it was determined that the resonance between the Sun and the orbit plane of the Sun-synchronous orbit is the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft, reducing the two-year 90-second error to less than 7 seconds.
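
    For context, the standard astrodynamics relation (not the paper's tide model) behind why MLTAN is so sensitive: a Sun-synchronous orbit holds its local time by matching the J2 nodal drift to the Sun's mean motion, so any small unmodeled torque integrates into a secular MLTAN error.

```latex
% Sun-synchronous condition (standard J2 nodal-precession rate):
\dot{\Omega} \;=\; -\frac{3}{2}\, n\, J_2 \left(\frac{R_E}{a\,(1-e^2)}\right)^{2} \cos i
\;\approx\; 0.9856^{\circ}/\text{day}
```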

  20. Cost effectiveness of the US Geological Survey's stream-gaging programs in New Hampshire and Vermont

    USGS Publications Warehouse

    Smath, J.A.; Blackey, F.E.

    1986-01-01

    Data uses and funding sources were identified for the 73 continuous stream gages currently (1984) being operated. Eight stream gages were identified as having insufficient reason to continue their operation. Parts of New Hampshire and Vermont were identified as needing additional hydrologic data. New gages should be established in these regions as funds become available. Alternative methods for providing hydrologic data at the stream gaging stations currently being operated were found to lack the accuracy that is required for their intended use. The current policy for operation of the stream gages requires a net budget of $297,000/yr. The average standard error of estimation of the streamflow records is 17.9%. This overall level of accuracy could be maintained with a budget of $285,000 if resources were redistributed among gages. Cost-effective analysis indicates that with the present budget, the average standard error could be reduced to 16.6%. A minimum budget of $278,000 is required to operate the present stream gaging program. Below this level, the gages and recorders would not receive the proper service and maintenance. At the minimum budget, the average standard error would be 20.4%. The loss of correlative data is a significant component of the error in streamflow records, especially at lower budgetary levels. (Author 's abstract)

  1. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
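
    A simplified sketch of error-controlled snapshot selection for POD: keep a snapshot only when its projection error onto the current basis exceeds a tolerance, then refresh the basis. The paper's single-pass incremental SVD is more memory-frugal than the batch re-factorisation used here for clarity.

```python
# Hedged sketch: adaptive, error-controlled snapshot selection for POD.
import numpy as np

def select_snapshots(snapshots, tol=1e-3):
    basis, kept = None, []
    for x in snapshots:                        # x: state vector at one time
        if basis is None:
            resid = np.linalg.norm(x)
        else:
            resid = np.linalg.norm(x - basis @ (basis.T @ x))
        if resid > tol * np.linalg.norm(x):    # error-controlled acceptance
            kept.append(x)
            basis, _, _ = np.linalg.svd(np.column_stack(kept),
                                        full_matrices=False)
    return np.column_stack(kept), basis

# Toy trajectory: smooth evolution, so few snapshots need to be retained.
t = np.linspace(0.0, 1.0, 200)
snaps = [np.sin(np.pi * np.arange(64) / 63 * (1 + 0.2 * ti)) for ti in t]
kept, basis = select_snapshots(snaps, tol=1e-2)
print(f"kept {kept.shape[1]} of {len(snaps)} snapshots")
```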

  2. Experimental methods to validate measures of emotional state and readiness for duty in critical operations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weston, Louise Marie

    2007-09-01

    A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. The effectiveness of pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and the potential occurrence of learning effects due to a requirement for frequent testing. This report reviews the different types of reliability and validity methods and testing and statistical analysis procedures to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.

  3. Optimization of the bank's operating portfolio

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.; Medvedev, M. A.

    2016-06-01

    The theory of efficient portfolios developed by Markowitz is used to optimize the structure of the types of financial operations of a bank (bank portfolio) in order to increase the profit and reduce the risk. The focus of this paper is to check the stability of the model to errors in the original data.
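
    A sketch of the Markowitz machinery the paper applies to a bank's mix of operation types: closed-form minimum-variance weights for a target expected return. The return and covariance figures are hypothetical placeholders for the bank's operation types.

```python
# Hedged sketch: minimum-variance Markowitz weights via the Lagrangian
# solution of  min w'Sigma w  s.t.  w'mu = target, w'1 = 1.
import numpy as np

mu = np.array([0.08, 0.05, 0.12])        # expected returns per operation type
Sigma = np.array([[0.10, 0.02, 0.04],    # covariance of returns (toy)
                  [0.02, 0.05, 0.01],
                  [0.04, 0.01, 0.20]])
target = 0.07

inv = np.linalg.inv(Sigma)
ones = np.ones_like(mu)
A = np.array([[mu @ inv @ mu,   mu @ inv @ ones],
              [mu @ inv @ ones, ones @ inv @ ones]])
lam, gam = np.linalg.solve(A, np.array([target, 1.0]))
w = inv @ (lam * mu + gam * ones)
print("weights:", np.round(w, 3), " risk:", float(np.sqrt(w @ Sigma @ w)))
```

    Checking how much the optimal weights move when mu and Sigma are perturbed is one direct way to probe the stability question the abstract raises.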

  4. An organizational approach to understanding patient safety and medical errors.

    PubMed

    Kaissi, Amer

    2006-01-01

    Progress in patient safety, or lack thereof, is a cause for great concern. In this article, we argue that the patient safety movement has failed to reach its goals of eradicating or, at least, significantly reducing errors because of an inappropriate focus on provider and patient-level factors with no real attention to the organizational factors that affect patient safety. We describe an organizational approach to patient safety using different organizational theory perspectives and make several propositions to push patient safety research and practice in a direction that is more likely to improve care processes and outcomes. From a Contingency Theory perspective, we suggest that health care organizations, in general, operate under a misfit between contingencies and structures. This misfit is mainly due to lack of flexibility, cost containment, and lack of regulations, thus explaining the high level of errors committed in these organizations. From an organizational culture perspective, we argue that health care organizations must change their assumptions, beliefs, values, and artifacts to change their culture from a culture of blame to a culture of safety and thus reduce medical errors. From an organizational learning perspective, we discuss how reporting, analyzing, and acting on error information can result in reduced errors in health care organizations.

  5. Computer-assisted bar-coding system significantly reduces clinical laboratory specimen identification errors in a pediatric oncology hospital.

    PubMed

    Hayden, Randall T; Patterson, Donna J; Jay, Dennis W; Cross, Carl; Dotson, Pamela; Possel, Robert E; Srivastava, Deo Kumar; Mirro, Joseph; Shenep, Jerry L

    2008-02-01

    To assess the ability of a bar code-based electronic positive patient and specimen identification (EPPID) system to reduce identification errors in a pediatric hospital's clinical laboratory. An EPPID system was implemented at a pediatric oncology hospital to reduce errors in patient and laboratory specimen identification. The EPPID system included bar-code identifiers and handheld personal digital assistants supporting real-time order verification. System efficacy was measured in 3 consecutive 12-month time frames, corresponding to periods before, during, and immediately after full EPPID implementation. A significant reduction in the median percentage of mislabeled specimens was observed in the 3-year study period. A decline from 0.03% to 0.005% (P < .001) was observed in the 12 months after full system implementation. On the basis of the pre-intervention detected error rate, it was estimated that EPPID prevented at least 62 mislabeling events during its first year of operation. EPPID decreased the rate of misidentification of clinical laboratory samples. The diminution of errors observed in this study provides support for the development of national guidelines for the use of bar coding for laboratory specimens, paralleling recent recommendations for medication administration.

  6. The PoET (Prevention of Error-Based Transfers) Project.

    PubMed

    Oliver, Jill; Chidwick, Paula

    2017-01-01

    The PoET (Prevention of Error-based Transfers) Project is one of the Ethics Quality Improvement Projects (EQIPs) taking place at William Osler Health System. This specific project is designed to reduce transfers from long-term care to hospital that are caused by legal and ethical errors related to consent, capacity and substitute decision-making. The project is currently operating in eight long-term care homes in the Central West Local Health Integration Network and has seen a 56% reduction in multiple transfers before death in hospital.

  7. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    PubMed

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution, using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by the total number of specimens. Rates from the two periods were compared using a chi-square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
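
    The reported comparison can be reproduced from the abstract's own counts with a chi-square test on the 2x2 table (scipy assumed; continuity correction disabled to match the reported P value):

```python
# Chi-square test on the specimen-error counts quoted above.
from scipy.stats import chi2_contingency

table = [[19, 4760 - 19],   # before: errors, error-free specimens
         [8, 5065 - 8]]     # after
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p ~ 0.0225
```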

  8. Graphical user interface simplifies infusion pump programming and enhances the ability to detect pump-related faults.

    PubMed

    Syroid, Noah; Liu, David; Albert, Robert; Agutter, James; Egan, Talmage D; Pace, Nathan L; Johnson, Ken B; Dowdle, Michael R; Pulsipher, Daniel; Westenskow, Dwayne R

    2012-11-01

    Drug administration errors are frequent and are often associated with the misuse of IV infusion pumps. One source of these errors may be the infusion pump's user interface. We used failure modes-and-effects analyses to identify programming errors and to guide the design of a new syringe pump user interface. We designed the new user interface to clearly show the pump's operating state simultaneously in more than 1 monitoring location. We evaluated anesthesia residents in laboratory and simulated environments on programming accuracy and error detection between the new user interface and the user interface of a commercially available infusion pump. With the new user interface, we observed the number of programming errors reduced by 81%, the number of keystrokes per task reduced from 9.2 ± 5.0 to 7.5 ± 5.5 (mean ± SD), the time required per task reduced from 18.1 ± 14.1 seconds to 10.9 ± 9.5 seconds and significantly less perceived workload. Residents detected 38 of 70 (54%) of the events with the new user interface and 37 of 70 (53%) with the existing user interface, despite no experience with the new user interface and extensive experience with the existing interface. The number of programming errors and workload were reduced partly because it took less time and fewer keystrokes to program the pump when using the new user interface. Despite minimal training, residents quickly identified preexisting infusion pump problems with the new user interface. Intuitive and easy-to-program infusion pump interfaces may reduce drug administration errors and infusion pump-related adverse events.

  9. A lateral guidance algorithm to reduce the post-aerobraking burn requirements for a lift-modulated orbital transfer vehicle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Herman, G. C.

    1986-01-01

    A lateral guidance algorithm which controls the location of the line of intersection between the actual and desired orbital planes (the hinge line) is developed for the aerobraking phase of a lift-modulated orbital transfer vehicle. The on-board targeting algorithm associated with this lateral guidance algorithm is simple and concise which is very desirable since computation time and space are limited on an on-board flight computer. A variational equation which describes the movement of the hinge line is derived. Simple relationships between the plane error, the desired hinge line position, the position out-of-plane error, and the velocity out-of-plane error are found. A computer simulation is developed to test the lateral guidance algorithm for a variety of operating conditions. The algorithm does reduce the total burn magnitude needed to achieve the desired orbit by allowing the plane correction and perigee-raising burn to be combined in a single maneuver. The algorithm performs well under vacuum perigee dispersions, pot-hole density disturbance, and thick atmospheres. The results for many different operating conditions are presented.

  10. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
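
    A generic iterative learning control (ILC) sketch in the spirit described: after each produced part, update the process inputs from the measured geometry error so successive parts converge to the reference. The plant model and learning gain below are toy stand-ins, not the paper's forming simulation or its non-linear least-squares formulation.

```python
# Hedged sketch: part-to-part ILC update u_{k+1} = u_k + gain * e_k.
import numpy as np

def plant(u):
    """Toy nonlinear 'forming process': maps inputs to a flange profile."""
    return np.tanh(u) + 0.05 * np.roll(u, 1)

r = np.linspace(0.2, 0.8, 50)   # reference flange geometry
u = np.zeros_like(r)            # initial process inputs
gain = 0.8                      # learning gain (chosen for contraction)

for k in range(25):             # one iteration = one produced part
    e = r - plant(u)            # draw-in / geometry error of that part
    u = u + gain * e            # learn from the previous part
print(f"final RMS geometry error: {np.sqrt(np.mean(e ** 2)):.2e}")
```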

  11. Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery.

    PubMed

    Singla, Rohit; Edgcumbe, Philip; Pratt, Philip; Nguan, Christopher; Rohling, Robert

    2017-10-01

    In laparoscopic surgery, the surgeon must operate with a limited field of view and reduced depth perception. This makes spatial understanding of critical structures difficult, such as an endophytic tumour in a partial nephrectomy. Such tumours yield a high complication rate of 47%, and excising them increases the risk of cutting into the kidney's collecting system. To overcome these challenges, an augmented reality guidance system is proposed. Using intra-operative ultrasound, a single navigation aid, and surgical instrument tracking, four augmentations of guidance information are provided during tumour excision. Qualitative and quantitative system benefits are measured in simulated robot-assisted partial nephrectomies. Robot-to-camera calibration achieved a total registration error of 1.0 ± 0.4 mm while the total system error is 2.5 ± 0.5 mm. The system significantly reduced healthy tissue excised from an average (±standard deviation) of 30.6 ± 5.5 to 17.5 ± 2.4 cm³ (p < 0.05) and reduced the depth from the tumour underside to the cut from an average (±standard deviation) of 10.2 ± 4.1 to 3.3 ± 2.3 mm (p < 0.05). Further evaluation is required in vivo, but the system has promising potential to reduce the amount of healthy parenchymal tissue excised.

  12. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  13. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

  14. Study on optimization of the short-term operation of cascade hydropower stations by considering output error

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang

    2017-06-01

    The study of deterministic optimal reservoir operation can improve the utilization rate of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasting may lead to output error and hinder implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was treated as a variable in a short-term reservoir optimal operation model for reducing operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, the model can derive schedules with different assurance rates, presenting more flexible options for decision makers; the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
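
    A sketch of the VaR idea used above, as a plain historical-simulation quantile: the maximum plausible generation shortfall, at a given confidence level, caused by output error. The paper couples VaR with an extreme-value-theory genetic algorithm (EVT-GA), which is not reproduced here; the error samples are hypothetical.

```python
# Hedged sketch: historical-simulation VaR of generation shortfall.
import numpy as np

rng = np.random.default_rng(4)
output_error = rng.normal(0.0, 5.0, 10_000)  # MW, hypothetical error samples
shortfall = np.maximum(-output_error, 0.0)   # only under-production is a loss

conf_level = 0.99
var_99 = np.quantile(shortfall, conf_level)  # loss not exceeded 99% of the time
print(f"99% VaR of generation shortfall: {var_99:.1f} MW")
```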

  15. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    PubMed

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and the learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. The headache, eye fatigue, and nausea reported with first-generation systems occur no more often than with two-dimensional (2D) LVSs. The system's higher cost, the obligation to wear glasses, and the big, heavy camera probes of some devices are negative aspects that need to be improved. The depth loss in tissues seen with 2D LVSs, and its associated adverse events, can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin, and the absence of the side-effects surgeons reported with first-generation systems, 3D LVSs seem to be strong competition for classical laparoscopic imaging systems. Thanks to technological advancements, lighter and smaller cameras and glasses-free monitors are in the near future.

  17. Eliminating US hospital medical errors.

    PubMed

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will increase fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  18. What Can We Learn From Point-of-Care Blood Glucose Values Deleted and Repeated by Nurses?

    PubMed

    Corl, Dawn; Yin, Tom; Ulibarri, May; Lien, Heather; Tylee, Tracy; Chao, Jing; Wisse, Brent E

    2018-03-01

    Hospitals rely on point-of-care (POC) blood glucose (BG) values to guide important decisions related to insulin administration and glycemic control. Evaluation of POC BG in hospitalized patients is associated with measurement and operator errors. Based on a previous quality improvement (QI) project we introduced an option for operators to delete and repeat POC BG values suspected as erroneous. The current project evaluated our experience with deleted POC BG values over a 2-year period. A retrospective QI project included all patients hospitalized at two regional academic medical centers in the Pacific Northwest during 2014 and 2015. Laboratory Medicine POC BG data were reviewed to evaluate all inpatient episodes of deleted and repeated POC BG. Inpatient operators choose to delete and repeat only 0.8% of all POC BG tests. Hypoglycemic and extreme hyperglycemic BG values are more likely to be deleted and repeated. Of initial values <40 mg/dL, 58% of deleted values (18% of all values) are errors. Of values >400 mg/dL, 40% of deleted values (5% of all values) are errors. Not all repeated POC BG values are first deleted. Optimal use of the option to delete and repeat POC BG values <40 mg/dL could decrease reported rates of severe hypoglycemia by as much as 40%. This project demonstrates that operators are frequently able to identify POC BG values that are measurement/operator errors. Eliminating these errors significantly reduces documented rates of severe hypoglycemia and hyperglycemia, and has the potential to improve patient safety.

  19. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  20. Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.

    PubMed

    Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing

    2016-01-01

    The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map in field of view. Synchronization control maximizes the parallel processing capability of FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors can be detected in field of view through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.

  1. Checklists and Monitoring in the Cockpit: Why Crucial Defenses Sometimes Fail

    NASA Technical Reports Server (NTRS)

    Dismukes, R. Key; Berman, Ben

    2010-01-01

    Checklists and monitoring are two essential defenses against equipment failures and pilot errors. Problems with checklist use and pilots' failures to monitor adequately have a long history in aviation accidents. This study was conducted to explore why checklists and monitoring sometimes fail to catch errors and equipment malfunctions as intended. Flight crew procedures were observed from the cockpit jumpseat during normal airline operations in order to: 1) collect data on monitoring and checklist use in cockpit operations in typical flight conditions; 2) provide a plausible cognitive account of why deviations from formal checklist and monitoring procedures sometimes occur; 3) lay a foundation for identifying ways to reduce vulnerability to inadvertent checklist and monitoring errors; 4) compare checklist and monitoring execution in normal flights with performance issues uncovered in accident investigations; and 5) suggest ways to improve the effectiveness of checklists and monitoring. Cognitive explanations for deviations from prescribed procedures are provided, along with suggested countermeasures for this vulnerability to error.

  2. Applications systems verification and transfer project. Volume 1: Operational applications of satellite snow cover observations: Executive summary. [usefulness of satellite snow-cover data for water yield prediction]

    NASA Technical Reports Server (NTRS)

    Rango, A.

    1981-01-01

    Both LANDSAT and NOAA satellite data were used to improve snowmelt runoff forecasts. When the satellite snow cover data were tested in both empirical seasonal runoff estimation and short-term modeling approaches, a definite potential for reducing forecast error was evident. A cost-benefit analysis run in conjunction with the snow mapping indicated a $36.5 million annual benefit accruing from a one percent improvement in forecast accuracy using the snow cover data for the western United States; the annual cost of operating the system would be $505,000. The snow mapping projects have demonstrated that satellite snow cover data can be used to reduce snowmelt runoff forecast error in a cost-effective manner once all operational satellite data are available within 72 hours after acquisition. Executive summaries of the individual snow mapping projects are presented.
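
    For scale, the figures quoted above imply (a back-of-envelope check using only the numbers in the abstract, not a statement from the report):

    $$ \frac{\$36{,}500{,}000\ \text{benefit/yr}}{\$505{,}000\ \text{cost/yr}} \approx 72, $$

    i.e., roughly a 72:1 benefit-cost ratio for the one-percent accuracy improvement.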

  3. Cost effectiveness of ergonomic redesign of electronic motherboard.

    PubMed

    Sen, Rabindra Nath; Yeow, Paul H P

    2003-09-01

    A case study illustrating the cost effectiveness of the ergonomic redesign of an electronic motherboard is presented. The factory was running at a loss due to the high costs of rejects and poor quality and productivity. Subjective assessments and direct observations were made at the factory. Investigation revealed that, due to motherboard design errors, the machine had difficulty placing integrated circuits onto the pads, the operators had much difficulty manually soldering certain components, and much unproductive manual cleaning (MC) was required. Consequently, there were high reject rates and occupational health and safety (OHS) problems, such as boredom and work discomfort. Much labour and machine cost was also spent on repairs. The motherboard was redesigned to correct the design errors, to allow more components to be machine soldered, and to reduce MC. This eliminated rejects, reduced repairs, saved US$581,495 per year, and improved the operators' OHS. The customer also saved US$142,105 per year in reduced loss of business.

  4. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    The multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of the QB50 CubeSat constellation. This paper takes a fundamental look at the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
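
    To make the retrieval step concrete: in the orbital-motion-limited (OML) regime commonly assumed for these probes, the squared collected current is approximately linear in bias voltage, with a slope proportional to the square of the electron density. The sketch below is a minimal illustration under that standard relation, using one common form of the geometry factor; the bias voltages, currents, and probe area are invented, and the paper's proposed adjustment is not reproduced here.

    ```python
    import numpy as np

    Q = 1.602e-19    # elementary charge [C]
    ME = 9.109e-31   # electron mass [kg]

    def density_from_mnlp(v_bias, currents, probe_area):
        """Fit I^2 vs. bias voltage and convert the slope to electron density
        using one common form of the OML geometry factor (an assumption here,
        not the paper's exact analysis)."""
        slope = np.polyfit(v_bias, currents**2, 1)[0]    # d(I^2)/dV  [A^2/V]
        return (np.pi / (Q * probe_area)) * np.sqrt(slope * ME / (2 * Q))

    # four needles at fixed biases (V) and their collected currents (A) --
    # invented values for illustration only:
    v = np.array([2.0, 3.0, 4.0, 5.0])
    i = np.array([1.10e-6, 1.35e-6, 1.55e-6, 1.75e-6])
    print(density_from_mnlp(v, i, probe_area=2.0e-5))    # ~1.3e12 m^-3
    ```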

  5. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.
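
    For readers unfamiliar with the encoding these systems use, here is a toy simulation of S-matrix multiplexed measurement and reconstruction: roughly half of the mask slots are open per reading, and the spectrum is recovered by inverting the known mask pattern, averaging detector noise down relative to measuring one slot at a time (the multiplex advantage). This is a generic sketch of the principle, not any instrument discussed above; the order, spectrum, and noise level are arbitrary.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    n = 32                                    # Hadamard order (power of 2)
    H = hadamard(n)                           # entries +/-1, first row/col all +1
    S = (1 - H[1:, 1:]) // 2                  # (n-1)x(n-1) S-matrix of 0s and 1s

    rng = np.random.default_rng(1)
    spectrum = rng.uniform(0.0, 1.0, n - 1)   # "true" spectrum (arbitrary)
    sigma = 0.05                              # detector noise per reading

    y = S @ spectrum + rng.normal(0, sigma, n - 1)    # one reading per mask
    estimate = np.linalg.solve(S, y)                  # invert the known encoding
    direct = spectrum + rng.normal(0, sigma, n - 1)   # one slot per reading

    print("multiplexed rms error:", np.sqrt(np.mean((estimate - spectrum)**2)))
    print("direct rms error:     ", np.sqrt(np.mean((direct - spectrum)**2)))
    ```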

  6. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    The multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of the QB50 CubeSat constellation. This paper takes a fundamental look at the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  7. Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng

    2015-08-15

    This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. Fluctuation of the cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature can be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of the scale factor with respect to the corresponding cell temperature changes. Our experimental results show good agreement with our theoretical analysis. Under our experimental conditions, the error is reduced significantly compared with that obtained when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is a reduction of the scale factor of the magnetometer; however, according to our analysis, this has only a minor effect on the sensitivity under proper operating parameters.

  8. Assimilation of Freeze-Thaw Observations into the NASA Catchment Land Surface Model

    NASA Technical Reports Server (NTRS)

    Farhadi, Leila; Reichle, Rolf H.; DeLannoy, Gabrielle J. M.; Kimball, John S.

    2014-01-01

    The land surface freeze-thaw (F-T) state plays a key role in the hydrological and carbon cycles and thus affects water and energy exchanges and vegetation productivity at the land surface. In this study, we developed an F-T assimilation algorithm for the NASA Goddard Earth Observing System, version 5 (GEOS-5) modeling and assimilation framework. The algorithm includes a newly developed observation operator that diagnoses the landscape F-T state in the GEOS-5 Catchment land surface model. The F-T analysis is a rule-based approach that adjusts Catchment model state variables in response to binary F-T observations, while also considering forecast and observation errors. A regional observing system simulation experiment was conducted using synthetically generated F-T observations. The assimilation of perfect (error-free) F-T observations reduced the root-mean-square errors (RMSE) of surface temperature and soil temperature by 0.206 °C and 0.061 °C, respectively, when compared to model estimates (equivalent to relative RMSE reductions of 6.7 percent and 3.1 percent, respectively). For a maximum classification error (CEmax) of 10 percent in the synthetic F-T observations, the F-T assimilation reduced the RMSE of surface temperature and soil temperature by 0.178 °C and 0.036 °C, respectively. For CEmax = 20 percent, the F-T assimilation still reduced the RMSE of model surface temperature estimates by 0.149 °C but yielded no improvement over the model soil temperature estimates. The F-T assimilation scheme is being developed to exploit planned operational F-T products from the NASA Soil Moisture Active Passive (SMAP) mission.
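
    The rule-based analysis step can be pictured with a deliberately simplified sketch: when a binary freeze-thaw observation contradicts the state the model diagnoses, nudge the relevant temperature toward the freezing point with a weight built from the forecast and observation error variances. This is an illustration of the idea only, not the GEOS-5 implementation; the variances and temperatures below are invented.

    ```python
    # Toy rule-based F-T analysis step (illustrative; NOT the GEOS-5 scheme).
    T_FREEZE = 273.15  # K

    def ft_analysis(t_model, obs_frozen, var_forecast=1.0, var_obs=0.25):
        model_frozen = t_model <= T_FREEZE
        if model_frozen == obs_frozen:
            return t_model                              # agreement: no update
        gain = var_forecast / (var_forecast + var_obs)  # Kalman-like weighting
        return t_model + gain * (T_FREEZE - t_model)

    print(ft_analysis(275.0, obs_frozen=True))   # pulled toward 273.15 -> 273.52
    ```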

  9. Complete Tri-Axis Magnetometer Calibration with a Gyro Auxiliary

    PubMed Central

    Yang, Deng; You, Zheng; Li, Bin; Duan, Wenrui; Yuan, Binwen

    2017-01-01

    Magnetometers combined with inertial sensors are widely used for orientation estimation, and calibration is necessary to achieve high accuracy. This paper presents a complete tri-axis magnetometer calibration algorithm with a gyro auxiliary. The magnetic distortions and sensor errors, including the misalignment error between the magnetometer and the assembled platform, are compensated after calibration. With the gyro auxiliary, linearly interpolated magnetometer outputs are calculated, and the error parameters are estimated through linear operations on those interpolated outputs. Simulations and experiments are performed to illustrate the efficiency of the algorithm. After calibration, the heading errors calculated from the magnetometers are reduced to 0.5° (1σ). This calibration algorithm can also be applied to tri-axis accelerometers, whose error model is similar to that of tri-axis magnetometers. PMID:28587115
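
    The error model such calibrations estimate is conventionally a combined scale/misalignment matrix plus a bias vector. The sketch below applies that standard model in the correction direction, with invented parameter values, to show the invariant (constant field magnitude) that a successful calibration restores; the paper's actual contribution, estimating these parameters with the gyro's help, is not reproduced.

    ```python
    import numpy as np

    # Illustrative "true" error parameters (invented values): a combined
    # scale/misalignment matrix and a hard-iron offset.
    A = np.array([[1.05, 0.02, 0.01],
                  [0.00, 0.98, 0.03],
                  [0.00, 0.00, 1.02]])
    b = np.array([0.12, -0.05, 0.08])

    def measure(m_true, rng):
        """Standard tri-axis sensor error model: m_meas = A m_true + b + noise."""
        return A @ m_true + b + rng.normal(0.0, 1e-3, 3)

    def calibrate(m_meas, A_est, b_est):
        """Apply an estimated error model: m_cal = A_est^{-1} (m_meas - b_est)."""
        return np.linalg.solve(A_est, m_meas - b_est)

    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit field vectors

    raw = np.array([measure(d, rng) for d in dirs])
    cal = np.array([calibrate(m, A, b) for m in raw])

    # a good calibration restores the constant field magnitude:
    print("norm spread, raw:", np.linalg.norm(raw, axis=1).std())
    print("norm spread, cal:", np.linalg.norm(cal, axis=1).std())
    ```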

  10. An Investigation into Soft Error Detection Efficiency at Operating System Level

    PubMed Central

    Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. Transient errors, or soft errors, appear in two forms: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at the hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system is reliable, and their focus is on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application-level components. Results show that soft errors in operating system components affect both operating system and application-level components. Therefore, by hardening operating system components against soft errors, both operating system and application-level components gain tolerance. PMID:24574894

  11. An investigation into soft error detection efficiency at operating system level.

    PubMed

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. Transient errors, or soft errors, appear in two forms: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at the hardware and software levels for their alleviation. However, these works rest on the basic assumption that the operating system is reliable, and their focus is on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application-level components. Results show that soft errors in operating system components affect both operating system and application-level components. Therefore, by hardening operating system components against soft errors, both operating system and application-level components gain tolerance.
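
    One classic software defense against the control flow errors discussed in these two records is signature-based control-flow checking: each basic block gets a compile-time label and a set of legal successors, and a runtime monitor flags any transition outside that set. The sketch below is a toy illustration of that general technique over an invented control-flow graph; it is not the detection mechanism evaluated in the paper.

    ```python
    # Toy signature-based control-flow checking (illustrative only).
    LEGAL_SUCCESSORS = {          # hypothetical control-flow graph
        "entry": {"loop"},
        "loop": {"loop", "exit"},
        "exit": set(),
    }

    class CFEDetected(RuntimeError):
        pass

    class FlowMonitor:
        def __init__(self, start="entry"):
            self.current = start

        def enter(self, block):
            if block not in LEGAL_SUCCESSORS[self.current]:
                raise CFEDetected(f"illegal transition {self.current} -> {block}")
            self.current = block

    m = FlowMonitor()
    m.enter("loop"); m.enter("loop"); m.enter("exit")   # legal path
    try:
        m.enter("loop")      # a CFE: jumping back after exit
    except CFEDetected as err:
        print(err)
    ```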

  12. Normal accidents: human error and medical equipment design.

    PubMed

    Dain, Steven

    2002-01-01

    High-risk systems, which are typical of our technologically complex era, include not just nuclear power plants but also hospitals, anesthesia systems, and the practice of medicine and perfusion. In high-risk systems, no matter how effective safety devices are, some types of accidents are inevitable because the system's complexity leads to multiple and unexpected interactions. It is important for healthcare providers to apply a risk assessment and management process to decisions involving new equipment, procedures, or staffing matters in order to minimize the residual risks of latent errors, which are amenable to correction because of the large window of opportunity for their detection. This article provides an introduction to basic risk management and error theory principles and examines ways in which they can be applied to reduce and mitigate the inevitable human errors that accompany high-risk systems. The article also discusses "human factors engineering" (HFE), the process used to design equipment/human interfaces in order to mitigate design errors. The HFE process involves interaction between designers and end users to produce a series of continuous refinements that are incorporated into the final product. The article also examines common design problems encountered in the operating room that may predispose operators to commit errors resulting in harm to the patient. While recognizing that errors and accidents are unavoidable, organizations that function within a high-risk system must adopt a "safety culture" that anticipates problems and acts aggressively, through an anonymous, "blameless" reporting mechanism, to resolve them. We must continuously examine and improve the design of equipment and procedures, personnel, supplies and materials, and the environment in which we work to reduce error and minimize its effects. Healthcare providers must take a leading role in the day-to-day management of the "Perioperative System" and be role models in promoting a culture of safety in their organizations.

  13. Attitude Control Subsystem for the Advanced Communications Technology Satellite

    NASA Technical Reports Server (NTRS)

    Hewston, Alan W.; Mitchell, Kent A.; Sawicki, Jerzy T.

    1996-01-01

    This paper provides an overview of the on-orbit operation of the Attitude Control Subsystem (ACS) for the Advanced Communications Technology Satellite (ACTS). The three ACTS control axes are defined, including the means for sensing attitude and determining the pointing errors. The desired pointing requirements for various modes of control as well as the disturbance torques that oppose the control are identified. Finally, the hardware actuators and control loops utilized to reduce the attitude error are described.

  14. Refractive index of He in the region 920-1910 Å

    NASA Technical Reports Server (NTRS)

    Huber, M. C. E.; Tondello, G.

    1974-01-01

    The refractive index of He has been determined in the region 920-1910 Å by measurements of wavelength shifts in a 3-m spectrograph alternately filled with He and evacuated. Differential pumping systems were used to allow operation of the light source at conveniently low pressures. Several plates were measured and analyzed in order to reduce statistical errors. The results at 919 Å agree with the theory within 1%, i.e., less than the experimental error.

  15. Non-integer expansion embedding techniques for reversible image watermarking

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun; Wang, Yi

    2015-12-01

    This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation constrains the predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors when embedding data into an audio or image file by expanding only the integer part of a prediction error while keeping its fractional part unchanged. The advantage of the NIPE embedding technique is that it can bring a predictor into full play by estimating a sample/pixel in a noncausal way in a single pass, since no rounding operation is required. A new noncausal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method (in which prediction in two passes introduces distortion because half of the pixels are predicted from already-watermarked pixels). In comparison with several state-of-the-art methods, experimental results show that the NIPE technique with the new noncausal prediction strategy reduces the embedding distortion for the same embedding payload.
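
    The expansion step itself is compact enough to sketch. Below, ipe_embed is the classical integer expansion, and the NIPE variant expands only the integer part of the error while carrying the fraction through unchanged, so the hidden bit and the original value are both recoverable. This is a minimal sketch of the expansion arithmetic only; overflow handling, location maps, and the paper's noncausal predictor are omitted, and the sample values are invented.

    ```python
    import math

    def ipe_embed(x, x_pred, bit):
        """Classical IPE: expand the (rounded) integer prediction error."""
        e = x - round(x_pred)
        return round(x_pred) + 2 * e + bit

    def nipe_embed(x, x_pred, bit):
        """NIPE: expand only the integer part of the non-integer error."""
        e = x - x_pred
        k, f = math.floor(e), e - math.floor(e)
        return x_pred + 2 * k + bit + f

    def nipe_extract(x_marked, x_pred):
        """Recover (original value, hidden bit) from a NIPE-marked sample."""
        e = x_marked - x_pred
        k2, f = math.floor(e), e - math.floor(e)
        bit = k2 % 2
        return x_pred + (k2 - bit) // 2 + f, bit

    x, x_pred = 100, 97.6
    marked = nipe_embed(x, x_pred, bit=1)
    print(nipe_extract(marked, x_pred))   # ~ (100.0, 1), up to float rounding
    ```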

  16. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring, and many other applications. However, orthorectification is a computationally expensive process due to the floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. The first is projection using fixed-point arithmetic, which removes the floating-point operations and reduces processing time by operating only on integers. The second is replacement of the division inherent in projection with multiplication by the inverse; since computing the inverse exactly would itself require iteration, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating-point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating-point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating-point algorithm.
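
    The divide-to-multiply idea generalizes well beyond orthorectification, and a generic version is easy to sketch: normalize the divisor into [0.5, 1), estimate its reciprocal with a linear polynomial, optionally refine, and multiply. The sketch below uses Q16.16 fixed point and the classic 48/17 - (32/17)d linear initial estimate; it illustrates the general technique, not the dissertation's specific linear or quadratic approximations.

    ```python
    # Q16.16 fixed-point reciprocal via a linear approximation (generic sketch).
    FRAC = 16
    ONE = 1 << FRAC

    def to_fix(x):
        return int(round(x * ONE))

    def fix_mul(a, b):
        """Q16.16 multiply: full-width product, then shift back."""
        return (a * b) >> FRAC

    def fix_recip(d):
        """Approximate 1/d using shifts, a linear estimate, and one Newton
        refinement -- no division instruction."""
        assert d > 0
        shift = 0
        while d < ONE // 2:          # normalize d into [0.5, 1)
            d <<= 1; shift += 1
        while d >= ONE:
            d >>= 1; shift -= 1
        r = to_fix(48 / 17) - fix_mul(to_fix(32 / 17), d)   # linear estimate
        r = fix_mul(r, 2 * ONE - fix_mul(d, r))             # Newton: r(2 - dr)
        return r << shift if shift >= 0 else r >> -shift

    for x in (0.75, 3.0, 12.5):
        approx = fix_recip(to_fix(x)) / ONE
        print(f"1/{x} ~ {approx:.6f} (true {1/x:.6f})")
    ```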

  17. Methods and devices for optimizing the operation of a semiconductor optical modulator

    DOEpatents

    Zortman, William A.

    2015-07-14

    A semiconductor-based optical modulator includes a control loop to control and optimize the modulator's operation for relatively high data rates (above 1 GHz) and/or relatively high voltage levels. Both the amplitude of the modulator's driving voltage and the bias of the driving voltage may be adjusted using the control loop. Such adjustments help to optimize the operation of the modulator by reducing the number of errors present in a modulated data stream.

  18. Heuristics and Cognitive Error in Medical Imaging.

    PubMed

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into the mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle to the goal of improving quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigating cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  19. Temporal-difference prediction errors and Pavlovian fear conditioning: role of NMDA and opioid receptors.

    PubMed

    Cole, Sindy; McNally, Gavan P

    2007-10-01

    Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock, which blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
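
    For reference, the TD prediction error invoked here is conventionally defined as follows (a standard formulation, not quoted from the paper):

    $$ \delta_t = r_{t+1} + \gamma\, V(s_{t+1}) - V(s_t) $$

    Blocking in Stage II follows because CSA already predicts the shock, driving the error toward zero; the serial compound in Stage III reinstates a nonzero error at the onset of CSB, which supports new learning.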

  20. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2014-01-01

    Human error cannot be defined unambiguously in advance of its happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or count as heroic given a more favorable outcome. People often forget that we employ humans in business and industry precisely for their flexibility and capability to change when needed. In complex systems, operations are driven by the system's specifications and structure; people provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents, and incidents in high-risk industries. We do not have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed. There are actions that can be taken to reduce the risk of human error.

  1. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic data points, or of predicting soundings that will fail, is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter-construction algorithms attempt to predict the error in the CO2 value; this method instead uses the surrogate goal of mean monthly standard deviation (MMS), aiming to reduce the scatter of retrieved CO2 rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. The software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.

  2. Unreliable numbers: error and harm induced by bad design can be reduced by better design

    PubMed Central

    Thimbleby, Harold; Oladimeji, Patrick; Cairns, Paul

    2015-01-01

    Number entry is a ubiquitous activity and is often performed in safety- and mission-critical procedures, such as healthcare, science, finance, aviation and in many other areas. We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems. A surprising finding is that many common, widely used systems are defective, and induce unnecessary human error. We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error. We demonstrate novel designs with improved resilience, implying that the common problems identified and the errors they induce are avoidable. PMID:26354830
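
    The Monte Carlo approach described here is straightforward to reproduce in miniature. The toy model below assumes each keystroke independently slips to a random key with small probability, and compares two interface policies: a strict parser that rejects malformed entries (forcing re-entry) and a lenient one that accepts the longest valid prefix. The slip model, target value, and both policies are invented for illustration; the paper's own simulations and the systems it evaluated are richer than this.

    ```python
    import random

    KEYS = "0123456789."

    def keyed(target, p_slip):
        """Each keystroke independently slips to a random key with prob p_slip."""
        return "".join(random.choice(KEYS) if random.random() < p_slip else k
                       for k in target)

    def strict_parse(s):
        """Accept only a well-formed number; otherwise force re-entry."""
        try:
            return float(s)
        except ValueError:
            return None

    def lenient_parse(s):
        """Accept the longest prefix that parses -- a deliberately bad design."""
        for end in range(len(s), 0, -1):
            try:
                return float(s[:end])
            except ValueError:
                pass
        return None

    def out_by_ten(value, target):
        return value is not None and not (target / 10 < value < target * 10)

    def rates(target="12.5", p_slip=0.01, n=200_000):
        t = float(target)
        counts = {"strict": 0, "lenient": 0}
        for _ in range(n):
            s = keyed(target, p_slip)
            counts["strict"] += out_by_ten(strict_parse(s), t)
            counts["lenient"] += out_by_ten(lenient_parse(s), t)
        return {k: v / n for k, v in counts.items()}

    print(rates())
    ```

    In this toy model the lenient policy admits every erroneous entry the strict one does plus those it silently truncates, so the gap between the two rates quantifies harm attributable to the parsing design alone.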

  3. Error detection and reduction in blood banking.

    PubMed

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as for active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition of reportable errors. Reportable errors should include those with potentially harmful outcomes as well as those that are "upstream," and thus further away from the outcome. A well-written error report covers the who, what, when, where, why/how, and follow-up of the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility, a functional error classification method and a quality system-based classification have been useful. An active method of searching for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest to improve processes, an error management program is itself a process that needs improvement, and we must strive always to close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.

  4. Partial pressure analysis in space testing

    NASA Technical Reports Server (NTRS)

    Tilford, Charles R.

    1994-01-01

    For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or partial pressure analyzers (PPAs) are commonly used for this purpose. These are mass-spectrometer-type instruments, most commonly employing quadrupole filters. They can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. For most instruments, however, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics of different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.

  5. An interventional approach for patient and nurse safety: a fatigue countermeasures feasibility study.

    PubMed

    Scott, Linda D; Hofmeister, Nancee; Rogness, Neal; Rogers, Ann E

    2010-01-01

    Studies indicate that extended shifts worked by hospital staff nurses are associated with a higher risk of errors. Long work hours coupled with insufficient sleep and fatigue are riskier still. Although other industries have developed programs to reduce fatigue-related errors and injury, fatigue countermeasures programs for nurses (FCMPNs) have been lacking. The objective of this study was to evaluate the feasibility of an FCMPN for improving sleep duration and quality while reducing daytime sleepiness and patient care errors. Selected sleep variables, errors, and drowsy driving were evaluated among hospital staff nurses (n = 47) before and after FCMPN implementation. A one-group pretest-posttest repeated-measures approach was used. Participants provided data 2 weeks before the FCMPN, 4 weeks after receiving the intervention, and again 3 months after the intervention. Most of the nurses experienced poor sleep quality, severe daytime sleepiness, and decreased alertness at work and while operating a motor vehicle. After the FCMPN, significant improvements were noted in sleep duration, sleep quality, alertness, and error prevention. Although significant improvements were not found in daytime sleepiness scores, the severity of daytime sleepiness appeared to decrease. Despite improvements in fatigue management, nurses reported feelings of guilt when engaging in FCMPN activities, especially strategic naps and relieved breaks. Initial findings support the feasibility of using an FCMPN for mitigating fatigue, improving sleep, and reducing errors among hospital staff nurses. Future investigations can examine the acceptability, efficacy, and effectiveness of FCMPNs.

  6. Evaluating the Impact of Radio Frequency Identification Retained Surgical Instruments Tracking on Patient Safety: Literature Review.

    PubMed

    Schnock, Kumiko O; Biggs, Bonnie; Fladger, Anne; Bates, David W; Rozenblum, Ronen

    2017-02-22

    Retained surgical instruments (RSI) are one of the most serious preventable complications in operating room settings, potentially leading to profound adverse effects for patients as well as costly legal and financial consequences for hospitals. Safety measures to eliminate RSIs have been widely adopted in the United States and abroad, but despite widespread efforts, medical errors involving RSI have not been eliminated. Through a systematic review of recent studies, we aimed to identify the impact of radio frequency identification (RFID) technology on reducing RSI errors and improving patient safety. A literature search on the effects of RFID technology on RSI error reduction was conducted in PubMed and CINAHL (2000-2016). Relevant articles were selected and reviewed by 4 researchers. The literature search identified 385 articles, and the full texts of 88 articles were assessed for eligibility. Of these, 5 articles were included to evaluate the benefits and drawbacks of using RFID for preventing RSI-related errors. The use of RFID resulted in rapid detection of RSI through body tissue with high accuracy rates, reducing the risk of counting errors and improving workflow. Based on the existing literature, RFID technology appears to have the potential to substantially improve patient safety by reducing RSI errors, although the body of evidence is currently limited. Better-designed research studies are needed to gain a clear understanding of this domain and to find new opportunities to use this technology to improve patient safety.

  7. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications that may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
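
    Two of the software techniques listed above, redundant execution for error detection and encoded state variables that expose single-bit changes, can be sketched in a few lines. This is a toy illustration of the general ideas under invented details, not the Galileo flight software.

    ```python
    from collections import Counter

    def vote3(fn, *args):
        """Redundant execution: run fn three times and take the majority result,
        detecting a transient fault that corrupts one execution."""
        results = [fn(*args) for _ in range(3)]
        value, count = Counter(results).most_common(1)[0]
        if count < 2:
            raise RuntimeError("no majority -- fault detected")
        return value

    MASK = 0xFFFFFFFF

    def encode(x):
        """Store x with its one's complement; a single-bit flip in either copy
        breaks the invariant and is caught on read."""
        return (x, ~x & MASK)

    def decode(pair):
        x, xc = pair
        if (~x & MASK) != xc:
            raise RuntimeError("state variable corrupted")
        return x

    print(vote3(lambda a, b: a + b, 2, 3))        # -> 5
    good = encode(42)
    print(decode(good))                           # -> 42
    bad = (good[0] ^ 0b100, good[1])              # flip one bit of the value
    try:
        decode(bad)
    except RuntimeError as err:
        print(err)
    ```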

  8. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavioral specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it incorporates concepts derived from fractal and chaos theory and suggests re-evaluation of the base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve alongside other formulas used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from a steam generator tube rupture (SGTR) event simulation provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation for understanding the patterns of human error can be gleaned, helping to reduce and prevent undesirable events.

  9. Recognizing and Reducing Analytical Errors and Sources of Variation in Clinical Pathology Data in Safety Assessment Studies.

    PubMed

    Schultze, A E; Irizarry, A R

    2017-02-01

    Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.

  10. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    NASA Astrophysics Data System (ADS)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  11. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP are described, with emphasis on its error-detection, error-recovery, and self-diagnosis capabilities.

  12. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  13. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  14. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350
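
    The recompression strategy described in these records, build a cheap low-rank factorization with ACA, then tighten it with a truncated SVD of a small core, can be sketched generically. The demo below uses full pivoting on an explicitly stored smooth kernel matrix for clarity (practical ACA uses partial pivoting and never forms the full matrix), so treat it as an illustration of the idea rather than the authors' FMM code.

    ```python
    import numpy as np

    def aca_full_pivot(A, tol=1e-8):
        """Greedily peel rank-1 crosses off the residual until its largest
        entry drops below tol.  Returns U, V with A ~= U @ V."""
        R = A.astype(float).copy()
        us, vs = [], []
        for _ in range(min(A.shape)):
            i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
            if abs(R[i, j]) < tol:
                break
            u = R[:, j].copy()
            v = R[i, :] / R[i, j]
            us.append(u); vs.append(v)
            R -= np.outer(u, v)
        return np.column_stack(us), np.vstack(vs)

    def recompress(U, V, tol=1e-8):
        """Tighten U @ V by a truncated SVD of the small core (QR first)."""
        Qu, Ru = np.linalg.qr(U)
        Qv, Rv = np.linalg.qr(V.T)
        W, s, Zt = np.linalg.svd(Ru @ Rv.T)
        k = int(np.sum(s > tol * s[0]))
        return Qu @ (W[:, :k] * s[:k]), Zt[:k] @ Qv.T

    # a smooth far-field-style kernel has rapidly decaying singular values:
    x = np.linspace(0.0, 1.0, 200)
    y = np.linspace(5.0, 6.0, 160)            # well-separated point clusters
    A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

    U, V = aca_full_pivot(A, tol=1e-10)
    U2, V2 = recompress(U, V, tol=1e-10)
    print(U.shape[1], "->", U2.shape[1], "relative error:",
          np.linalg.norm(A - U2 @ V2) / np.linalg.norm(A))
    ```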

  15. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution, and so may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that more than 95% of soft errors can be corrected at a moderate increase in computational cost.
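
    One widely used detection-and-correction mechanism for dense matrix data, in the spirit of what the abstract proposes, is the Huang-Abraham checksum scheme: carry an extra checksum row and column, locate a single corrupted element at the intersection of the failing row and column sums, and restore it from either sum. A minimal sketch follows (not the paper's exact mechanism; the tolerance and matrix are illustrative).

    ```python
    import numpy as np

    def add_checksums(A):
        """Append a checksum row and column (Huang-Abraham style)."""
        m, n = A.shape
        Ac = np.zeros((m + 1, n + 1))
        Ac[:m, :n] = A
        Ac[m, :n] = A.sum(axis=0)     # column checksums
        Ac[:m, n] = A.sum(axis=1)     # row checksums
        Ac[m, n] = A.sum()
        return Ac

    def detect_and_correct(Ac, tol=1e-8):
        """Locate and repair a single corrupted element in the data block."""
        A = Ac[:-1, :-1]
        bad_rows = np.flatnonzero(np.abs(A.sum(axis=1) - Ac[:-1, -1]) > tol)
        bad_cols = np.flatnonzero(np.abs(A.sum(axis=0) - Ac[-1, :-1]) > tol)
        if len(bad_rows) == 1 and len(bad_cols) == 1:
            i, j = bad_rows[0], bad_cols[0]
            A[i, j] = Ac[i, -1] - (A[i, :].sum() - A[i, j])  # repair from row sum
        return A

    rng = np.random.default_rng(0)
    F = rng.normal(size=(6, 6))
    Fc = add_checksums(F)
    Fc[2, 4] += 1.0                                 # inject a "soft error"
    print(np.allclose(detect_and_correct(Fc), F))   # -> True
    ```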

  16. Improving operational flood ensemble prediction by the assimilation of satellite soil moisture: comparison between lumped and semi-distributed schemes

    NASA Astrophysics Data System (ADS)

    Alvarez-Garreton, C.; Ryu, D.; Western, A. W.; Su, C.-H.; Crow, W. T.; Robertson, D. E.; Leahy, C.

    2014-09-01

    Assimilation of remotely sensed soil moisture data (SM-DA) to correct the soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. For large and sparsely monitored catchments, SM-DA is a particularly attractive tool. Within this context, we assimilate active and passive satellite soil moisture (SSM) retrievals using an ensemble Kalman filter to improve operational flood prediction within a large semi-arid catchment in Australia (>40 000 km²). We assess the importance of accounting for channel routing and the spatial distribution of forcing data by applying SM-DA to a lumped and a semi-distributed scheme of the probability distributed model (PDM). Our scheme also accounts for model error representation and for seasonal biases and errors in the satellite data. Before assimilation, the semi-distributed model provided more accurate streamflow prediction (Nash-Sutcliffe efficiency, NS = 0.77) than the lumped model (NS = 0.67) at the catchment outlet. However, this did not ensure good performance at the "ungauged" inner catchments. After SM-DA, the streamflow ensemble prediction at the outlet was improved in both the lumped and the semi-distributed schemes: the root mean square error of the ensemble was reduced by 27 and 31%, respectively; the NS of the ensemble mean increased by 7 and 38%, respectively; the false alarm ratio was reduced by 15 and 25%, respectively; and the ensemble prediction spread was reduced while its reliability was maintained. Our findings imply that even when rainfall is the main driver of flooding in semi-arid catchments, adequately processed SSM can be used to reduce errors in the model soil moisture, which in turn provides better streamflow ensemble prediction. We demonstrate that SM-DA efficacy is enhanced when the spatial distribution of forcing data and routing processes is accounted for. At ungauged locations, SM-DA is effective at improving streamflow ensemble prediction; however, the updated prediction remains relatively poor, since SM-DA does not address systematic errors in the model.

  17. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data are transmitted through a noisy channel, errors are introduced that can render the data indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation, and error statistics. The simulator utilizes a concatenated coding scheme that includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth 5 as the outermost code, a (7, 1/2) convolutional code as the inner code, and the CCSDS-recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required at the receiver for a desired bit error rate is greatly reduced through the use of forward error correction techniques, and even greater coding gain is provided by the concatenated scheme. Interleaving/deinterleaving is necessary to randomize burst errors that may appear at the input of the RS decoder; the burst correction capability increases in proportion to the interleave depth. The modular nature of the simulator allows modules to be included or excluded as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
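
    The interleaving step is easy to demonstrate. The sketch below writes symbols row-wise into a depth x width array and reads them out column-wise, so a contiguous burst in the channel is spread across several codewords on deinterleaving; depth = 5 matches the interleave depth quoted above, but the symbol stream and burst are invented.

    ```python
    def interleave(data, depth):
        """Write row-wise into a depth x width array, read out column-wise."""
        assert len(data) % depth == 0
        width = len(data) // depth
        return [data[r * width + c] for c in range(width) for r in range(depth)]

    def deinterleave(data, depth):
        """Inverse permutation: restores the original symbol order."""
        width = len(data) // depth
        out = [None] * len(data)
        idx = 0
        for c in range(width):
            for r in range(depth):
                out[r * width + c] = data[idx]
                idx += 1
        return out

    depth, width = 5, 8
    codewords = [f"w{r}s{c}" for r in range(depth) for c in range(width)]
    tx = interleave(codewords, depth)
    tx[10:14] = ["ERR"] * 4                 # a 4-symbol burst in the channel
    rx = deinterleave(tx, depth)
    # the burst lands in 4 different codewords (rows), one symbol each:
    for r in range(depth):
        row = rx[r * width:(r + 1) * width]
        print(r, sum(s == "ERR" for s in row))
    ```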

  18. Modeling and controller design of a 6-DOF precision positioning system

    NASA Astrophysics Data System (ADS)

    Cai, Kunhai; Tian, Yanling; Liu, Xianping; Fatikow, Sergej; Wang, Fujun; Cui, Liangyu; Zhang, Dawei; Shirinzadeh, Bijan

    2018-05-01

    A key hurdle in meeting the needs of micro/nano manipulation in complex cases is the limited workspace and flexibility of the operating ends. This paper presents a 6-degree-of-freedom (DOF) serial-parallel precision positioning system consisting of two compact 3-DOF parallel mechanisms. Each parallel mechanism is driven by three piezoelectric actuators (PEAs) and guided by three symmetric T-shaped hinges and three elliptical flexible hinges, respectively. This arrangement extends the workspace and improves the flexibility of the operating ends. The proposed system can be assembled easily, which greatly reduces assembly errors and improves positioning accuracy. In addition, kinematic and dynamic models of the 6-DOF system are established. Furthermore, in order to reduce the tracking error and improve the positioning accuracy, a discrete-time model predictive controller (DMPC) is applied as an effective control method, and the effectiveness of the DMPC control method is verified. Finally, a tracking experiment is performed to verify the tracking performance of the 6-DOF stage.

  19. A Work Station For Control Of Changing Systems

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel J.

    1988-01-01

    Touch screen and microcomputer enable flexible control of complicated systems. Computer work station equipped to produce graphical displays used as command panel and status indicator for command-and-control system. Operator uses images of control buttons displayed on touch screen to send prestored commands. Use of prestored library of commands reduces incidence of errors. If necessary, operator uses conventional keyboard to enter commands in real time to handle unforeseeable situations.

  20. Cost effectiveness of the stream-gaging program in Nevada

    USGS Publications Warehouse

    Arteaga, F.E.

    1990-01-01

    The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses; neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds had been redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data; if perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)

  1. The Use of Categorized Time-Trend Reporting of Radiation Oncology Incidents: A Proactive Analytical Approach to Improving Quality and Safety Over Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Anthony, E-mail: anthony.arnold@sesiahs.health.nsw.gov.a; Delaney, Geoff P.; Cassapi, Lynette

    Purpose: Radiotherapy is a common treatment for cancer patients. Although the incidence of error is low, errors can be severe or affect significant numbers of patients. In addition, errors will often not manifest until long periods after treatment. This study describes the development of an incident reporting tool that allows categorical analysis and time-trend reporting, covering the first 3 years of use. Methods and Materials: A radiotherapy-specific incident analysis system was established. Staff members were encouraged to report actual errors and near-miss events detected at the prescription, simulation, planning, or treatment phases of radiotherapy delivery. Trend reporting was reviewed monthly. Results: Reports were analyzed for the first 3 years of operation (May 2004-2007). A total of 688 reports was received during the study period. The actual error rate was 0.2% per treatment episode. During the study period, the actual error rate reduced significantly from 1% per year to 0.3% per year (p < 0.001), as did the total event report rate (p < 0.0001). There were 3.5 times as many near misses reported as actual errors. Conclusions: This system has allowed real-time analysis of events within a radiation oncology department and a reduced error rate through a focus on learning and prevention from the near-miss reports. Plans are underway to develop this reporting tool for Australia and New Zealand.

  2. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
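
    The offline/online split of component (iii) is easy to see in a toy linear-algebra sketch. The Python fragment below assumes a hypothetical affine system A(mu) = A0 + mu*A1 with diagonal stand-in matrices rather than a PDE discretization, and omits the a posteriori error bound of component (ii):

        import numpy as np

        n = 400
        rng = np.random.default_rng(1)
        A0 = np.diag(2.0 + rng.random(n))   # stand-ins for the affine blocks
        A1 = np.diag(rng.random(n))
        f = np.ones(n)
        l = f / n                           # output functional: mean of solution

        # Offline stage: snapshots at selected parameter points, orthonormalized.
        mus_train = np.linspace(0.1, 10.0, 8)
        S = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in mus_train])
        W, _ = np.linalg.qr(S)              # reduced basis W (n x N)

        # Project the affine blocks once; online cost then depends only on N.
        A0r, A1r = W.T @ A0 @ W, W.T @ A1 @ W
        fr, lr = W.T @ f, W.T @ l

        def output_online(mu):
            """Online stage: small N x N solve for a new parameter value."""
            return lr @ np.linalg.solve(A0r + mu * A1r, fr)

        mu_new = 3.7
        s_full = l @ np.linalg.solve(A0 + mu_new * A1, f)  # truth, for comparison
        print(f"RB output {output_online(mu_new):.6f}, full output {s_full:.6f}")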

  3. Simultaneous localization and calibration for electromagnetic tracking systems.

    PubMed

    Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor

    2016-06-01

    In clinical environments, field distortion can cause significant electromagnetic tracking errors. Therefore, dynamic calibration of electromagnetic tracking systems is essential to compensate for measurement errors. It is proposed to integrate the motion model of the tracked instrument with redundant EM sensor observations and to apply a simultaneous localization and mapping algorithm in order to accurately estimate the pose of the instrument and create a map of the field distortion in real time. Experiments were conducted in the presence of ferromagnetic and electrically conductive field-distorting objects, and the results were compared with those of a conventional sensor fusion approach. The proposed method reduced the tracking error from 3.94±1.61 mm to 1.82±0.62 mm in the presence of steel, and from 0.31±0.22 mm to 0.11±0.14 mm in the presence of aluminum. With reduced tracking error and independence from external tracking devices or pre-operative calibrations, the approach is promising for reliable EM navigation in various clinical procedures. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Combining empirical approaches and error modelling to enhance predictive uncertainty estimation in extrapolation for operational flood forecasting. Tests on flood events on the Loire basin, France.

    NASA Astrophysics Data System (ADS)

    Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles

    2017-04-01

    An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e., prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French national and regional flood forecasting services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water-level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of the past forecasting errors of deterministic models was selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach built on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, we attempt to combine the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken predictive reliability for large floods, this error model is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References: Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds, AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
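
    For readers unfamiliar with the log-sinh transformation of Wang et al. (2012), it maps a value y to T(y) = (1/b) log(sinh(a + b y)). A minimal Python sketch of its variance-stabilizing effect on synthetic, heteroscedastic discharge errors (the values of a and b are hypothetical, not fitted as in the paper) is:

        import numpy as np

        def log_sinh(y, a, b):
            return np.log(np.sinh(a + b * y)) / b

        def log_sinh_inv(z, a, b):
            # inverse transform, for mapping results back to discharge space
            return (np.arcsinh(np.exp(b * z)) - a) / b

        rng = np.random.default_rng(2)
        sim = rng.uniform(10, 1000, 5000)                # simulated discharge
        obs = sim * (1 + rng.normal(0, 0.15, sim.size))  # multiplicative error
        a, b = 1.0, 0.01                                 # hypothetical values
        resid_raw = obs - sim
        resid_tr = log_sinh(obs, a, b) - log_sinh(sim, a, b)

        # Residual spread on low vs high flows: the transformed residuals
        # should be much closer to homoscedastic.
        lo, hi = sim < 200, sim > 800
        print(np.std(resid_raw[lo]), np.std(resid_raw[hi]))
        print(np.std(resid_tr[lo]), np.std(resid_tr[hi]))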

  5. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
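
    A small numerical sketch of the idea, with a hypothetical ill-conditioned map L standing in for the dynamics-dependent transformation, shows how the spectral decay of P = L Q L^T governs the quality of a rank-k representation:

        import numpy as np

        n, rng = 200, np.random.default_rng(3)
        L = rng.normal(size=(n, n)) @ np.diag(0.9 ** np.arange(n))  # ill-conditioned
        Q = np.eye(n)                                               # white forcing
        P = L @ Q @ L.T                                             # error covariance

        # Best rank-k approximation (in spectral norm) via the eigendecomposition.
        w, V = np.linalg.eigh(P)
        w, V = w[::-1], V[:, ::-1]                                  # descending order
        for k in (5, 20, 50):
            Pk = (V[:, :k] * w[:k]) @ V[:, :k].T
            rel = np.linalg.norm(P - Pk, 2) / np.linalg.norm(P, 2)
            print(f"rank {k:3d}: relative spectral-norm error {rel:.2e}")
        # The faster the spectrum of P decays -- governed by the conditioning
        # of L -- the smaller k can be for a given accuracy.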

  6. The design and analysis of single flank transmission error tester for loaded gears

    NASA Technical Reports Server (NTRS)

    Houser, D. R.; Bassett, D. E.

    1985-01-01

    Due to geometrical imperfections in gears and finite tooth stiffnesses, the motion transmitted from an input gear shaft to an output gear shaft will not have conjugate action. In order to strengthen the understanding of transmission error and to verify mathematical models of gear transmission error, a test stand that will measure the transmission error of a gear pair at operating loads, but at reduced speeds, would be desirable. This document describes the design and development of a loaded transmission error tester. For a gear box with a gear ratio of one, few tooth meshing combinations will occur during a single test. In order to observe the effects of different tooth mesh combinations and to increase the ability to load test gear pairs with higher gear ratios, the system was designed around a gear box with a gear ratio of two.

  7. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  8. Finite-element time evolution operator for the anharmonic oscillator

    NASA Technical Reports Server (NTRS)

    Milton, Kimball A.

    1995-01-01

    The finite-element approach to lattice field theory is both highly accurate (relative errors approximately 1/N(exp 2), where N is the number of lattice points) and exactly unitary (in the sense that canonical commutation relations are exactly preserved at the lattice sites). In this talk I construct matrix elements for dynamical variables and for the time evolution operator for the anharmonic oscillator, for which the continuum Hamiltonian is H = p(exp 2)/2 + lambda q(exp 4)/4. Construction of such matrix elements does not require solving the implicit equations of motion. Low order approximations turn out to be extremely accurate. For example, the matrix element of the time evolution operator in the harmonic oscillator ground state gives a result for the anharmonic oscillator ground state energy accurate to better than 1 percent, while a two-state approximation reduces the error to less than 0.1 percent.

  9. Human performance in the modern cockpit

    NASA Technical Reports Server (NTRS)

    Dismukes, R. K.; Cohen, M. M.

    1992-01-01

    This panel was organized by the Aerospace Human Factors Committee to illustrate behavioral research on the perceptual, cognitive, and group processes that determine crew effectiveness in modern cockpits. Crew reactions to the introduction of highly automated systems in the cockpit will be reported on. Automation can improve operational capabilities and efficiency and can reduce some types of human error, but may also introduce entirely new opportunities for error. The problem solving and decision making strategies used by crews led by captains with various personality profiles will be discussed. Also presented will be computational approaches to modeling the cognitive demands of cockpit operations and the cognitive capabilities and limitations of crew members. Factors contributing to aircrew deviations from standard operating procedures and misuse of checklists, often leading to violations, incidents, or accidents, will be examined. The mechanisms of visual perception pilots use in aircraft control and the implications of these mechanisms for effective design of visual displays will be discussed.

  10. Operational Activations Of Maritime Surveillance Services Within The Framework Of MARISS, NEREIDS And SAGRES Projects

    NASA Astrophysics Data System (ADS)

    Margarit, G.

    2013-12-01

    This paper presents the results obtained by GMV in the maritime surveillance operational activations conducted in a set of research projects. These activations have been actively supported by users, whose feedback has been essential for better understanding their needs and the most urgently requested improvements. Different domains have been evaluated, from purely theoretical and scientific background (in terms of processing algorithms) up to purely logistic issues (IT configuration, strategies for improving system performance and avoiding bottlenecks, parallelization, and back-up procedures). In all cases, automation is the key word, because users need near-real-time operations in which the interaction of human operators is minimized. In addition, automation reduces human-derived errors and provides better error-tracking procedures. In the paper, different examples will be depicted and analysed. For reasons of space, only the most representative ones will be selected. Feedback from users will be included and analysed as well.

  11. Pulsed UV laser technologies for ophthalmic surgery

    NASA Astrophysics Data System (ADS)

    Razhev, A. M.; Chernykh, V. V.; Bagayev, S. N.; Churkin, D. S.; Kargapol'tsev, E. S.; Iskakov, I. A.; Ermakova, O. V.

    2017-01-01

    The paper provides an overview of the results of multiyear joint research by a team from the Institute of Laser Physics SB RAS together with the NF IRTC “Eye Microsurgery” for the period from 1988 to the present, in which laser medical technologies were first proposed and experimentally realized for the correction of refractive errors (known today as LASIK), the treatment of ophthalmic herpes, and open-angle glaucoma. The use of a UV excimer KrCl laser with a wavelength of 222 nm is proposed for refractive-error correction operations. The same laser emission is also the most suitable for the treatment of ophthalmic herpes, because it has a high clinical effect combined with many years of absence of recurrence. A minimally invasive technique for glaucoma operations using an excimer XeCl laser (λ=308 nm) is developed. Its wavelength allows performing all stages of glaucoma operations, while the laser head itself has high stability and a long lifetime, which significantly reduces operating costs compared with other types of lasers.

  12. Design and simulation of sensor networks for tracking Wifi users in outdoor urban environments

    NASA Astrophysics Data System (ADS)

    Thron, Christopher; Tran, Khoi; Smith, Douglas; Benincasa, Daniel

    2017-05-01

    We present a proof-of-concept investigation into the use of sensor networks for tracking of WiFi users in outdoor urban environments. Sensors are fixed, and are capable of measuring signal power from users' WiFi devices. We derive a maximum likelihood estimate for user location based on instantaneous sensor power measurements. The algorithm takes into account the effects of power control, and is self-calibrating in that the signal power model used by the location algorithm is adjusted and improved as part of the operation of the network. Simulation results to verify the system's performance are presented. The simulation scenario is based on a 1.5 km2 area of lower Manhattan. The self-calibration mechanism was verified for initial rms (root mean square) errors of up to 12 dB in the channel power estimates: rms errors were reduced by over 60% in 300 track-hours, in systems with limited power control. Under typical operating conditions with (without) power control, location rms errors are about 8.5 (5) meters with 90% accuracy within 9 (13) meters, for both pedestrian and vehicular users. The distance error distributions for smaller distances (<30 m) are well approximated by an exponential distribution, while the distributions for large distance errors have fat tails. The issue of optimal sensor placement in the sensor network is also addressed. We specify a linear programming algorithm for determining sensor placement for networks with a reduced number of sensors. In our test case, the algorithm produces a network with 18.5% fewer sensors with comparable estimation accuracy. Finally, we discuss future research directions for improving the accuracy and capabilities of sensor network systems in urban environments.
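
    A bare-bones version of such a maximum likelihood location estimate, using a grid search under a log-distance path-loss model with Gaussian shadowing (illustrative parameters only, ignoring the paper's power-control and self-calibration machinery), might look like:

        import numpy as np

        P0, GAMMA, SIGMA = -30.0, 3.0, 6.0   # dBm at 1 m, path-loss exponent, dB

        def rx_power(user, sensors, rng):
            d = np.linalg.norm(sensors - user, axis=1)
            p = P0 - 10 * GAMMA * np.log10(np.maximum(d, 1.0))
            return p + rng.normal(0, SIGMA, len(sensors))   # shadowing noise

        def ml_locate(meas, sensors, extent=500.0, step=2.0):
            xs = np.arange(0.0, extent, step)
            gx, gy = np.meshgrid(xs, xs)
            grid = np.column_stack([gx.ravel(), gy.ravel()])
            d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
            pred = P0 - 10 * GAMMA * np.log10(np.maximum(d, 1.0))
            ll = -np.sum((pred - meas) ** 2, axis=1)   # Gaussian log-likelihood
            return grid[np.argmax(ll)]

        rng = np.random.default_rng(4)
        sensors = rng.uniform(0, 500, size=(25, 2))  # fixed sensors, 500 m square
        user = np.array([210.0, 340.0])
        est = ml_locate(rx_power(user, sensors, rng), sensors)
        print("true", user, "estimated", est, "error", np.linalg.norm(est - user))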

  13. Managing human error in aviation.

    PubMed

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through teamwork. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  14. Low Power Operation of Temperature-Modulated Metal Oxide Semiconductor Gas Sensors.

    PubMed

    Burgués, Javier; Marco, Santiago

    2018-01-25

    Mobile applications based on gas sensing present new opportunities for low-cost air quality monitoring, safety, and healthcare. Metal oxide semiconductor (MOX) gas sensors represent the most prominent technology for integration into portable devices, such as smartphones and wearables. Traditionally, MOX sensors have been continuously powered to increase the stability of the sensing layer. However, continuous powering is not feasible in many battery-operated applications due to power consumption limitations or the intended intermittent device operation. This work benchmarks two low-power modes, duty-cycling and on-demand, against continuous powering. The duty-cycling mode periodically turns the sensors on and off and represents a trade-off between power consumption and stability. On-demand operation achieves the lowest power consumption by powering the sensors only while taking a measurement. Twelve thermally modulated SB-500-12 (FIS Inc., Jacksonville, FL, USA) sensors were exposed to low concentrations of carbon monoxide (0-9 ppm) with environmental conditions, such as ambient humidity (15-75% relative humidity) and temperature (21-27 °C), varying within the indicated ranges. Partial Least Squares (PLS) models were built using calibration data, and the prediction error in external validation samples was evaluated during the two weeks following calibration. We found that on-demand operation produced a deformation of the sensor conductance patterns, which increased the prediction error by almost a factor of 5 compared to continuous operation (2.2 versus 0.45 ppm). Applying a 10% duty-cycling operation with 10-min periods reduced this prediction error to a factor of 2 (0.9 versus 0.45 ppm). The proposed duty-cycling powering scheme saved up to 90% energy compared to the continuous operating mode. This low-power mode may be advantageous for applications that do not require continuous and periodic measurements and can tolerate slightly higher prediction errors.
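
    As a sketch of the calibration step, a PLS model can be fit on calibration samples and evaluated on later validation samples; the fragment below substitutes synthetic features for the thermally modulated conductance patterns of the paper:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(5)
        n_samples, n_features = 300, 40
        conc = rng.uniform(0, 9, n_samples)        # ppm CO, as in the study
        loadings = rng.normal(size=n_features)
        X = np.outer(conc, loadings) + rng.normal(0, 0.5, (n_samples, n_features))

        pls = PLSRegression(n_components=5)
        pls.fit(X[:200], conc[:200])               # calibration samples
        pred = pls.predict(X[200:]).ravel()        # external validation samples
        rmse = np.sqrt(np.mean((pred - conc[200:]) ** 2))
        print(f"validation RMSE: {rmse:.2f} ppm")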

  15. Error Rates in Users of Automatic Face Recognition Software

    PubMed Central

    White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631

  16. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than that of KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130

  17. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than that of KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
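
    The role of zero velocity as a reference observation can be illustrated with a one-dimensional bootstrap particle filter that infers an accelerometer bias during a quasi-stationary period; this toy sketch does not reproduce the paper's attitude/DCM error states:

        import numpy as np

        rng = np.random.default_rng(6)
        n_p, n_steps, dt = 1000, 200, 0.01
        accel_bias = 0.3                    # unknown sensor bias (m/s^2)

        # Particle state: [velocity, bias]; the bias must be inferred from the
        # fact that the true velocity is zero while stationary.
        parts = np.column_stack([rng.normal(0, 0.1, n_p), rng.normal(0, 1.0, n_p)])
        weights = np.full(n_p, 1.0 / n_p)

        for _ in range(n_steps):
            accel_meas = accel_bias + rng.normal(0, 0.05)  # true accel is zero
            # Propagate: integrate measured acceleration minus each particle's bias.
            parts[:, 0] += (accel_meas - parts[:, 1]) * dt + rng.normal(0, 1e-3, n_p)
            parts[:, 1] += rng.normal(0, 1e-3, n_p)        # roughening, keeps diversity
            # Zero-velocity update: observed velocity is 0 with small noise.
            weights *= np.exp(-0.5 * (parts[:, 0] / 0.02) ** 2)
            weights /= weights.sum()
            if 1.0 / np.sum(weights ** 2) < n_p / 2:       # resample when degenerate
                idx = rng.choice(n_p, n_p, p=weights)
                parts, weights = parts[idx], np.full(n_p, 1.0 / n_p)

        est = np.average(parts[:, 1], weights=weights)
        print(f"estimated bias: {est:.3f} (true {accel_bias})")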

  18. Crew resource management: using aviation techniques to improve operating room safety.

    PubMed

    Ricci, Michael A; Brumsted, John R

    2012-04-01

    Since the publication of the Institute of Medicine report estimating nearly 100,000 deaths per year from medical errors, hospitals and physicians have a renewed focus upon error reduction. We implemented a surgical crew resource management (CRM) program for all operating room (OR) personnel. In our academic medical center, 19,000 procedures per year are performed in 27 operating rooms. Mandatory CRM training was implemented for all peri-operative personnel. Aviation techniques introduced included a pre-operative checklist and brief, a post-operative debrief, read-and-initial files, and various other aviation-based techniques. Compliance with conduct of the brief/debrief was monitored, as were wrong-site surgeries and retained foreign body events. The malpractice insurance database was also queried for claims in the periods before and after training. Initial training was accomplished for 517 people, including all anesthesiologists, surgeons, nurses, technicians, and OR assistants. Pre-operative briefing increased from 6.7% to 99% within 4 mo. Wrong-site surgeries and retained foreign bodies decreased from a high of seven in 2007 to none in 2008, but, after 14 mo without additional training, these rose to five in 2009. Malpractice expenses (payouts and legal fees) totaled $793,000 (2003-2007), but have been zero since 2008. CRM training and implementation had an impact on reducing the incidence of wrong-site surgery and retained foreign bodies in our operating rooms. However, constant reinforcement and refresher training are necessary for sustained results. Though no one technique can prevent all errors, CRM can effect culture change, producing a safer environment.

  19. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most-significant-bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction, and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the Discrete Cosine Transform shows, on average, 33% to 46% reduction in energy consumption while incurring only 0.5 dB to 1.5 dB loss in PSNR.
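
    The MSB-truncation idea behind the SAD scheme is straightforward to sketch: drop the low-order bits before the absolute differences are accumulated, so the datapath narrows while blocks with large SADs, the ones rejected in motion search, remain clearly separated (illustrative Python, not the dissertation's hardware design):

        import numpy as np

        def sad_full(a, b):
            return np.abs(a.astype(int) - b.astype(int)).sum()

        def sad_msb(a, b, drop_bits=4):
            # Truncate to the most significant bits, sum, then rescale.
            da = (a >> drop_bits).astype(int)
            db = (b >> drop_bits).astype(int)
            return int(np.abs(da - db).sum()) << drop_bits

        rng = np.random.default_rng(7)
        ref = rng.integers(0, 256, (16, 16), dtype=np.uint8)
        cand = np.clip(ref.astype(int) + rng.integers(-8, 9, (16, 16)),
                       0, 255).astype(np.uint8)
        print("full SAD:", sad_full(ref, cand),
              " MSB-truncated SAD:", sad_msb(ref, cand))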

  20. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  1. Energy-efficient quantum computing

    NASA Astrophysics Data System (ADS)

    Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko

    2017-04-01

    In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limitations of energy consumption in scalable quantum computing are. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type has been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme which reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.

  2. APOLLO clock performance and normal point corrections

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Murphy, T. W., Jr.; Colmenares, N. R.; Battat, J. B. R.

    2017-12-01

    The Apache Point Observatory lunar laser-ranging operation (APOLLO) has produced a large volume of high-quality lunar laser ranging (LLR) data since it began operating in 2006. For most of this period, APOLLO has relied on a GPS-disciplined, high-stability quartz oscillator as its frequency and time standard. The recent addition of a cesium clock as part of a timing calibration system initiated a comparison campaign between the two clocks. This has allowed correction of APOLLO range measurements, called normal points, during the overlap period, but also revealed a mechanism to correct for systematic range offsets due to clock errors in historical APOLLO data. Drift of the GPS clock on ∼1000 s timescales contributed typically 2.5 mm of range error to APOLLO measurements, and we find that this may be reduced to ∼1.6 mm on average. We present here a characterization of APOLLO clock errors, the method by which we correct historical data, and the resulting statistics.

  3. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    NASA Astrophysics Data System (ADS)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increasing memory density, but the problem of crosstalk must be dealt with. We apply Low Density Parity Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A suitable method is applied to reduce the crosstalk, and simulation results show that this operation is useful for improving Bit Error Rate (BER) performance. At the same time, we can conclude that LDPC codes outperform RS codes in the crosstalk channel.

  4. Virtual Reality Robotic Surgery Warm-Up Improves Task Performance in a Dry Lab Environment: A Prospective Randomized Controlled Study

    PubMed Central

    Lendvay, Thomas S.; Brand, Timothy C.; White, Lee; Kowalewski, Timothy; Jonnadula, Saikiran; Mercer, Laina; Khorsand, Derek; Andros, Justin; Hannaford, Blake; Satava, Richard M.

    2014-01-01

    Background Pre-operative simulation “warm-up” has been shown to improve performance and reduce errors in novice and experienced surgeons, yet existing studies have only investigated conventional laparoscopy. We hypothesized that a brief virtual reality (VR) robotic warm-up would enhance robotic task performance and reduce errors. Study Design In a two-center randomized trial, fifty-one residents and experienced minimally invasive surgery faculty in General Surgery, Urology, and Gynecology underwent a validated robotic surgery proficiency curriculum on a VR robotic simulator and on the da Vinci surgical robot. Once successfully achieving performance benchmarks, surgeons were randomized to either receive a 3-5 minute VR simulator warm-up or read a leisure book for 10 minutes prior to performing similar and dissimilar (intracorporeal suturing) robotic surgery tasks. The primary outcomes compared were task time, tool path length, economy of motion, and technical and cognitive errors. Results Task time (-29.29 sec, p=0.001, 95% CI -47.03, -11.56), path length (-79.87 mm, p=0.014, 95% CI -144.48, -15.25), and cognitive errors were reduced in the warm-up group compared to the control group for similar tasks. Global technical errors in intracorporeal suturing (0.32, p=0.020, 95% CI 0.06, 0.59) were reduced after the dissimilar VR task. When surgeons were stratified by prior robotic and laparoscopic clinical experience, the more experienced surgeons (n=17) demonstrated significant improvements from warm-up in task time (-53.5 sec, p=0.001, 95% CI -83.9, -23.0) and economy of motion (0.63 mm/sec, p=0.007, 95% CI 0.18, 1.09), whereas improvement in these metrics was not statistically significant in the less experienced cohort (n=34). Conclusions We observed a significant performance improvement and error reduction rate among surgeons of varying experience after VR warm-up for basic robotic surgery tasks. In addition, the VR warm-up reduced errors on a more complex task (robotic suturing), suggesting the generalizability of the warm-up. PMID:23583618

  5. Ultrasound fusion image error correction using subject-specific liver motion model and automatic image registration.

    PubMed

    Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi

    2016-12-01

    Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver-motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) during fusion imaging, compensate for liver motion first using the motion model, and then use an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the dependency of error correction on the initial registration of ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91 percent reduction with less than 5 percent error when predicting voltage regulator operations.
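
    The core of a variable-time-step QSTS solver can be sketched as a loop that reruns the expensive power-flow solve only when the inputs have moved by more than a tolerance; solve_power_flow below is a hypothetical stand-in for the full circuit solver:

        import numpy as np

        def solve_power_flow(load, pv):
            return 1.0 - 0.05 * load + 0.03 * pv    # toy feeder voltage (p.u.)

        rng = np.random.default_rng(8)
        t = np.arange(0, 24 * 3600, 60)             # one day at 1-min resolution
        load = 0.6 + 0.3 * np.sin(2 * np.pi * t / 86400) + rng.normal(0, 0.005, t.size)
        pv = np.clip(np.sin(2 * np.pi * (t - 6 * 3600) / 86400), 0, None)

        tol, solves, v = 0.02, 0, np.empty(t.size)
        last = None
        for i in range(t.size):
            if last is None or max(abs(load[i] - last[0]), abs(pv[i] - last[1])) > tol:
                v_now = solve_power_flow(load[i], pv[i])    # full solve
                last, solves = (load[i], pv[i]), solves + 1
            v[i] = v_now                                    # carry result forward
        print(f"solved {solves} of {t.size} steps "
              f"({100 * (1 - solves / t.size):.0f}% of solves avoided)")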

  7. Determining the optimal window length for pattern recognition-based myoelectric control: balancing the competing effects of classification error and controller delay.

    PubMed

    Smith, Lauren H; Hargrove, Levi J; Lock, Blair A; Kuiken, Todd A

    2011-04-01

    Pattern recognition-based control of myoelectric prostheses has shown great promise in research environments, but has not been optimized for use in a clinical setting. To explore the relationship between classification error, controller delay, and real-time controllability, 13 able-bodied subjects were trained to operate a virtual upper-limb prosthesis using pattern recognition of electromyogram (EMG) signals. Classification error and controller delay were varied by training different classifiers with a variety of analysis window lengths ranging from 50 to 550 ms and either two or four EMG input channels. Offline analysis showed that classification error decreased with longer window lengths (p < 0.01). Real-time controllability was evaluated with the target achievement control (TAC) test, which prompted users to maneuver the virtual prosthesis into various target postures. The results indicated that user performance improved with lower classification error (p < 0.01) and was reduced with longer controller delay (p < 0.01), as determined by the window length. Therefore, both of these effects should be considered when choosing a window length; it may be beneficial to increase the window length if this results in a reduced classification error, despite the corresponding increase in controller delay. For the system employed in this study, the optimal window length was found to be between 150 and 250 ms, which is within acceptable controller delays for conventional multistate amplitude controllers.
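
    The trade-off can be reproduced with a toy experiment: mean-absolute-value (MAV) features computed over longer windows separate two synthetic "EMG" classes more cleanly, while the controller delay grows with the window (the amplitudes and the half-window delay convention are illustrative, not the study's protocol):

        import numpy as np

        rng = np.random.default_rng(9)
        fs = 1000   # Hz

        def mav_features(window_ms, amp, n=500):
            ns = int(fs * window_ms / 1000)
            sig = rng.normal(0, amp, (n, ns))   # EMG modeled as zero-mean noise
            return np.abs(sig).mean(axis=1)     # one MAV feature per window

        for window_ms in (50, 150, 250, 550):
            f_rest = mav_features(window_ms, 1.0)    # rest
            f_contr = mav_features(window_ms, 1.3)   # contraction
            thresh = (f_rest.mean() + f_contr.mean()) / 2   # threshold classifier
            err = 0.5 * ((f_rest > thresh).mean() + (f_contr < thresh).mean())
            print(f"{window_ms:4d} ms window: error {100 * err:4.1f}%, "
                  f"added delay ~{window_ms / 2:.0f} ms")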

  8. Effect of slope errors on the performance of mirrors for x-ray free electron laser applications

    DOE PAGES

    Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P.

    2015-12-02

    In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height error. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.

  9. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  10. Effect of slope errors on the performance of mirrors for x-ray free electron laser applications.

    PubMed

    Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P

    2015-12-14

    In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height error. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.

  11. Reduced circuit implementation of encoder and syndrome generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trager, Barry M; Winograd, Shmuel

    An error correction method and system includes an encoder and syndrome generator that operate in parallel to reduce the amount of circuitry used to compute check symbols and syndromes for error-correcting codes. The system and method compute the contributions to the syndromes and check symbols 1 bit at a time instead of 1 symbol at a time. As a result, the even syndromes can be computed as powers of the odd syndromes. Further, the system assigns symbol addresses so that there are, for an example code over GF(2^8) with 72 symbols, three (3) blocks of addresses which differ by a cube root of unity, allowing the data symbols to be combined to reduce the size and complexity of the odd-syndrome circuits. Further, the implementation circuit for generating check symbols is derived from the syndrome circuit using the inverse of the part of the syndrome matrix for the check locations.

  12. Analyzing Effect of System Inertia on Grid Frequency Forecasting Using Two Stage Neuro-Fuzzy System

    NASA Astrophysics Data System (ADS)

    Chourey, Divyansh R.; Gupta, Himanshu; Kumar, Amit; Kumar, Jitesh; Kumar, Anand; Mishra, Anup

    2018-04-01

    Frequency forecasting is an important aspect of power system operation. The system frequency varies with load-generation imbalance. Frequency variation depends upon various parameters, including system inertia. System inertia determines the rate of fall of frequency after a disturbance in the grid. However, the inertia of the system is usually not considered while forecasting power system frequency during planning and operation, which leads to significant forecasting errors. In this paper, the effect of inertia on frequency forecasting is analysed for a particular grid system, and a parameter equivalent to system inertia is introduced. This parameter is used to forecast the frequency of a typical power grid for any instant of time. The system gives appreciable results with reduced error.

  13. Geostationary Operational Environmental Satellite (GOES-N report). Volume 2: Technical appendix

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The contents include: operation with inclinations up to 3.5 deg to extend life; earth sensor improvements to reduce noise; sensor configurations studied; momentum management system design; reaction wheel induced dynamic interaction; controller design; spacecraft motion compensation; analog filtering; GFRP servo design - modern control approach; feedforward compensation as applied to GOES-1 sounder; discussion of allocation of navigation, inframe registration and image-to-image error budget overview; and spatial response and cloud smearing study.

  14. Optimized tomography of continuous variable systems using excitation counting

    NASA Astrophysics Data System (ADS)

    Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang

    2016-11-01

    We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound on the reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
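
    The role of the condition number in such a reconstruction-error bound can be illustrated with a generic linear sensing map; the random matrix below merely stands in for the displaced-excitation-counting map of the protocol:

        import numpy as np

        rng = np.random.default_rng(10)
        m, n = 120, 64                           # measurement settings x state dim

        for scale in (1.0, 1e-3):                # well- vs ill-conditioned map
            A = rng.normal(size=(m, n))
            A[:, -1] *= scale                    # weaken one state direction
            x = rng.normal(size=n)
            y = A @ x + rng.normal(0, 1e-4, m)   # noisy measurements
            x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
            rel = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
            print(f"cond(A) = {np.linalg.cond(A):9.1e}, relative error {rel:.2e}")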

  15. Cross-coupled control for all-terrain rovers.

    PubMed

    Reina, Giulio

    2013-01-08

    Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors.
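
    A minimal sketch of the cross-coupling idea: each wheel runs a proportional speed loop, and the synchronization error (the difference between wheel speeds) is fed back to the two loops with opposite signs, so a lagging wheel is sped up and a leading one slowed down (gains and the first-order motor model are illustrative only):

        import numpy as np

        kp, kc, dt = 2.0, 4.0, 0.01
        target = 1.0                      # common speed setpoint (m/s)
        gain = np.array([1.0, 0.7])       # wheel 2 responds more weakly (terrain)
        v = np.zeros(2)                   # wheel speeds

        for _ in range(500):
            e = target - v                # individual tracking errors
            e_sync = v[0] - v[1]          # synchronization error
            u = kp * e + kc * np.array([-e_sync, e_sync])   # cross-coupled term
            v += gain * u * dt            # crude first-order motor response
        print(f"speeds: {v[0]:.3f}, {v[1]:.3f}; sync error {abs(v[0] - v[1]):.4f}")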

  16. Wind Energy Forecasting: A Collaboration of the National Center for Atmospheric Research (NCAR) and Xcel Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parks, K.; Wan, Y. H.; Wiener, G.

    2011-10-01

    The focus of this report is the wind forecasting system developed during this contract period, with results of performance through the end of 2010. The report is intentionally high-level, with technical details disseminated at various conferences and in academic papers. At the end of 2010, Xcel Energy managed the output of 3372 megawatts of installed wind energy. The wind plants span three operating companies, serving customers in eight states, and three market structures. The great majority of the wind energy is contracted through power purchase agreements (PPAs). The remainder is utility owned, Qualifying Facilities (QF), distributed resources (i.e., 'behind the meter'), or merchant entities within Xcel Energy's Balancing Authority footprints. Regardless of the contractual or ownership arrangements, the output of the wind energy is balanced by Xcel Energy's generation resources that include fossil, nuclear, and hydro-based facilities that are owned or contracted via PPAs. These facilities are committed and dispatched or bid into day-ahead and real-time markets by Xcel Energy's Commercial Operations department. Wind energy complicates the short- and long-term planning goals of least-cost, reliable operations. Due to the uncertainty of wind energy production, inherent suboptimal commitment and dispatch associated with imperfect wind forecasts drives up costs. For example, a gas combined cycle unit may be turned on, or committed, in anticipation of low winds. The reality is winds stayed high, forcing this unit and others to run, or be dispatched, to sub-optimal loading positions. In addition, commitment decisions are frequently irreversible due to minimum up and down time constraints. That is, a dispatcher lives with inefficient decisions made in prior periods. In general, uncertainty contributes to conservative operations - committing more units and keeping them on longer than may have been necessary for purposes of maintaining reliability. The downside is costs are higher. In organized electricity markets, units that are committed for reliability reasons are paid their offer price even when prevailing market prices are lower. Often, these uplift charges are allocated to market participants that caused the inefficient dispatch in the first place. Thus, wind energy facilities are burdened with their share of costs proportional to their forecast errors. For Xcel Energy, wind energy uncertainty costs manifest depending on specific market structures. In the Public Service of Colorado (PSCo), inefficient commitment and dispatch caused by wind uncertainty increases fuel costs. Wind resources participating in the Midwest Independent System Operator (MISO) footprint make substantial payments in the real-time markets to true-up their day-ahead positions and are additionally burdened with deviation charges called a Revenue Sufficiency Guarantee (RSG) to cover out-of-market costs associated with operations. Southwest Public Service (SPS) wind plants cause commitment inefficiencies and are charged Southwest Power Pool (SPP) imbalance payments due to wind uncertainty and variability. Wind energy forecasting helps mitigate these costs. Wind integration studies for the PSCo and Northern States Power (NSP) operating companies have projected increasing costs as more wind is installed on the system due to forecast error. It follows that reducing forecast error would reduce these costs.
This is echoed by large-scale studies in neighboring regions and states that have recommended adoption of state-of-the-art wind forecasting tools in day-ahead and real-time planning and operations. Further, Xcel Energy concluded that reducing the normalized mean absolute error by one percent would have reduced costs in 2008 by over $1 million annually in PSCo alone. The value of reducing forecast error prompted Xcel Energy to make substantial investments in wind energy forecasting research and development.

  17. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably brings schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after personnel received adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries like medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporating the steps needed to manage and minimize error.

  18. Causes and Prevention of Laparoscopic Bile Duct Injuries

    PubMed Central

    Way, Lawrence W.; Stewart, Lygia; Gantert, Walter; Liu, Kingsway; Lee, Crystine M.; Whang, Karen; Hunter, John G.

    2003-01-01

    Objective To apply human performance concepts in an attempt to understand the causes of and prevent laparoscopic bile duct injury. Summary Background Data Powerful conceptual advances have been made in understanding the nature and limits of human performance. Applying these findings in high-risk activities, such as commercial aviation, has allowed the work environment to be restructured to substantially reduce human error. Methods The authors analyzed 252 laparoscopic bile duct injuries according to the principles of the cognitive science of visual perception, judgment, and human error. The injury distribution was class I, 7%; class II, 22%; class III, 61%; and class IV, 10%. The data included operative radiographs, clinical records, and 22 videotapes of original operations. Results The primary cause of error in 97% of cases was a visual perceptual illusion. Faults in technical skill were present in only 3% of injuries. Knowledge and judgment errors were contributory but not primary. Sixty-four injuries (25%) were recognized at the index operation; the surgeon identified the problem early enough to limit the injury in only 15 (6%). In class III injuries the common duct, erroneously believed to be the cystic duct, was deliberately cut. This stemmed from an illusion of object form due to a specific uncommon configuration of the structures and the heuristic nature (unconscious assumptions) of human visual perception. The videotapes showed the persuasiveness of the illusion, and many operative reports described the operation as routine. Class II injuries resulted from a dissection too close to the common hepatic duct. Fundamentally an illusion, it was contributed to in some instances by working too deep in the triangle of Calot. Conclusions These data show that errors leading to laparoscopic bile duct injuries stem principally from misperception, not errors of skill, knowledge, or judgment. The misperception was so compelling that in most cases the surgeon did not recognize a problem. Even when irregularities were identified, corrective feedback did not occur, which is characteristic of human thinking under firmly held assumptions. These findings illustrate the complexity of human error in surgery while simultaneously providing insights. They demonstrate that automatically attributing technical complications to behavioral factors that rely on the assumption of control is likely to be wrong. Finally, this study shows that there are only a few points within laparoscopic cholecystectomy where the complication-causing errors occur, which suggests that focused training to heighten vigilance might be able to decrease the incidence of bile duct injury. PMID:12677139

  19. Impact of Tropospheric Aerosol Absorption on Ozone Retrieval from buv Measurements

    NASA Technical Reports Server (NTRS)

    Torres, O.; Bhartia, P. K.

    1998-01-01

    The impact of tropospheric aerosols on the retrieval of column ozone amounts using spaceborne measurements of backscattered ultraviolet radiation is examined. Using radiative transfer calculations, we show that uv-absorbing desert dust may introduce errors as large as 10% in ozone column amount, depending on the aerosol layer height and optical depth. Smaller errors are produced by carbonaceous aerosols that result from biomass burning. Though the error is produced by complex interactions between ozone absorption (both stratospheric and tropospheric), aerosol scattering, and aerosol absorption, a surprisingly simple correction procedure reduces the error to about 1% for a variety of aerosols and for a wide range of aerosol loading. Comparison of the corrected TOMS data with operational data indicates that though the zonal mean total ozone derived from TOMS is not significantly affected by these errors, localized effects in the tropics can be large enough to seriously affect the studies of tropospheric ozone currently under way using the TOMS data.

  20. Effective Algorithm for Detection and Correction of the Wave Reconstruction Errors Caused by the Tilt of Reference Wave in Phase-shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying

    2010-04-01

    In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the time-consuming least-squares method with iterations is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by 2 orders of magnitude.
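
    The core of such a tilt correction can be illustrated compactly. The sketch below is ours, not the authors' algorithm: it estimates the mean wrapped phase gradient of a reconstructed phase map using only elementary operations and subtracts the corresponding linear ramp; the synthetic test values are illustrative.

```python
# Minimal sketch: detect and remove a linear reference-wave tilt from a
# reconstructed 2-D phase map using only simple per-pixel operations.
import numpy as np

def remove_tilt(phase):
    """Estimate and subtract a linear phase ramp from a 2-D phase map."""
    # Mean phase step per pixel, computed on the unit circle so the
    # estimate is insensitive to 2*pi phase wrapping.
    tx = np.angle(np.mean(np.exp(1j * np.diff(phase, axis=1))))
    ty = np.angle(np.mean(np.exp(1j * np.diff(phase, axis=0))))
    ny, nx = phase.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    return phase - tx * xx - ty * yy

# Synthetic test: a smooth object phase plus an unknown reference tilt.
ny = nx = 128
yy, xx = np.mgrid[0:ny, 0:nx]
obj = 0.3 * np.sin(2 * np.pi * xx / nx) * np.cos(2 * np.pi * yy / ny)
tilted = obj + 0.05 * xx + 0.02 * yy          # tilt error to be detected
corrected = remove_tilt(tilted)
resid = corrected - obj
print("rms error before:", np.std(tilted - obj))
print("rms error after: ", np.std(resid - resid.mean()))   # piston ignored
```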

  1. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  2. Minimally invasive positioning robot system of femoral neck hollow screw implants based on x-ray error correction

    NASA Astrophysics Data System (ADS)

    Zou, Yunpeng; Xu, Ying; Hu, Lei; Guo, Na; Wang, Lifeng

    2017-01-01

    To address the high failure rate, high radiation dose, and poor positioning accuracy of traditional femoral neck surgery, this article develops a new positioning robot system for femoral neck hollow screw implants based on X-ray error correction, building on the X-ray perspective principle and the motion principle of the 6-DOF (degree of freedom) serial robot UR (Universal Robots). Compared with computer-assisted navigation systems, this system offers better positioning accuracy and simpler operation, and because it requires no additional visual tracking equipment it is considerably cheaper. During surgery, the surgeon plans the operation path and the pose of the marker needle from the anteroposterior and lateral X-ray images of the patient. The pixel ratio is then calculated from the ratio of the actual length of the marker line to its length in the image. From the relative position between the operation path and the guide pin, and the fixed relationship between the guide pin and the UR robot, the required robot motion is computed, and the robot drives the positioning guide pin onto the operation path. If the guide pin and the planned path do not coincide, the previous steps are repeated until they do, completing the positioning operation. Moreover, to verify the positioning accuracy, an error analysis was performed on thirty bone-model experiments; the results show that the motion accuracy of the UR robot is 0.15 mm and the integrated error is within 0.8 mm. To verify the clinical feasibility of the system, three clinical cases were analyzed: over the whole positioning process, the X-ray exposure time was 2-3 s, the number of fluoroscopic images was 3-5, and the total positioning time was 7-10 min. The results show that the system can accurately complete femoral neck positioning surgery while greatly reducing the X-ray exposure of medical staff and patients, giving it significant value in clinical application.
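
    The scale-calibration step described above can be sketched in a few lines. In the sketch below, the function names and all numbers are hypothetical; it shows only how a marker of known physical length fixes the mm-per-pixel ratio, which then converts an image-space offset between the planned path and the guide pin into a metric correction.

```python
# Hedged sketch of the pixel-ratio calibration and image-to-metric conversion.
import numpy as np

def pixel_ratio(marker_len_mm, marker_px_a, marker_px_b):
    """mm per pixel, from the marker's true length and its imaged endpoints."""
    length_px = np.linalg.norm(np.asarray(marker_px_a, float) -
                               np.asarray(marker_px_b, float))
    return marker_len_mm / length_px

def correction_mm(path_entry_px, pin_tip_px, ratio_mm_per_px):
    """In-plane translation (mm) that moves the guide pin onto the path."""
    offset_px = np.asarray(path_entry_px, float) - np.asarray(pin_tip_px, float)
    return offset_px * ratio_mm_per_px

ratio = pixel_ratio(marker_len_mm=80.0,     # hypothetical marker length
                    marker_px_a=(102, 240), marker_px_b=(502, 240))
move = correction_mm(path_entry_px=(330, 310), pin_tip_px=(318, 295),
                     ratio_mm_per_px=ratio)
print(f"scale: {ratio:.3f} mm/px, commanded in-plane move: {move} mm")
# In the real system this 2-D correction would be computed in both the AP and
# the lateral view, fused into a 3-D motion command for the UR robot, and the
# result re-checked under fluoroscopy until pin and path coincide.
```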

  3. Fourier Transform Methods. Chapter 4

    NASA Technical Reports Server (NTRS)

    Kaplan, Simon G.; Quijada, Manuel A.

    2015-01-01

    This chapter describes the use of Fourier transform spectrometers (FTS) for accurate spectrophotometry over a wide spectral range. After a brief exposition of the basic concepts of FTS operation, we discuss instrument designs and their advantages and disadvantages relative to dispersive spectrometers. We then examine how common sources of error in spectrophotometry manifest themselves when using an FTS and ways to reduce the magnitude of these errors. Examples are given of applications to both basic and derived spectrophotometric quantities. Finally, we give recommendations for choosing the right instrument for a specific application and how to ensure the accuracy of the measurement results.

  4. Application of auditory signals to the operation of an agricultural vehicle: results of pilot testing.

    PubMed

    Karimi, D; Mondor, T A; Mann, D D

    2008-01-01

    The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve the performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers was used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for the auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.

  5. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  6. Endoscopic endonasal transsphenoidal surgery: implementation of an operative and perioperative checklist.

    PubMed

    Christian, Eisha; Harris, Brianna; Wrobel, Bozena; Zada, Gabriel

    2014-01-01

    Endoscopic endonasal surgery relies heavily on specialized operative instrumentation and optimization of endocrinological and other critical adjunctive intraoperative factors. Several studies and worldwide initiatives have previously established that intraoperative and perioperative surgical checklists can minimize the incidence of and prevent adverse events. The aim of this article was to outline some of the most common considerations in the perioperative and intraoperative preparation for endoscopic endonasal transsphenoidal surgery. The authors implemented and prospectively evaluated a customized checklist at their institution in 25 endoscopic endonasal operations for a variety of sellar and skull base pathological entities. Although no major errors were detected, near misses pertaining primarily to missing components of surgical equipment or instruments were identified in 9 cases (36%). The considerations in the checklist provided in this article can serve as a basic template for further customization by centers performing endoscopic endonasal surgery, where their application may reduce the incidence of adverse or preventable errors associated with surgical treatment of sellar and skull base lesions.

  7. Spitzer Telemetry Processing System

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.

    2013-01-01

    The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.

  8. Reducing waste and errors: piloting lean principles at Intermountain Healthcare.

    PubMed

    Jimmerson, Cindy; Weber, Dorothy; Sobek, Durward K

    2005-05-01

    The Toyota Production System (TPS), based on industrial engineering principles and operational innovations, is used to achieve waste reduction and efficiency while increasing product quality. Several key tools and principles, adapted to health care, have proved effective in improving hospital operations. Value Stream Maps (VSMs), which represent the key people, material, and information flows required to deliver a product or service, distinguish between value-adding and non-value-adding steps. The one-page Problem-Solving A3 Report guides staff through a rigorous and systematic problem-solving process. In a pilot project at Intermountain Healthcare, participants made many improvements, ranging from simple changes implemented immediately (for example, heart monitor paper not being available when a patient presented with a dysrhythmia) to larger projects involving patient or information flow issues across multiple departments. Most of the improvements required little or no investment and reduced significant amounts of wasted time for front-line workers. In one unit, turnaround time for pathologist reports from an anatomical pathology lab was reduced from five to two days. TPS principles and tools are applicable to an endless variety of processes and work settings in health care and can be used to address critical challenges such as medical errors, escalating costs, and staffing shortages.

  9. Improved slow-light performance of 10 Gb/s NRZ, PSBT and DPSK signals in fiber broadband SBS.

    PubMed

    Yi, Lilin; Jaouen, Yves; Hu, Weisheng; Su, Yikai; Bigo, Sébastien

    2007-12-10

    We have demonstrated error-free operation of slow light via stimulated Brillouin scattering (SBS) in optical fiber for 10-Gb/s signals with different modulation formats, including non-return-to-zero (NRZ), phase-shaped binary transmission (PSBT), and differential phase-shift keying (DPSK). The SBS gain bandwidth is broadened by using current noise modulation of the pump laser diode. The gain shape is simply controlled by the noise density function. Super-Gaussian noise modulation of the Brillouin pump allows a flat-top, sharp-edged SBS gain spectrum, which can reduce slow-light-induced distortion in the case of a 10-Gb/s NRZ signal. The corresponding maximal delay time with error-free operation is 35 ps. We then propose the PSBT format to minimize distortions resulting from the SBS filtering effect and the dispersion accompanying slow light, because of its high spectral efficiency and strong dispersion tolerance. The sensitivity of the 10-Gb/s PSBT signal is 5.2 dB better than the NRZ case with the same 35-ps delay. A maximal delay of 51 ps with error-free operation has been achieved. Furthermore, the DPSK format is directly demodulated through a Gaussian-shaped SBS gain, which is achieved using Gaussian noise modulation of the Brillouin pump. The maximal error-free time delay after demodulation of a 10-Gb/s DPSK signal is as high as 81.5 ps, which is the best demonstrated result for 10-Gb/s slow light.
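
    The gain-shaping mechanism can be sketched numerically: to first order, the broadened SBS gain profile is the convolution of the pump power spectrum with the intrinsic (roughly 30-MHz-wide) Lorentzian Brillouin gain, so a super-Gaussian pump spectrum produces the flat-top, sharp-edged passband described above. The bandwidths, linewidth, and super-Gaussian order below are illustrative assumptions, not values from the paper.

```python
# Sketch: broadened SBS gain as pump spectrum convolved with a Lorentzian.
import numpy as np

# Frequency grid in MHz; 40-GHz span around the Brillouin shift.
f = np.linspace(-20000.0, 20000.0, 8001)
df = f[1] - f[0]

# Intrinsic Brillouin gain: a Lorentzian of ~30 MHz natural linewidth.
dnu_b = 30.0
lorentz = (dnu_b / 2) ** 2 / (f ** 2 + (dnu_b / 2) ** 2)

# Pump power spectra from Gaussian vs. super-Gaussian noise modulation.
bw = 10000.0                                  # pump broadening, MHz (10 GHz)
pump_gauss = np.exp(-((f / bw) ** 2))
pump_supergauss = np.exp(-((f / bw) ** 6))    # flat-top, sharp-edged

def broadened_gain(pump):
    """Broadened SBS gain ~ convolution of pump spectrum and Lorentzian."""
    g = np.convolve(pump, lorentz, mode="same") * df
    return g / g.max()

for name, pump in [("Gaussian", pump_gauss),
                   ("super-Gaussian", pump_supergauss)]:
    g = broadened_gain(pump)
    ripple = g[np.abs(f) < bw / 4].min()      # worst normalized in-band gain
    print(f"{name:15s} pump: min in-band gain {ripple:.3f} of peak")
```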

  10. Reduced Penalties for Disclosures of Certain Clean Air Act Violations

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  11. Methods, systems and apparatus for adjusting duty cycle of pulse width modulated (PWM) waveforms

    DOEpatents

    Gallegos-Lopez, Gabriel; Kinoshita, Michael H; Ransom, Ray M; Perisic, Milun

    2013-05-21

    Embodiments of the present invention relate to methods, systems and apparatus for controlling operation of a multi-phase machine in a vector controlled motor drive system when the multi-phase machine operates in an overmodulation region. The disclosed embodiments provide a mechanism for adjusting the duty cycle of PWM waveforms so that the correct phase voltage command signals are applied at the angle transitions. This can reduce variations/errors in the phase voltage command signals applied to the multi-phase machine so that phase current may be properly regulated, thus reducing current/torque oscillation, which can in turn improve machine efficiency and performance, as well as utilization of the DC voltage source.

  12. Robotic aortic surgery.

    PubMed

    Duran, Cassidy; Kashef, Elika; El-Sayed, Hosam F; Bismuth, Jean

    2011-01-01

    Surgical robotics was first utilized to facilitate neurosurgical biopsies in 1985, and it has since found application in orthopedics, urology, gynecology, and cardiothoracic, general, and vascular surgery. Surgical assistance systems provide intelligent, versatile tools that augment the physician's ability to treat patients by eliminating hand tremor and enabling dexterous operation inside the patient's body. Surgical robotics systems have enabled surgeons to treat otherwise untreatable conditions while also reducing morbidity and error rates, shortening operative times, reducing radiation exposure, and improving overall workflow. These capabilities have begun to be realized in two important realms of aortic vascular surgery, namely, flexible robotics for exclusion of complex aortic aneurysms using branched endografts, and robot-assisted laparoscopic aortic surgery for occlusive and aneurysmal disease.

  13. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

    PubMed Central

    Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert

    2017-01-01

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks. PMID:28932180

  14. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.

    PubMed

    Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert

    2017-01-01

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.
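
    The arithmetic trick that removes multiplications can be shown in a toy NumPy fragment. This is a schematic single-layer-pair illustration of binary {-1,+1} states with ternary {-1,0,+1} truncated errors, not the authors' pipelined FPGA implementation; sizes, thresholds, and learning rate are invented.

```python
# Toy sketch: with binary activations and ternary truncated errors, forward
# accumulation and weight updates reduce to additions, sign flips and masking.
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    return np.where(x >= 0, 1, -1)             # binary state variable

def ternarize(e, thresh):
    return np.sign(e) * (np.abs(e) > thresh)   # truncated ternary error

n_in, n_hid, n_out = 64, 32, 10
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
lr, thresh = 0.01, 0.1

x = binarize(rng.normal(size=n_in))            # binary input vector
target = np.eye(n_out)[3]

# Forward pass: x is +/-1, so W1 @ x is just signed accumulation of weights.
h = binarize(W1 @ x)
y = W2 @ h

# Backward pass: products such as outer(e_out, h) only combine values from
# {-1, 0, +1}, i.e. the update adds lr, subtracts lr, or skips each weight.
e_out = ternarize(y - target, thresh)
W2 -= lr * np.outer(e_out, h)
e_hid = ternarize(W2.T @ e_out, thresh)
W1 -= lr * np.outer(e_hid, x)
print("nonzero output errors propagated:", int(np.count_nonzero(e_out)))
```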

  15. Ultrasound transducer function: annual testing is not sufficient.

    PubMed

    Mårtensson, Mattias; Olsson, Mats; Brodin, Lars-Åke

    2010-10-01

    The objective was to follow up the study 'High incidence of defective ultrasound transducers in use in routine clinical practice' and evaluate whether annual testing is sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level. A total of 299 transducers were tested in 13 clinics at five hospitals in the Stockholm area. Approximately 7000-15,000 ultrasound examinations are carried out at these clinics every year. The transducers tested in the study had been tested and classified as fully operational 1 year before and had since been in normal use in routine clinical practice. The transducers were tested with the Sonora FirstCall Test System. There were 81 (27.1%) defective transducers found, giving a 95% confidence interval ranging from 22.1 to 32.1%. The most common transducer errors were 'delamination' of the ultrasound lens and 'break in the cable', which together constituted 82.7% of all transducer errors found. The highest error rate was found at the radiological clinics, with a mean error rate of 36.0%. There was a significant difference in error rate between the two observed ways in which the clinics handled the transducers. There was no significant difference in the error rates of the transducer brands or the transducer models. Annual testing is not sufficient to reduce the incidence of defective ultrasound transducers in routine clinical practice to an acceptable level, and it is strongly advisable to create a user routine that minimizes the handling of the transducers.
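
    The interval quoted above is reproducible with a normal-approximation (Wald) confidence interval for a binomial proportion, a quick consistency check of the reported numbers:

```python
# Wald 95% confidence interval for the defect proportion 81/299.
import math

n, defective = 299, 81
p = defective / n                              # 0.271
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"p = {100*p:.1f}%, 95% CI = "
      f"[{100*(p - half_width):.1f}%, {100*(p + half_width):.1f}%]")
# -> p = 27.1%, 95% CI = [22.1%, 32.1%], matching the reported range.
```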

  16. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1991-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it provides relatively coarse estimates of the frequency and its derivatives, with higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimate of the phase along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.
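
    The two-stage architecture can be caricatured in a few lines: a coarse frequency estimate from a least-squares line through the unwrapped phase, followed by a Kalman-style phase/frequency tracker seeded with that estimate. The sketch below is a generic illustration of the coarse-then-refine structure, not the patented DLS algorithm; all signal parameters and noise settings are invented.

```python
# Generic coarse-then-refine carrier estimator sketch (not the DLS/EKF of the
# patent): stage 1 fits the unwrapped phase, stage 2 tracks the residual.
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1000.0, 512                      # sample rate (Hz) and record length
t = np.arange(n) / fs
f_true = 123.4                           # unknown carrier frequency (Hz)
z = np.exp(2j * np.pi * f_true * t) \
    + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Stage 1 (coarse): least-squares line through the unwrapped sample phase.
phase = np.unwrap(np.angle(z))
f_coarse = np.polyfit(t, phase, 1)[0] / (2 * np.pi)

# Stage 2 (fine): Kalman-style tracker on the phase error left by stage 1.
x = np.array([0.0, f_coarse])            # state: [phase (rad), frequency (Hz)]
P = np.diag([1.0, 10.0])
F = np.array([[1.0, 2 * np.pi / fs], [0.0, 1.0]])
Q = np.diag([1e-6, 1e-2])
H = np.array([1.0, 0.0])
R = 0.09                                 # phase measurement noise variance
for zk in z:
    innov = np.angle(zk * np.exp(-1j * x[0]))   # wrapped phase residual
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * innov
    P = P - np.outer(K, H @ P)
    x = F @ x                            # propagate phase to the next sample
    P = F @ P @ F.T + Q
print(f"coarse: {f_coarse:.3f} Hz, refined: {x[1]:.3f} Hz, true: {f_true} Hz")
```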

  17. Design and simulation of a 800 Mbit/s data link for magnetic resonance imaging wearables.

    PubMed

    Vogt, Christian; Buthe, Lars; Petti, Luisa; Cantarella, Giuseppe; Munzenrieder, Niko; Daus, Alwin; Troster, Gerhard

    2015-08-01

    This paper presents the optimization of electronic circuitry for operation in the harsh electromagnetic (EM) environment during a magnetic resonance imaging (MRI) scan. As a demonstrator, a device small enough to be worn during the scan is optimized. Based on finite element method (FEM) simulations, the induced current densities due to magnetic field changes of 200 T s(-1) were reduced from 1 × 10(10) A m(-2) by one order of magnitude, predicting error-free operation of the 1.8V logic employed. The simulations were validated using a bit error rate test, which showed no bit errors during an MRI scan sequence. Therefore, neither the logic nor the utilized 800 Mbit s(-1) low voltage differential swing (LVDS) data link of the optimized wearable device was significantly influenced by the EM interference. Next, the influence of ferro-magnetic components on the static magnetic field, and consequently the image quality, was simulated, showing an MRI image loss of approximately 2 cm radius around a commercial integrated circuit of 1×1 cm(2). This was subsequently validated by a conventional MRI scan.

  18. Dotette: Programmable, high-precision, plug-and-play droplet pipetting.

    PubMed

    Fan, Jinzhen; Men, Yongfan; Hao Tseng, Kuo; Ding, Yi; Ding, Yunfeng; Villarreal, Fernando; Tan, Cheemeng; Li, Baoqing; Pan, Tingrui

    2018-05-01

    Manual micropipettes are the most heavily used liquid handling devices in biological and chemical laboratories; however, they suffer from low precision for volumes under 1 μl and inevitable human errors. For a manual device, the human errors introduced pose potential risks of failed experiments, inaccurate results, and financial costs. Meanwhile, low precision under 1 μl can cause severe quantification errors and high heterogeneity of outcomes, becoming a bottleneck of reaction miniaturization for quantitative research in biochemical labs. Here, we report Dotette, a programmable, plug-and-play microfluidic pipetting device based on nanoliter liquid printing. With automated control, protocols designed on computers can be directly downloaded into Dotette, enabling programmable operation processes. Utilizing continuous nanoliter droplet dispensing, the precision of the volume control has been successfully improved from the traditional 20%-50% to less than 5% in the range of 100 nl to 1000 nl. Such a highly automated, plug-and-play add-on to existing pipetting devices not only improves precise quantification in low-volume liquid handling and reduces chemical consumption but also facilitates and automates a variety of biochemical and biological operations.

  19. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1990-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it provides relatively coarse estimates of the frequency and its derivatives, with higher rms estimation errors but a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimate of the phase along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.

  20. ATC operational error analysis.

    DOT National Transportation Integrated Search

    1972-01-01

    The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...

  1. Metrological Software Test for Simulating the Method of Determining the Thermocouple Error in Situ During Operation

    NASA Astrophysics Data System (ADS)

    Chen, Jingliang; Su, Jun; Kochan, Orest; Levkiv, Mariana

    2018-04-01

    The simplified metrological software test (MST) for modeling the method of determining the thermocouple (TC) error in situ during operation is considered in the paper. The interaction between the proposed MST and a temperature measuring system is also described in order to study the error of determining the TC error in situ during operation. Modeling studies of the influence of the random error of the temperature measuring system, as well as of the interference magnitude (both common-mode and normal-mode noise), on the error of determining the TC error in situ during operation were carried out using the proposed MST. Noise and interference on the order of 5-6 μV cause an error of about 0.2-0.3°C. It is shown that high noise immunity is essential for accurate temperature measurements using TCs.
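
    The quoted noise-to-temperature figure is easy to sanity-check with a small Monte Carlo sketch. The thermoelectric sensitivity used below is an assumption (about 20 μV/°C, the value implied by 5-6 μV of noise mapping to roughly 0.2-0.3°C); real sensitivities depend on the TC type and operating temperature.

```python
# Monte Carlo sanity check: rms EMF noise divided by an assumed sensitivity
# gives the rms temperature error.
import numpy as np

rng = np.random.default_rng(42)
sensitivity_uV_per_C = 20.0            # assumed nominal TC sensitivity
noise_uV = 5.5                         # rms noise + interference at the input

emf_error_uV = rng.normal(0.0, noise_uV, size=100_000)
temp_error_C = emf_error_uV / sensitivity_uV_per_C
print(f"rms temperature error: {temp_error_C.std():.2f} C")   # ~0.27 C
```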

  2. Single Event Effect microchip testing at the Texas A&M University Cyclotron Institute

    NASA Astrophysics Data System (ADS)

    Clark, Henry; Yennello, Sherry; Texas A&M University-Cyclotron Institute Team

    2015-10-01

    A Single Event Effect (SEE) is caused by a single, energetic particle that deposits a sufficient amount of charge in a device as it traverses it, upsetting its normal operation. Soft errors are non-destructive and normally appear as transient pulses in logic or support circuitry, or as bit flips in memory cells or registers. Hard errors usually result in a high operating current, above device specifications, and must be cleared by a power reset. Burnout errors are so destructive that the device becomes operationally dead. Spacecraft designers must be concerned with the causes of SEEs from protons and heavy ions, since commercial devices, typically chosen to reduce power, weight, volume and cost while providing increased functionality, are in turn typically vulnerable to SEEs. As a result, all mission-critical devices must be tested. The TAMU K500 superconducting cyclotron has provided beams for space radiation testing since 1994. Starting at just 100 hours/year at inception, the demand has grown to 3000 hours/year. In recent years, most beam time has been for US defense system testing. Nearly 15% has been provided for foreign agencies from Europe and Asia. An overview of the testing facility and future plans will be presented.

  3. Operational quality control of daily precipitation using spatio-climatological consistency testing

    NASA Astrophysics Data System (ADS)

    Scherrer, S. C.; Croci-Maspoli, M.; van Geijtenbeek, D.; Naguel, C.; Appenzeller, C.

    2010-09-01

    Quality control (QC) of meteorological data is of utmost importance for climate-related decisions. The search for an effective automated QC of precipitation data has proven difficult, and many weather services, including MeteoSwiss, still rely mainly on manual inspection of daily precipitation. However, manpower limitations force many weather services to move towards less labour-intensive and more automated QC, with the challenge of keeping data quality high. In the last decade, several approaches have been presented to objectify daily precipitation QC. Here we present a spatio-climatological approach that will be implemented operationally at MeteoSwiss. It combines the information from the event-based spatial distribution of each day's precipitation field with the historical information on the interpolation error for different precipitation intensity intervals. Expert judgement shows that the system is able to detect potential outliers very well (hardly any missed errors) without creating too many false alarms that need human inspection. 50-80% of all flagged values have been classified as real errors by the data editor, much better than the roughly 15-20% achieved using standard spatial regression tests. Very helpful in the QC process is the automatic redistribution of accumulated multi-day sums. Manual inspection in operations can be reduced and the QC of precipitation objectified substantially.
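
    The essence of the spatio-climatological test can be sketched as follows: a daily value is flagged when it deviates from an estimate interpolated from neighbouring stations by more than the historical interpolation error for the relevant precipitation-intensity interval. All thresholds, weights, and error tables below are invented for illustration.

```python
# Schematic spatio-climatological consistency check for daily precipitation.
import numpy as np

# Historical interpolation error (mm) per intensity interval, hypothetical.
intensity_edges = np.array([0.0, 1.0, 10.0, 50.0, np.inf])      # mm/day
interp_err = np.array([0.5, 2.0, 8.0, 25.0])                    # mm

def idw_estimate(neigh_vals, neigh_dist_km):
    """Inverse-distance-weighted estimate from neighbouring stations."""
    w = 1.0 / np.maximum(neigh_dist_km, 1.0) ** 2
    return np.sum(w * neigh_vals) / np.sum(w)

def flag(value_mm, neigh_vals, neigh_dist_km, k=3.0):
    """Flag a value deviating more than k historical errors from neighbours."""
    est = idw_estimate(np.asarray(neigh_vals, float),
                       np.asarray(neigh_dist_km, float))
    i = np.searchsorted(intensity_edges, est, side="right") - 1
    return abs(value_mm - est) > k * interp_err[i], est

suspicious, est = flag(value_mm=42.0, neigh_vals=[3.1, 0.0, 5.4],
                       neigh_dist_km=[12, 20, 9])
print(f"interpolated estimate {est:.1f} mm -> flag for manual QC: {suspicious}")
```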

  4. Estimations of ABL fluxes and other turbulence parameters from Doppler lidar data

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1989-01-01

    Techniques for extracting boundary layer parameters from measurements of a short-pulse CO2 Doppler lidar are described. The measurements are those collected during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE). By continuously operating the lidar for about an hour, stable statistics of the radial velocities can be extracted. Assuming that the turbulence is horizontally homogeneous, the mean wind, its standard deviations, and the momentum fluxes were estimated. Spectral analysis of the radial velocities is also performed, from which, by examining the amplitude of the power spectrum in the inertial range, the kinetic energy dissipation was deduced. Finally, using the statistical form of the Navier-Stokes equations, the surface heat flux is derived as the residual balance between the vertical gradient of the third moment of the vertical velocity and the kinetic energy dissipation. Combining many measurements would normally reduce the error, provided the errors are unbiased and uncorrelated. The nature of some of the algorithms, however, is such that biased and correlated errors may be generated even though the raw measurements are not. Data processing procedures were developed that eliminate bias and minimize error correlation. Once bias and error correlations are accounted for, the large sample size is shown to reduce the errors substantially. The principal features of the derived turbulence statistics for the two cases studied are presented.
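
    The inertial-range step lends itself to a compact worked example: if the velocity spectrum follows Kolmogorov scaling S(k) = C ε^(2/3) k^(-5/3), the dissipation rate ε follows from the fitted spectral amplitude. The Kolmogorov constant and the synthetic spectrum below are nominal assumptions, not values from the study.

```python
# Estimate the dissipation rate from the inertial-range spectral amplitude.
import numpy as np

C_k = 0.5                                # nominal 1-D Kolmogorov constant
eps_true = 1e-3                          # m^2 s^-3, synthetic ground truth

k = np.logspace(-2, 1, 200)              # wavenumbers in the inertial range
rng = np.random.default_rng(7)
S = C_k * eps_true ** (2 / 3) * k ** (-5 / 3) \
    * np.exp(0.1 * rng.normal(size=k.size))   # noisy synthetic spectrum

# Fit log S = log a - (5/3) log k, then invert a = C_k * eps^(2/3).
log_a = np.mean(np.log(S) + (5 / 3) * np.log(k))
eps_hat = (np.exp(log_a) / C_k) ** 1.5
print(f"estimated dissipation: {eps_hat:.2e} m^2 s^-3 (true {eps_true:.0e})")
```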

  5. Operation room tool handling and miscommunication scenarios: an object-process methodology conceptual model.

    PubMed

    Wachs, Juan P; Frenkel, Boaz; Dori, Dov

    2014-11-01

    Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operation room (OR), is a major root cause of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. The facts used to construct the model were gathered from observations of various types of miscommunication in the operating room and their outcomes. The model takes advantage of the compact ontology of OPM, which comprises stateful objects - things that exist physically or informatically - and processes - things that transform objects by creating them, consuming them, or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using the OPM refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or objects that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at various levels of detail; each level is depicted in a separate diagram, and all the diagrams are "aware" of each other as part of the whole model. Providing an ontology of verbal and non-verbal modalities of communication in the OR, the resulting conceptual model is a solid basis for analyzing and understanding the sources of the large variety of errors occurring in the course of an operation, providing an opportunity to decrease the quantity and severity of mistakes related to the use and misuse of surgical instrumentation. Since the model is event driven, rather than person driven, the focus is on the factors causing the errors, rather than the specific person. This approach advocates searching for technological solutions to alleviate tool-related errors rather than finger-pointing. Concretely, the model was validated through a structured questionnaire, and it was found that surgeons agreed that the conceptual model was flexible (3.8 of 5, std=0.69), accurate, and generalizable (3.7 of 5, std=0.37 and 3.7 of 5, std=0.85, respectively). The detailed conceptual model of the tool-handling subsystem of the operation performed in an OR focuses on the details of the communication and the interactions taking place between the surgeon and the surgical technician during an operation, with the objective of pinpointing the exact circumstances in which errors can happen. Exact and concise specification of the communication events in general, and the surgical instrument requests in particular, is a prerequisite for a methodical analysis of the various modes of errors and the circumstances under which they occur. This has significant potential value both in reducing tool-handling-related errors during an operation and in providing a solid formal basis for designing a cybernetic agent that can replace a surgical technician in routine tool handling activities during an operation, freeing the technician to focus on quality assurance, monitoring and control of the cybernetic agent's activities. This is a critical step in designing the next generation of cybernetic OR assistants. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Eliminating time dispersion from seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
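
    The frequency-remapping idea behind such transforms can be sketched as follows: a second-order time FD scheme effectively carries the physical angular frequency w at the numerical frequency (2/dt)*arcsin(w*dt/2), so dispersion can be added or removed after the simulation by resampling the trace's spectrum between the two frequency axes. The sketch below illustrates this round trip on a synthetic wavelet; the paper's transforms include refinements this simplification omits.

```python
# Sketch: add/remove second-order FD time dispersion by spectral remapping.
import numpy as np

def _remap_spectrum(trace, dt, mapping):
    """Resample the trace's spectrum at remapped angular frequencies."""
    n = trace.size
    spec = np.fft.rfft(trace)
    w = 2 * np.pi * np.fft.rfftfreq(n, dt)
    wq = mapping(w, dt)                  # where to read the old spectrum
    re = np.interp(wq, w, spec.real)
    im = np.interp(wq, w, spec.imag)
    return np.fft.irfft(re + 1j * im, n)

def add_time_dispersion(trace, dt):      # forward transform (for testing)
    return _remap_spectrum(trace, dt,
                           lambda w, h: (2 / h) * np.sin(w * h / 2))

def remove_time_dispersion(trace, dt):   # inverse transform
    return _remap_spectrum(trace, dt,
                           lambda w, h: (2 / h) * np.arcsin(
                               np.clip(w * h / 2, -1.0, 1.0)))  # band-edge guard

# Round trip on a 25-Hz Ricker wavelet: adding and then removing dispersion
# restores the wavelet to within interpolation error.
dt, n = 1e-3, 2048
t = np.arange(n) * dt
arg = (np.pi * 25.0 * (t - 0.5)) ** 2
ricker = (1 - 2 * arg) * np.exp(-arg)
restored = remove_time_dispersion(add_time_dispersion(ricker, dt), dt)
print("max round-trip error:", float(np.max(np.abs(restored - ricker))))
```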

  7. How smart is your BEOL? productivity improvement through intelligent automation

    NASA Astrophysics Data System (ADS)

    Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony

    2017-07-01

    The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps, which are inspection, disposition, photomask repair and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data and make decisions based on the results. No matter how experienced operators are and how well the systems perform, there is one aspect that always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly rather harmless slip-ups to mistakes with serious and direct economic impact, including mask rejects, customer returns and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner, as unnecessary time and money are spent on processes that remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur at each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow customers to use tailored solutions. To accommodate the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to necessary measurement and production steps. At the same time, the efficiency of assets is increased by avoiding unneeded cycle time and waste of resources, since only the process steps that are crucial for a given technology are present. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation, what solutions exist, and a quantification of the benefits to a mask shop with full automation, by the use of a back end of line model.

  8. Analysis and correction of linear optics errors, and operational improvements in the Indus-2 storage ring

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2017-08-01

    Estimation and correction of the optics errors in an operational storage ring is always vital to achieve the design performance. To achieve this task, the most suitable and widely used technique, called linear optics from closed orbit (LOCO) is used in almost all storage ring based synchrotron radiation sources. In this technique, based on the response matrix fit, errors in the quadrupole strengths, beam position monitor (BPM) gains, orbit corrector calibration factors etc. can be obtained. For correction of the optics, suitable changes in the quadrupole strengths can be applied through the driving currents of the quadrupole power supplies to achieve the desired optics. The LOCO code has been used at the Indus-2 storage ring for the first time. The estimation of linear beam optics errors and their correction to minimize the distortion of linear beam dynamical parameters by using the installed number of quadrupole power supplies is discussed. After the optics correction, the performance of the storage ring is improved in terms of better beam injection/accumulation, reduced beam loss during energy ramping, and improvement in beam lifetime. It is also useful in controlling the leakage in the orbit bump required for machine studies or for commissioning of new beamlines.
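
    A toy version of the response-matrix fit at the heart of LOCO is shown below (schematic, not the LOCO code itself): quadrupole strength errors are recovered by linear least squares from the difference between a "measured" and a model orbit response matrix, using the Jacobian of the response matrix with respect to each quadrupole strength. The linear random "machine" stands in for a real lattice model, and all sizes are invented.

```python
# Toy LOCO-style fit of quadrupole strength errors from a response matrix.
import numpy as np

rng = np.random.default_rng(3)
n_bpm, n_corr, n_quad = 16, 8, 4

# Hypothetical smooth dependence of the response matrix on quad strengths.
base = rng.normal(size=(n_bpm, n_corr))
sens = rng.normal(size=(n_quad, n_bpm, n_corr))   # dR/dk for each quadrupole

def response_matrix(dk):
    return base + np.tensordot(dk, sens, axes=1)

dk_true = np.array([0.02, -0.01, 0.015, -0.005])  # unknown gradient errors
measured = response_matrix(dk_true) + 1e-4 * rng.normal(size=(n_bpm, n_corr))

# Least-squares fit of the flattened residual against the Jacobian.
J = sens.reshape(n_quad, -1).T                    # (n_bpm*n_corr, n_quad)
resid = (measured - response_matrix(np.zeros(n_quad))).ravel()
dk_fit, *_ = np.linalg.lstsq(J, resid, rcond=None)
print("fitted quad errors:", np.round(dk_fit, 4))  # ~ dk_true
# The fitted dk would then be applied (with opposite sign) through the
# quadrupole power supplies to correct the optics.
```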

  9. Cost-effectiveness of the stream-gaging program in North Carolina

    USGS Publications Warehouse

    Mason, R.R.; Jackson, N.M.

    1985-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.

  10. Motion-based nonuniformity correction in DoFP polarimeters

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Tyo, J. Scott; Ratliff, Bradley M.

    2007-09-01

    Division of Focal Plane polarimeters (DoFP) operate by integrating an array of micropolarizer elements with a focal plane array. These devices have been investigated for over a decade, and example systems have been built in all regions of the optical spectrum. DoFP devices have the distinct advantage that they are mechanically rugged, inherently temporally synchronized, and optically aligned. They have the concomitant disadvantage that each pixel in the FPA has a different instantaneous field of view (IFOV), meaning that the polarization component measurements that go into estimating the Stokes vector across the image come from four different points in the field. In addition to IFOV errors, microgrid camera systems operating in the LWIR have the additional problem that FPA nonuniformity (NU) noise can be quite severe. The spatial differencing nature of a DoFP system exacerbates the residual NU noise that is remaining after calibration, and is often the largest source of false polarization signatures away from regions where IFOV error dominates. We have recently presented a scene based algorithm that uses frame-to-frame motion to compensate for NU noise in unpolarized IR imagers. In this paper, we have extended that algorithm so that it can be used to compensate for NU noise on a DoFP polarimeter. Furthermore, the additional information provided by the scene motion can be used to significantly reduce the IFOV error. We have found a reduction of IFOV error by a factor of 10 if the scene motion is known exactly. Performance is reduced when the motion must be estimated from the scene, but still shows a marked improvement over static DoFP images.

  11. Cross-Coupled Control for All-Terrain Rovers

    PubMed Central

    Reina, Giulio

    2013-01-01

    Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors. PMID:23299625
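
    A minimal cross-coupled proportional controller conveys the idea: each wheel motor corrects its own tracking error plus a shared term that penalizes the synchronization error between wheels. Gains, time constants, and the first-order motor model below are illustrative assumptions, not values from the paper.

```python
# Cross-coupled P control of two wheel motors with mismatched dynamics.
import numpy as np

kp, kc = 2.0, 4.0            # individual and cross-coupling gains (assumed)
tau = np.array([0.2, 0.35])  # unequal motor time constants -> sync error
v = np.zeros(2)              # wheel speeds
v_ref, dt = 1.0, 0.01

for _ in range(300):
    e = v_ref - v                        # individual tracking errors
    e_sync = v[0] - v[1]                 # synchronization (slip-inducing) error
    u = kp * e + kc * np.array([-e_sync, e_sync])
    v += dt * (u - v) / tau              # first-order motor model
print(f"speeds: {v.round(3)}, residual sync error: {abs(v[0] - v[1]):.4f}")
# With kc = 0 the slower motor lags the faster one during transients; the
# coupling term trades a little individual tracking for wheel synchronization.
```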

  12. The use of snowcovered area in runoff forecasts

    NASA Technical Reports Server (NTRS)

    Rango, A.; Hannaford, J. F.; Hall, R. L.; Rosenzweig, M.; Brown, A. J.

    1977-01-01

    Long-term snowcovered area data from aircraft and satellite observations have proven useful in reducing seasonal runoff forecast error on the Kern river watershed. Similar use of snowcovered area on the Kings river watershed produced results that were about equivalent to methods based solely on conventional data. Snowcovered area will be most effective in reducing forecast procedural error on watersheds with: (1) a substantial amount of area within a limited elevation range; (2) an erratic precipitation and/or snowpack accumulation pattern not strongly related to elevation; and (3) poor coverage by precipitation stations or snow courses restricting adequate indexing of water supply conditions. When satellite data acquisition and delivery problems are resolved, the derived snowcover information should provide a means for enhancing operational streamflow forecasts for areas that depend primarily on snowmelt for their water supply.

  13. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and, where applicable, corresponding target registration errors were computed. By applying error propagation calculations to these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
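
    A back-of-the-envelope version of such a propagation analysis: independent error sources along the transformation chain add in quadrature, with the tracker's rotational error amplified by the lever arm between tracker and probe, as the abstract notes. All component magnitudes below are invented for illustration.

```python
# Quadrature combination of independent alignment error sources.
import math

calibration_mm = 0.6          # probe/US calibration error (assumed)
registration_mm = 0.8         # image-to-image registration error (assumed)
tracking_trans_mm = 0.4       # tracker translational jitter (assumed)
tracking_rot_deg = 0.15       # tracker rotational error (assumed)
lever_arm_mm = 250.0          # tracker-to-probe lever arm (assumed)

# Small-angle rotation maps to a translation of the target point.
rot_contrib_mm = math.radians(tracking_rot_deg) * lever_arm_mm   # ~0.65 mm
total_mm = math.sqrt(calibration_mm ** 2 + registration_mm ** 2 +
                     tracking_trans_mm ** 2 + rot_contrib_mm ** 2)
print(f"rotation contributes {rot_contrib_mm:.2f} mm; "
      f"combined error ~ {total_mm:.2f} mm")
```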

  14. Sustainability of protocolized handover of pediatric cardiac surgery patients to the intensive care unit.

    PubMed

    Chenault, Kristin; Moga, Michael-Alice; Shin, Minah; Petersen, Emily; Backer, Carl; De Oliveira, Gildasio S; Suresh, Santhanam

    2016-05-01

    Transfer of patient care among clinicians (handovers) is a common source of medical errors. While the immediate efficacy of handover-improvement initiatives is well documented, the sustainability of practice changes that result in better processes of care is largely understudied. The objective of the current investigation was to evaluate the sustainability of a protocolized handover process for pediatric patients transferred from the operating room to the intensive care unit after cardiac surgery. This was a prospective study with direct observation assessment of handover performance conducted in the cardiac ICU (CICU) of a free-standing, tertiary care children's hospital in the United States. Patient transitions from the operating room to the CICU, including the verbal handoff, were directly observed by a single independent observer in all phases of the study. A checklist of key elements identified errors classified as: (1) technical, (2) information omissions, and (3) realized errors. The total number of errors was compared across the different times of the study (preintervention, postintervention, and the current sustainability phase). A total of 119 handovers were studied: 41 preintervention, 38 postintervention, and 40 in the current sustainability phase. The median [interquartile range (IQR)] number of technical errors was significantly reduced in the sustainability phase compared to the preintervention and postintervention phases, 2 (1-3), 6 (5-7), and 2.5 (2-4), respectively, P = 0.0001. Similarly, the median (IQR) number of verbal information omissions was also significantly reduced in the sustainability phase compared to the preintervention and postintervention phases, 1 (1-1), 4 (3-5), and 2 (1-3), respectively. We demonstrate sustainability of an improved handover process using a checklist in children being transferred to the intensive care unit after cardiac surgery. Standardized handover processes can be a sustainable strategy to improve patient safety after pediatric cardiac surgery. © 2016 John Wiley & Sons Ltd.

  15. Interface evaluation for soft robotic manipulators

    NASA Astrophysics Data System (ADS)

    Moore, Kristin S.; Rodes, William M.; Csencsits, Matthew A.; Kwoka, Martha J.; Gomer, Joshua A.; Pagano, Christopher C.

    2006-05-01

    The results of two usability experiments evaluating an interface for the operation of OctArm, a biologically inspired robotic arm modeled after an octopus tentacle, are reported. Because of the many degrees of freedom (DOF) the operator must control, such 'continuum' robotic limbs pose unique challenges for human operators: their movements do not map intuitively onto conventional controls. Two modes have been developed to control the arm and reduce the DOF under the explicit direction of the operator. In coupled velocity (CV) mode, a joystick controls changes in arm curvature. In end-effector (EE) mode, a joystick controls the arm by moving the position of an endpoint along a straight line. In Experiment 1, participants used the two modes to grasp objects placed at different locations in a virtual reality modeling language (VRML) environment. Objective measures of performance and subjective preferences were recorded. Results revealed lower grasp times and a subjective preference for the CV mode. Recommendations for improving the interface included providing additional feedback and implementing an error recovery function. In Experiment 2, only the CV mode was tested, with improved training of participants and several changes to the interface. The error recovery function was implemented, allowing participants to reverse through previously attained positions. The mean time to complete the trials in the second usability test was reduced by more than 4 minutes compared with the first usability test, confirming that the interface changes improved performance. The results of these tests will be incorporated into future versions of the arm and will inform future usability tests.

  16. Patterns of technical error among surgical malpractice claims: an analysis of strategies to prevent injury to surgical patients.

    PubMed

    Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A

    2007-11-01

    To identify the most prevalent patterns of technical errors in surgery, and to evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument developed through qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%) and occurred in routine, rather than index, operations (84%). Patient-related complexities, including emergencies, difficult or unexpected anatomy, and previous surgery, contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.

  17. Ionospheric refraction effects on TOPEX orbit determination accuracy using the Tracking and Data Relay Satellite System (TDRSS)

    NASA Technical Reports Server (NTRS)

    Radomski, M. S.; Doll, C. E.

    1991-01-01

    This investigation concerns the effects of ionospheric refraction error in Tracking and Data Relay Satellite System (TDRSS) tracking measurements on operational orbit determination for the Ocean Topography Experiment (TOPEX) spacecraft. Although tracking error from this source is mitigated by the high frequencies (K-band) used for the space-to-ground links and by the high altitudes of the space-to-space links, these effects are of concern for the relatively high-altitude (1334 kilometers) TOPEX mission. This concern is due to the accuracy required for operational orbit determination by the Goddard Space Flight Center (GSFC) and to the expectation that solar activity would still be relatively high at the TOPEX launch in mid-1992. The ionospheric refraction error on S-band space-to-space links was calculated by a prototype observation-correction algorithm using the Bent model of ionospheric electron densities, implemented in the context of the Goddard Trajectory Determination System (GTDS). Orbit determination error was evaluated by comparing parallel TOPEX orbit solutions, applying and omitting the correction, using the same simulated TDRSS tracking observations. The tracking scenarios simulated those planned for the observation phase of the TOPEX mission, with a preponderance of one-way return-link Doppler measurements. The results of the analysis showed most TOPEX operational accuracy requirements to be little affected by space-to-space ionospheric error. The determination of along-track velocity changes after ground-track adjustment maneuvers, however, is significantly affected when compared with the stringent 0.1-millimeter-per-second accuracy requirement, assuming uncoupled premaneuver and postmaneuver orbit determination. Space-to-space ionospheric refraction on the 24-hour postmaneuver arc alone causes 0.2-millimeter-per-second errors in along-track delta-v determination using uncoupled solutions. Coupling the premaneuver and postmaneuver solutions, however, appears likely to reduce this figure substantially. Plans and recommendations for responding to these findings are presented.

  18. Improvement in Patient Transfer Process From the Operating Room to the PICU Using a Lean and Six Sigma-Based Quality Improvement Project.

    PubMed

    Gleich, Stephen J; Nemergut, Michael E; Stans, Anthony A; Haile, Dawit T; Feigal, Scott A; Heinrich, Angela L; Bosley, Christopher L; Tripathi, Sandeep

    2016-08-01

    Ineffective and inefficient patient transfer processes can increase the chance of medical errors. Improvements in such processes are high-priority local institutional and national patient safety goals. At our institution, nonintubated postoperative pediatric patients are first admitted to the postanesthesia care unit before transfer to the PICU. This quality improvement project was designed to improve the patient transfer process from the operating room (OR) to the PICU. After direct observation of the baseline process, we introduced a structured, direct OR-PICU transfer process for orthopedic spinal fusion patients. We performed value stream mapping of the process to determine error-prone and inefficient areas. We evaluated the primary outcome measures of handoff error reduction and overall patient transfer time. Staff satisfaction was evaluated as a counterbalance measure. With the introduction of the new direct OR-PICU patient transfer process, the handoff communication error rate improved from 1.9 to 0.3 errors per patient handoff (P = .002). Inefficiency (patient wait time and non-value-creating activity) was reduced from 90 to 32 minutes. Handoff content was improved with fewer information omissions (P < .001). Staff satisfaction significantly improved among nearly all PICU providers. By using quality improvement methodology to design and implement a new direct OR-PICU transfer process with a structured multidisciplinary verbal handoff, we achieved sustained improvements in patient safety and efficiency. Handoff communication was enhanced, with fewer errors and content omissions. The new process improved efficiency, with high staff satisfaction. Copyright © 2016 by the American Academy of Pediatrics.

  19. Recommendations for reducing ambiguity in written procedures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.

    Previous studies in the nuclear weapons complex have shown that ambiguous work instructions (WIs) and operating procedures (OPs) can lead to human error, which is a major cause for concern. This report outlines some of the sources of ambiguity in written English and describes three recommendations for reducing ambiguity in WIs and OPs. The recommendations are based on commonly used research techniques in the fields of linguistics and cognitive psychology. The first recommendation is to gather empirical data that can be used to improve the recommended word lists that are provided to technical writers. The second recommendation is to have a review in which new WIs and OPs are checked for ambiguity and clarity. The third recommendation is to use self-paced reading time studies to identify any remaining ambiguities before the new WIs and OPs are put into use. If these three steps are followed for new WIs and OPs, the likelihood of human errors related to ambiguity could be greatly reduced.

  20. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
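
    The adaptation step lends itself to a compact sketch: because rate of climb scales roughly with specific excess power divided by weight, an observed/predicted climb-rate mismatch can be mapped to a multiplicative correction of the modeled weight. This is an illustrative reconstruction under that assumption, with invented gains and numbers, not the paper's exact algorithm.

    ```python
    def adapt_weight(weight_kg, predicted_roc_fpm, observed_roc_fpm, gain=0.3):
        """One adaptation step: slower-than-predicted climb implies the modeled
        weight is too low, so scale it up (and vice versa), low-pass filtered
        by `gain` to damp noise in track-derived climb rates."""
        if predicted_roc_fpm <= 0 or observed_roc_fpm <= 0:
            return weight_kg                 # no usable climb information
        ratio = predicted_roc_fpm / observed_roc_fpm
        return weight_kg * (1.0 + gain * (ratio - 1.0))

    # Aircraft climbs slower than the 2500 ft/min prediction, so the modeled
    # weight is progressively increased as new track measurements arrive.
    w = 70000.0
    for observed in (2200.0, 2150.0, 2100.0):
        w = adapt_weight(w, predicted_roc_fpm=2500.0, observed_roc_fpm=observed)
        print(round(w))
    ```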

  1. Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance

    NASA Astrophysics Data System (ADS)

    Speck, Richard P.; Herz, Norman E., Jr.

    2000-06-01

    Automatic test and calibration has become a valuable feature in many consumer products, ranging from antilock braking systems to auto-tuning TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in HMDs, thereby making binocular displays a practical reality. A suitcase-sized, field-portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage, and component replacement. Planning for this would yield large savings through relaxed component specifications and reduced logistics costs, yet the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual 'beyond visual range' operations. Some versions of the ATE described are in production, and examples of high-resolution optical test data are discussed.

  2. Benefit of Complete State Monitoring For GPS Realtime Applications With Geo++ Gnsmart

    NASA Astrophysics Data System (ADS)

    Wübbena, G.; Schmitz, M.; Bagge, A.

    Today, the worldwide demand for precise positioning at the cm-level in realtime is growing. An indication of this is the number of operational RTK network installations, which use permanent reference station networks to derive corrections for distance-dependent GPS errors and to supply corrections to RTK users in realtime. Generally, the inter-station distances in RTK networks are selected at several tens of km, and operational installations cover areas of up to 50,000 square kilometers. However, the separation of the permanent reference stations can be increased to several hundred km, provided that all error components are correctly modeled. Such networks can be termed sparse RTK networks, which cover larger areas with a reduced number of stations. The undifferenced GPS observable is best suited for this task, estimating the complete state of a permanent GPS network in a dynamic recursive Kalman filter. A rigorous adjustment of all simultaneous reference station data is required. The sparse network design essentially supports the state estimation through its large spatial extension. The benefit of the approach and its state modeling of all GPS error components is successful ambiguity resolution in realtime over long distances. The above concepts are implemented in the operational GNSMART (GNSS State Monitoring and Representation Technique) software of Geo++. It performs state monitoring of all error components at the mm-level, because for RTK networks this accuracy is required to sufficiently represent the distance-dependent errors for kinematic applications. One key issue of the modeling is the estimation of clocks and hardware delays in the undifferenced approach. This prerequisite subsequently allows for the precise separation and modeling of all other error components. Generally, most of the estimated parameters are considered nuisance parameters with respect to pure positioning tasks. Because the complete state vector of GPS errors is available in a GPS realtime network, additional information besides position can be derived, e.g., regional precise satellite clocks, orbits, total ionospheric electron content, tropospheric water vapor distribution, and dynamic reference station movements. The models of GNSMART are designed to work with regional, continental, or even global data. Results from GNSMART realtime networks with inter-station distances of several hundred km are presented to demonstrate the benefits of the operationally implemented concepts.
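
    The recursive estimator at the core of such state monitoring can be sketched as a generic discrete Kalman filter predict/update cycle. The state vector would hold the quantities the abstract lists (clocks, hardware delays, ionosphere, troposphere, station coordinates); the toy matrices below are placeholder assumptions, not the GNSMART implementation.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, Q, H, R):
        """One predict/update cycle of a discrete Kalman filter: carry the
        network state forward in time, then blend in new undifferenced
        observations z through the design matrix H."""
        x_pred = F @ x                             # propagate state
        P_pred = F @ P @ F.T + Q                   # propagate uncertainty
        S = H @ P_pred @ H.T + R                   # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)      # measurement update
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy two-parameter random-walk state observed through a 1x2 design matrix.
    x, P = np.zeros(2), np.eye(2)
    F, Q = np.eye(2), 1e-4 * np.eye(2)
    H, R = np.array([[1.0, 0.5]]), np.array([[0.01]])
    for z in (0.9, 1.1, 1.0):
        x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
    print(np.round(x, 2))
    ```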

  3. [Risk and risk management in aviation].

    PubMed

    Müller, Manfred

    2004-10-01

    RISK MANAGEMENT: The large proportion of human error in aviation accidents suggested a solution that at first sight seemed brilliant: replace the fallible human being with an 'infallible' digitally operating computer. However, even after the introduction of so-called HITEC (high-tech) airplanes, human error still accounts for 75% of all accidents. Thus, if the computer is ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always highly error-prone, support and control have to be provided by a second person. Two minds working independently form a safety network that cushions human errors more effectively. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, the open discussion of errors that have occurred must not be endangered by the threat of punishment. It has been shown in the past that progress is primarily achieved by investigating and following up on mistakes, failures, and catastrophes shortly after they happen. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: by far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: 1. A complication develops. 2. In this situation of increased stress, a human error occurs. 3. The negative effects of the error cannot be corrected or eased because of deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate acts as a 'turbocharger' when a human error occurs. It should be noted that a negative social climate is not the same as an open dispute. In many cases the working climate is burdened without the responsible person even noticing it: a first negative impression, too much or too little respect, contempt, misunderstandings, or failure to voice unclear concerns can considerably reduce the efficiency of a team.

  4. Sensitivity of chemistry-transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.

    2016-05-01

    Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
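
    The recommended configuration, a chemical operator duration twice the transport operator duration, can be illustrated with a toy split-operator loop. The operators below (periodic upwind advection, first-order chemical loss) are trivial stand-ins for the model's own operators, not GEOS-Chem code.

    ```python
    import numpy as np

    def do_transport(c, dt):
        """Toy transport operator: periodic upwind advection (fixed Courant number)."""
        courant = 0.2
        return c - courant * (c - np.roll(c, 1))

    def do_chemistry(c, dt):
        """Toy chemical operator: first-order loss with a 2-hour lifetime."""
        return c * np.exp(-dt / 7200.0)

    def advance(c, t_end, dt_transport=600.0, chem_every=2):
        """Split-operator integration: transport every dt_transport seconds and
        chemistry every `chem_every` transport steps, i.e. a chemical operator
        duration twice the transport duration, as the paper recommends."""
        t, step = 0.0, 0
        while t < t_end:
            c = do_transport(c, dt_transport)
            step, t = step + 1, t + dt_transport
            if step % chem_every == 0:
                c = do_chemistry(c, chem_every * dt_transport)
        return c

    c = advance(np.ones(36), t_end=3600.0)   # one hour with 10-min transport steps
    print(c.round(3))                        # ~exp(-0.5) after 1 h of chemical loss
    ```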

  5. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  6. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  7. Digital signal processor and processing method for GPS receivers

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1989-01-01

    A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consists of an all-digital, minimum bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.

  8. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Flight tests with a data link used for air traffic control information exchange

    NASA Technical Reports Server (NTRS)

    Knox, Charles E.; Scanlon, Charles H.

    1991-01-01

    Previous studies showed that air traffic control (ATC) message exchange with a data link offers the potential benefits of increased airspace system safety and efficiency. To realize these benefits, data link can be used to reduce communication errors and relieve overloaded ATC voice radio frequencies, which hamper efficient message exchange during peak traffic periods. Flight tests with commercial airline pilots as test subjects were conducted in the NASA Transport Systems Research Vehicle Boeing 737 airplane to contrast flight operations that used current voice communications with flight operations that used data link to transmit both strategic and tactical ATC clearances during a typical commercial airline flight from takeoff to landing. The results of these tests, in which data link was used as the primary means of communication with ATC, showed flight crew acceptance, a perceived reduction in crew workload, and a reduction in crew communication errors.

  10. Effects of divided attention and operating room noise on perception of pulse oximeter pitch changes: a laboratory study.

    PubMed

    Stevenson, Ryan A; Schlesinger, Joseph J; Wallace, Mark T

    2013-02-01

    Anesthesiology requires performing visually oriented procedures while monitoring auditory information about a patient's vital signs. A concern in operating room environments is the amount of competing information and the effects that divided attention has on patient monitoring, such as detecting auditory changes in arterial oxygen saturation via pulse oximetry. The authors measured the impact of visual attentional load and auditory background noise on the ability of anesthesia residents to monitor the pulse oximeter auditory display in a laboratory setting. Accuracies and response times were recorded, reflecting anesthesiologists' abilities to detect changes in oxygen saturation across three levels of visual attention, in quiet and with noise. Results show that visual attentional load substantially affects the ability to detect changes in oxygen saturation conveyed by auditory cues signaling 99 and 98% saturation. These effects are compounded by auditory noise, producing up to a 17% decline in performance. These deficits are seen both in the ability to accurately detect a change in oxygen saturation and in the speed of response. Most anesthesia accidents are initiated by small errors that cascade into serious events. Lack of monitor vigilance and inattention are two of the more commonly cited factors. Reducing such errors is thus a priority for improving patient safety. Specifically, efforts to reduce distractors and decrease background noise should be considered during induction and emergence, periods of especially high risk, when anesthesiologists have to attend to many tasks and are thus susceptible to error.

  11. Back-Propagation Operation for Analog Neural Network Hardware with Synapse Components Having Hysteresis Characteristics

    PubMed Central

    Ueda, Michihito; Nishitani, Yu; Kaneko, Yukihiro; Omote, Atsushi

    2014-01-01

    To realize analog artificial neural network hardware, the circuit element for the synapse function is important because the number of synapse elements is much larger than that of neuron elements. One candidate for this synapse element is a ferroelectric memristor. This device functions as a voltage-controllable variable resistor, which can be applied as a synapse weight. However, its conductance shows hysteresis characteristics and dispersion with respect to the input voltage. Therefore, the conductance values vary according to the history of the height and the width of the applied pulse voltage. Due to the difficulty of controlling the conductance accurately, it is not easy to apply the back-propagation learning algorithm to neural network hardware having memristor synapses. To solve this problem, we proposed and simulated the following learning operation procedure. Employing a weight perturbation technique, we derived the error change. When the error decreased, the next pulse voltage was updated according to the back-propagation learning algorithm. If the error increased, the amplitude of the next voltage pulse was set in such a way as to produce a similar memristor conductance but in the opposite voltage-scanning direction. By this operation, we could eliminate the hysteresis, and we confirmed that the simulation of the learning operation converged. We also incorporated conductance dispersion numerically in the simulation. We examined the probability that the error decreased to a designated value within a predetermined number of loops. The ferroelectric has the characteristic that the magnitude of polarization does not decrease when voltages of the same polarity are applied. This characteristic greatly improved the probability, even for small learning rates, provided the magnitude of the dispersion is adequate. Because the dispersion of analog circuit elements is inevitable, this learning operation procedure is useful for analog neural network hardware. PMID:25393715
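
    The heart of the procedure is a weight-perturbation update, which needs only measured error changes rather than exact analog conductances. The sketch below shows the generic simultaneous-perturbation rule on an idealized, hysteresis-free synapse array with a toy objective; the paper's reverse voltage-scan handling of hysteresis is deliberately omitted.

    ```python
    import numpy as np

    def weight_perturbation_step(w, loss, lr=0.05, delta=1e-3,
                                 rng=np.random.default_rng(0)):
        """One weight-perturbation update: apply a small random perturbation,
        measure the resulting error change, form a per-weight finite-difference
        gradient estimate, and step downhill."""
        d = rng.choice([-delta, delta], size=w.shape)  # random perturbation pattern
        e0 = loss(w)
        e1 = loss(w + d)
        grad_est = (e1 - e0) / d                       # unbiased in expectation
        return w - lr * grad_est

    # Toy quadratic objective with minimum at w = [1, -2].
    loss = lambda w: np.sum((w - np.array([1.0, -2.0])) ** 2)
    w = np.zeros(2)
    for _ in range(200):
        w = weight_perturbation_step(w, loss)
    print(np.round(w, 2))    # converges near [1. -2.]
    ```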

  12. Flight deck crew coordination indices of workload and situation awareness in terminal operations

    NASA Astrophysics Data System (ADS)

    Ellis, Kyle Kent Edward

    Crew coordination in the context of aviation is a specifically choreographed set of tasks performed by each pilot, defined for each phase of flight. Based on the constructs of effective Crew Resource Management (CRM) and standard operating procedures (SOPs) for each phase of flight, a shared understanding of crew workload and task responsibility is considered representative of well-coordinated crews. Nominal behavior is therefore defined by SOPs and CRM theory and is detectable through pilot eye-scan. This research investigates the relationship between the eye-scan exhibited by each pilot and the level of coordination between crewmembers. Crew coordination was evaluated based on each pilot's understanding of the other crewmember's workload: it was measured as the summed absolute difference between each pilot's estimate of the other crewmember's workload and that crewmember's self-reported workload, yielding a crew coordination index. The crew coordination index rates crew coordination on a scale ranging across Excellent, Good, Fair, and Poor. Eye-scan behavior metrics were found to reliably identify a reduction in crew coordination. Additionally, crew coordination was successfully characterized from eye-scan behavior data using machine learning classification methods. Identifying eye-scan behaviors on the flight deck indicative of reduced crew coordination can be used to inform training programs and to design enhanced avionics that improve the overall coordination between the crewmembers and the flight deck interface. Additionally, characterization of crew coordination can be used to develop methods to increase shared situation awareness and crew coordination to reduce operational and flight technical errors. Ultimately, the ability to reduce operational and flight technical errors made by pilot crews improves the safety of aviation.
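
    From the description above, the index itself is simple to state in code; the 1-7 workload scale and example values below are illustrative assumptions (the band cutoffs for Excellent through Poor are not given in the abstract).

    ```python
    def crew_coordination_index(capt_self, capt_est_of_fo, fo_self, fo_est_of_capt):
        """Summed absolute difference between each pilot's estimate of the other
        crewmember's workload and that crewmember's self-reported workload;
        lower values indicate better-coordinated crews."""
        return abs(capt_est_of_fo - fo_self) + abs(fo_est_of_capt - capt_self)

    # Workload on a hypothetical 1-7 scale: a well-coordinated crew scores 0.
    print(crew_coordination_index(capt_self=5, capt_est_of_fo=4,
                                  fo_self=4, fo_est_of_capt=5))   # -> 0
    print(crew_coordination_index(capt_self=6, capt_est_of_fo=2,
                                  fo_self=5, fo_est_of_capt=3))   # -> 6
    ```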

  13. Errors in fluid balance with pump control of continuous hemodialysis.

    PubMed

    Roberts, M; Winney, R J

    1992-02-01

    The use of pumps both proximal and distal to the dialyzer during continuous hemodialysis provides control of dialysate and ultrafiltration flow rates, thereby reducing nursing time. However, we had noted unexpected severe extracellular fluid depletion suggesting that errors in pump delivery may be responsible. We measured in vitro the operation of various pumps under conditions similar to continuous hemodialysis. Fluid delivery of peristaltic and roller pumps varied with how the tubing set was inserted in the pump. Piston and peristaltic pumps with dedicated pump segments were more accurate. Pumps should be calibrated and tested under conditions simulating continuous hemodialysis prior to in vivo use.

  14. Acoustic sensor for real-time control for the inductive heating process

    DOEpatents

    Kelley, John Bruce; Lu, Wei-Yang; Zutavern, Fred J.

    2003-09-30

    Disclosed is a system and method for providing closed-loop control of the heating of a workpiece by an induction heating machine, including generating an acoustic wave in the workpiece with a pulsed laser; optically measuring displacements of the surface of the workpiece in response to the acoustic wave; calculating a sub-surface material property by analyzing the measured surface displacements; creating an error signal by comparing an attribute of the calculated sub-surface material properties with a desired attribute; and reducing the error signal below an acceptable limit by adjusting, in real-time, as often as necessary, the operation of the inductive heating machine.
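
    A minimal sketch of the claimed closed loop follows, with a toy first-order plant standing in for the pulsed-laser excitation, interferometric displacement measurement, and inverse analysis; the gain, units, and plant response are invented for illustration.

    ```python
    def closed_loop_heating(target_depth_mm=1.0, tol_mm=0.02, k_p=0.5):
        """Feedback loop mirroring the patent's scheme: each cycle, 'measure'
        the sub-surface property (here a toy plant model), form an error signal
        against the desired attribute, and adjust induction power in real time
        until the error falls below an acceptable limit."""
        power, depth = 2.0, 0.0
        for cycle in range(100):
            depth += 0.3 * (0.2 * power - depth)   # toy plant response to heating
            error = target_depth_mm - depth        # error signal vs. desired attribute
            if abs(error) < tol_mm:
                return cycle, round(depth, 3), round(power, 2)
            power += k_p * error                   # real-time power adjustment
        return cycle, round(depth, 3), round(power, 2)

    print(closed_loop_heating())   # converges within a few dozen cycles
    ```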

  15. The next organizational challenge: finding and addressing diagnostic error.

    PubMed

    Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H

    2014-03-01

    Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches, such as Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente's electronic health record-based reports that detect process breakdowns in the follow-up of abnormal findings, are described in case studies. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.

  16. Human Factors Evaluation of Conflict Detection Tool for Terminal Area

    NASA Technical Reports Server (NTRS)

    Verma, Savita Arora; Tang, Huabin; Ballinger, Deborah; Chinn, Fay Cherie; Kozon, Thomas E.

    2013-01-01

    A conflict detection and resolution tool, Terminal-area Tactical Separation-Assured Flight Environment (T-TSAFE), is being developed to improve the timeliness and accuracy of alerts and reduce the false alert rate observed with the currently deployed technology. The legacy system in use today, Conflict Alert, relies primarily on a dead reckoning algorithm, whereas T-TSAFE uses intent information to augment dead reckoning. In previous experiments, T-TSAFE was found to reduce the rate of false alerts and to increase the time between the alert to the controller and a loss of separation, relative to the legacy system. In the present study, T-TSAFE was tested under two meteorological conditions: (1) all aircraft operated under instrument flight rules, and (2) some aircraft operated under mixed operating conditions. The tool was used to visually alert controllers to predicted losses of separation throughout the terminal airspace and to show compression errors on final approach. The performance of T-TSAFE on final approach was compared with Automated Terminal Proximity Alert (ATPA), a tool recently deployed by the FAA. Results show that controllers did not report differences in workload or situational awareness between the T-TSAFE and ATPA cones but did prefer T-TSAFE features over ATPA functionality. T-TSAFE shows alerts in the data blocks and compression errors via cones on final approach, implementing all tactical conflict detection and alerting in TRACON airspace with a single tool.

  17. EPDM Based Double Slope Triangular Enclosure Solar Collector: A Novel Approach

    PubMed Central

    Qureshi, Shafiq R.; Khan, Waqar A.

    2014-01-01

    Solar heating is one of the important applications of solar energy in both the domestic and industrial sectors. Evacuated tube heaters are a commonly used technology for domestic water heating. However, the increasing cost of copper and nickel has resulted in a high initial cost for these types of heaters. Utilizing solar energy more economically for domestic use requires a new concept with low initial and operating costs together with ease of maintenance. Because domestic heating requires only nominal temperatures, in the range of 60–90°C, replacing nickel-coated copper pipes with a cheap alternative can drastically reduce the cost of a solar heater. We have proposed a new concept that utilizes a double-slope triangular chamber with EPDM-based synthetic rubber pipes, which substantially reduces initial and operating costs. A detailed analytical study was carried out to design the novel solar heater. On the basis of the analytical design, a prototype was manufactured. Results obtained from the experiments were found to be in good agreement with the analytical study. A maximum error of 10% was recorded at noon; the error was less than 5% in the early and late hours. PMID:24688407

  18. EPDM based double slope triangular enclosure solar collector: a novel approach.

    PubMed

    Qureshi, Shafiq R; Khan, Waqar A; Sarwar, Waqas

    2014-01-01

    Solar heating is one of the important applications of solar energy in both the domestic and industrial sectors. Evacuated tube heaters are a commonly used technology for domestic water heating. However, the increasing cost of copper and nickel has resulted in a high initial cost for these types of heaters. Utilizing solar energy more economically for domestic use requires a new concept with low initial and operating costs together with ease of maintenance. Because domestic heating requires only nominal temperatures, in the range of 60-90 °C, replacing nickel-coated copper pipes with a cheap alternative can drastically reduce the cost of a solar heater. We have proposed a new concept that utilizes a double-slope triangular chamber with EPDM-based synthetic rubber pipes, which substantially reduces initial and operating costs. A detailed analytical study was carried out to design the novel solar heater. On the basis of the analytical design, a prototype was manufactured. Results obtained from the experiments were found to be in good agreement with the analytical study. A maximum error of 10% was recorded at noon; the error was less than 5% in the early and late hours.

  19. A combined spectral and object-based approach to transparent cloud removal in an operational setting for Landsat ETM+

    NASA Astrophysics Data System (ADS)

    Watmough, Gary R.; Atkinson, Peter M.; Hutton, Craig W.

    2011-04-01

    The automated cloud cover assessment (ACCA) algorithm has provided automated estimates of cloud cover for the Landsat ETM+ mission since 2001. However, due to the lack of a band around 1.375 μm, cloud edges and transparent clouds such as cirrus cannot be detected. Use of Landsat ETM+ imagery for terrestrial land analysis is further hampered by the relatively long revisit period of its nadir-only viewing sensor. In this study, the ACCA threshold parameters were altered to minimise omission errors in the cloud masks. Object-based analysis was then used to reduce the commission errors introduced by the extended cloud filters. The method resulted in the removal of optically thin cirrus cloud and cloud edges, which are often missed by other methods in sub-tropical areas. Although not fully automated, the principles of the method developed here provide an opportunity for using otherwise sub-optimal or completely unusable Landsat ETM+ imagery for operational applications. Where specific images are required for particular research goals, the method can be used to remove opaque and transparent cloud, helping to reduce bias in subsequent land cover classifications.
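
    One plausible realization of the object-based step is to label connected regions in the aggressive (omission-minimizing) spectral mask and discard implausibly small objects; the size threshold below is an illustrative assumption rather than the paper's rule set.

    ```python
    import numpy as np
    from scipy import ndimage

    def object_filter(cloud_mask, min_pixels=50):
        """Object-based cleanup of a low-omission spectral cloud mask: label
        connected cloudy regions and keep only objects large enough to be
        plausible clouds, reducing commission error from the extended filters."""
        labels, n = ndimage.label(cloud_mask)
        sizes = ndimage.sum(cloud_mask, labels, index=np.arange(1, n + 1))
        keep_labels = np.arange(1, n + 1)[sizes >= min_pixels]
        return np.isin(labels, keep_labels)

    # Example: a 2-pixel speck is removed, a 9-pixel object is kept.
    mask = np.zeros((8, 8), dtype=bool)
    mask[0:3, 0:3] = True          # plausible cloud object (9 px)
    mask[6, 6:8] = True            # speck (2 px)
    print(object_filter(mask, min_pixels=5).sum())   # -> 9
    ```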

  20. A virtual reality-based method of decreasing transmission time of visual feedback for a tele-operative robotic catheter operating system.

    PubMed

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-03-01

    An Internet-based tele-operative robotic catheter operating system was designed for vascular interventional surgery, to afford unskilled surgeons the opportunity to learn basic catheter/guidewire skills, while allowing experienced physicians to perform surgeries cooperatively. Remote surgical procedures, limited by variable transmission times for visual feedback, have been associated with deterioration in operability and vascular wall damage during surgery. At the patient's location, the catheter shape/position was detected in real time and converted into three-dimensional coordinates in a world coordinate system. At the operation location, the catheter shape was reconstructed in a virtual-reality environment, based on the coordinates received. The data volume reduction significantly reduced visual feedback transmission times. Remote transmission experiments, conducted over inter-country distances, demonstrated the improved performance of the proposed prototype. The maximum error for the catheter shape reconstruction was 0.93 mm and the transmission time was reduced considerably. The results were positive and demonstrate the feasibility of remote surgery using conventional network infrastructures. Copyright © 2015 John Wiley & Sons, Ltd.

  1. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21%. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to referring endoscopist demonstrated statistically significant lower intraoperative localization error rate (1.2 vs. 9.0%, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0%, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8%, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95% CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95% CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.

  2. Innovations in Medication Preparation Safety and Wastage Reduction: Use of a Workflow Management System in a Pediatric Hospital.

    PubMed

    Davis, Stephen Jerome; Hurtado, Josephine; Nguyen, Rosemary; Huynh, Tran; Lindon, Ivan; Hudnall, Cedric; Bork, Sara

    2017-01-01

    Background: USP <797> regulatory requirements have mandated that pharmacies improve aseptic techniques and cleanliness of the medication preparation areas. In addition, the Institute for Safe Medication Practices (ISMP) recommends that technology and automation be used as much as possible for preparing and verifying compounded sterile products. Objective: To determine the benefits associated with the implementation of the workflow management system, such as reducing medication preparation and delivery errors, reducing quantity and frequency of medication errors, avoiding costs, and enhancing the organization's decision to move toward positive patient identification (PPID). Methods: At Texas Children's Hospital, data were collected and analyzed from January 2014 through August 2014 in the pharmacy areas in which the workflow management system would be implemented. Data were excluded for September 2014 during the workflow management system oral liquid implementation phase. Data were collected and analyzed from October 2014 through June 2015 to determine whether the implementation of the workflow management system reduced the quantity and frequency of reported medication errors. Data collected and analyzed during the study period included the quantity of doses prepared, number of incorrect medication scans, number of doses discontinued from the workflow management system queue, and the number of doses rejected. Data were collected and analyzed to identify patterns of incorrect medication scans, to determine reasons for rejected medication doses, and to determine the reduction in wasted medications. Results: During the 17-month study period, the pharmacy department dispensed 1,506,220 oral liquid and injectable medication doses. From October 2014 through June 2015, the pharmacy department dispensed 826,220 medication doses that were prepared and checked via the workflow management system. Of those 826,220 medication doses, there were 16 reported incorrect volume errors. The error rate after the implementation of the workflow management system averaged 8.4%, which was a 1.6% reduction. After the implementation of the workflow management system, the average number of reported oral liquid medication and injectable medication errors decreased to 0.4 and 0.2 times per week, respectively. Conclusion: The organization was able to achieve its purpose and goal of improving the provision of quality pharmacy care through optimal medication use and safety by reducing medication preparation errors. Error rates decreased and the workflow processes were streamlined, which has led to seamless operations within the pharmacy department. There has been significant cost avoidance and waste reduction and enhanced interdepartmental satisfaction due to the reduction of reported medication errors.

  3. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
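
    The steady-state behavior described here is easy to reproduce numerically: iterate the forecast/analysis (Riccati) covariance recursion of the Kalman filter to steady state and inspect the leading eigenvalues of the analysis error covariance. The small nonnormal system and noise levels below are illustrative stand-ins for the paper's advection example.

    ```python
    import numpy as np

    def steady_state_analysis_covariance(F, H, Q, R, n_iter=500):
        """Iterate the forecast/analysis cycle of the Kalman filter to steady
        state and return the analysis error covariance P_a."""
        n = F.shape[0]
        P_a = np.eye(n)
        for _ in range(n_iter):
            P_f = F @ P_a @ F.T + Q                        # forecast step
            K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
            P_a = (np.eye(n) - K @ H) @ P_f                # analysis step
        return P_a

    # Small advection-like system observed at one point: unobserved growing
    # directions inflate the steady-state analysis error variance.
    n = 8
    F = 0.95 * np.eye(n) + 0.3 * np.eye(n, k=-1)           # nonnormal dynamics
    H = np.zeros((1, n)); H[0, 0] = 1.0                    # observe first component
    P_a = steady_state_analysis_covariance(F, H, 0.01 * np.eye(n), np.array([[0.1]]))
    w, V = np.linalg.eigh(P_a)
    print("leading eigenvalues:", np.round(w[::-1][:3], 3))
    ```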

  4. Frozen section analysis of margins for head and neck tumor resections: reduction of sampling errors with a third histologic level.

    PubMed

    Olson, Stephen M; Hussaini, Mohammad; Lewis, James S

    2011-05-01

    Frozen section analysis is an essential tool for assessing margins intra-operatively to assure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and 2.5% when three levels were evaluated (P = 0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P = 0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections), the sampling error rate for two-level sectioning was 15.3% versus 7.4% for three-level sectioning. This difference was statistically significant (P = 0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.

  5. Heterogeneity of spontaneous DNA replication errors in single isogenic Escherichia coli cells

    PubMed Central

    2018-01-01

    Despite extensive knowledge of the molecular mechanisms that control mutagenesis, it is not known how spontaneous mutations are produced in cells with fully operative mutation-prevention systems. By using a mutation assay that allows visualization of DNA replication errors and stress response transcriptional reporters, we examined populations of isogenic Escherichia coli cells growing under optimal conditions without exogenous stress. We found that spontaneous DNA replication errors in proliferating cells arose more frequently in subpopulations experiencing endogenous stresses, such as problems with proteostasis, genome maintenance, and reactive oxidative species production. The presence of these subpopulations of phenotypic mutators is not expected to affect the average mutation frequency or to reduce the mean population fitness in a stable environment. However, these subpopulations can contribute to overall population adaptability in fluctuating environments by serving as a reservoir of increased genetic variability.

  6. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this measured interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
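
    A one-dimensional analogue conveys the idea of direct interpolation-error adaptation: measure the linear-interpolation error at element midpoints and bisect the worst element, equidistributing measured error with no intervening metric. The test function and cycle count are illustrative, not from the paper.

    ```python
    import numpy as np

    def adapt_1d(f, x, n_cycles=20):
        """Refine a 1D grid by directly evaluating the linear-interpolation
        error of f at element midpoints and bisecting the worst element,
        without forming a metric (1D analogue of the paper's approach)."""
        for _ in range(n_cycles):
            mid = 0.5 * (x[:-1] + x[1:])
            err = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))  # measured error
            worst = np.argmax(err)
            x = np.insert(x, worst + 1, mid[worst])              # bisect element
        return x

    x = adapt_1d(lambda t: np.tanh(20 * (t - 0.5)), np.linspace(0.0, 1.0, 5))
    print(np.round(np.diff(x).min(), 4))   # smallest elements cluster at the layer
    ```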

  7. Error correction in short time steps during the application of quantum gates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castro, L.A. de, E-mail: leonardo.castro@usp.br; Napolitano, R.D.J.

    2016-04-15

    We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interleaved with correction procedures. A prescription for how these gates can be constructed is provided, as well as a proof that, even in the cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.

  8. Measurement and analysis of operating system fault tolerance

    NASA Technical Reports Server (NTRS)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.

  9. A Systems Modeling Approach for Risk Management of Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2012-01-01

    Commanding errors are most often (though not always) traceable to procedures: lack of maturity in the processes, incomplete requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes in standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to making corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.

  10. Reducing Errors in Satellite Simulated Views of Clouds with an Improved Parameterization of Unresolved Scales

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Marchand, R.; Ackerman, T. P.

    2016-12-01

    Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
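
    A minimal sketch of a subcolumn generator under the maximum-random overlap assumption that the paper replaces (a simplified Raisanen-style rule; the generalized-overlap and condensate-heterogeneity extensions are not shown, and all names are assumptions):

        import numpy as np

        def subcolumns_max_random(cf, n_sub, rng):
            """Binary cloud masks under maximum-random overlap.

            A subcolumn is cloudy in layer k where its rank x > 1 - cf[k].
            Adjacent cloudy layers reuse the rank (maximum overlap); below a
            clear layer the rank is redrawn within the clear part (random).
            """
            n_lay = len(cf)
            mask = np.zeros((n_sub, n_lay), dtype=bool)
            x = rng.random(n_sub)
            mask[:, 0] = x > 1.0 - cf[0]
            for k in range(1, n_lay):
                fresh = rng.random(n_sub) * (1.0 - cf[k - 1])
                x = np.where(mask[:, k - 1], x, fresh)
                mask[:, k] = x > 1.0 - cf[k]
            return mask

        rng = np.random.default_rng(0)
        mask = subcolumns_max_random(np.array([0.2, 0.5, 0.3]), 100_000, rng)
        print(mask.mean(axis=0))   # ~[0.2, 0.5, 0.3]: layer fractions preserved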

  11. Using the principles of circadian physiology enhances shift schedule design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connolly, J.J.; Moore-Ede, M.C.

    1987-01-01

    Nuclear power plants must operate 24 hours a day, 7 days a week. For the most part, shift schedules currently in use at nuclear power plants have been designed to meet operational needs without considering the biological clocks of the human operators. The development of schedules that also take circadian principles into account is a positive step that can be taken to improve plant safety by optimizing operator alertness. These schedules reduce the probability of human error, especially during backshifts. In addition, training programs that teach round-the-clock workers how to deal with the problems of shiftwork can help to optimize performance and alertness. These programs teach shiftworkers the underlying causes of the sleep problems associated with shiftwork and also provide coping strategies for improving sleep and dealing with the transition between shifts. When these training programs are coupled with an improved schedule, the problems associated with working round-the-clock can be significantly reduced.

  12. Cost effectiveness of stream-gaging program in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.

    1985-01-01

    This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the gage/discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent. Owing to continual changes in the composition of the network and the changes in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. The cost of these updates needs to be considered in decisions concerning the feasibility of flexible servicing schedules.

  13. Evaluation of Trajectory Errors in an Automated Terminal-Area Environment

    NASA Technical Reports Server (NTRS)

    Oseguera-Lohr, Rosa M.; Williams, David H.

    2003-01-01

    A piloted simulation experiment was conducted to document the trajectory errors associated with use of an airplane's Flight Management System (FMS) in conjunction with a ground-based ATC automation system, Center-TRACON Automation System (CTAS) in the terminal area. Three different arrival procedures were compared: current-day (vectors from ATC), modified (current-day with minor updates), and data link with FMS lateral navigation. Six active airline pilots flew simulated arrivals in a fixed-base simulator. The FMS-datalink procedure resulted in the smallest time and path distance errors, indicating that use of this procedure could reduce the CTAS arrival-time prediction error by about half over the current-day procedure. Significant sources of error contributing to the arrival-time error were crosstrack errors and early speed reduction in the last 2-4 miles before the final approach fix. Pilot comments were all very positive, indicating the FMS-datalink procedure was easy to understand and use, and the increased head-down time and workload did not detract from the benefit. Issues that need to be resolved before this method of operation would be ready for commercial use include development of procedures acceptable to controllers, better speed conformance monitoring, and FMS database procedures to support the approach transitions.

  14. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    PubMed

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. By applying the proposed similarity measure to endoscope tracking, it was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, or normalized sum squared difference. Based on clinical data evaluation, the tracking error was reduced significantly, from at least 14.6 mm to 4.5 mm. Processing was accelerated to more than 30 frames per second using a graphics processing unit.
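
    The proposed measure builds on structural-similarity ideas combining luminance, contrast, and structure; a minimal sketch of the standard global SSIM index it extends (the paper's discriminative weighting is not reproduced here):

        import numpy as np

        def ssim(x, y, data_range=255.0):
            """Global SSIM index of two equally sized grayscale patches."""
            c1 = (0.01 * data_range) ** 2
            c2 = (0.03 * data_range) ** 2
            x, y = x.astype(float), y.astype(float)
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            return (((2 * mx * my + c1) * (2 * cov + c2)) /
                    ((mx**2 + my**2 + c1) * (vx + vy + c2)))

        a = np.random.default_rng(0).integers(0, 256, (64, 64))
        print(ssim(a, a))   # 1.0 for identical patches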

  15. Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip

    NASA Astrophysics Data System (ADS)

    Jung, In-Seok

    MOSFET scaling has served industry very well for decades by providing improvements in transistor performance, power, and cost. However, scaled systems-on-chip (SoCs) incur high test complexity and cost due to several issues, such as limited pin count and the integration of mixed analog and digital circuits. Self-calibration is therefore a promising method to improve yield and to reduce manufacturing cost by simplifying the test process, because process-variation effects can be addressed by the self-calibration technique itself. Since prior published calibration techniques were developed for specific target applications, they are not easily reused for other applications. To address these issues, this dissertation proposes several novel self-calibration design techniques for mixed-signal circuits, applied to an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. ADCs are essential components in SoCs, and the proposed self-calibration approach also compensates for process variations. The proposed approach targets the successive approximation (SA) ADC. First, the offset error of the comparator in the SA-ADC is reduced by enabling a capacitor array at the input nodes for better matching. In addition, auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied in the foreground, the power overhead in the SA-ADC case is minimal, because the calibration circuit is deactivated during normal operation. Another benefit of the proposed technique is that the offset voltage of the comparator is continuously adjusted at every bit-decision step, because not only the inherent offset voltage of the comparator but also the mismatch of the DAC are compensated simultaneously. The synthesized digital calibration controller operates in foreground mode and has been highly optimized for low power and better performance with a simplified structure. In addition, to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high-speed SAR operation, a variable clock time technique is used to reduce not only peak current but also die area; the technique removes conversion-time waste and readily extends the SAR operation speed. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm² and it consumes 4.62 mW from a 1.2 V power supply.
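
    For orientation, a behavioral sketch of a successive-approximation conversion with a foreground-calibrated comparator-offset correction; this is an illustration under simplified assumptions, not the dissertation's capacitor-array implementation:

        def sar_convert(vin, vref, bits, offset=0.0, offset_corr=0.0):
            """Behavioral SAR ADC: binary-search the DAC code against vin."""
            code = 0
            for b in reversed(range(bits)):
                trial = code | (1 << b)
                vdac = vref * trial / (1 << bits)
                # the comparator sees its own offset; calibration subtracts an estimate
                if vin + offset - offset_corr >= vdac:
                    code = trial
            return code

        # Foreground calibration: convert a known mid-scale input, store the error.
        vref, bits, offset = 1.2, 12, 0.004
        mid = vref / 2
        raw = sar_convert(mid, vref, bits, offset)
        offset_est = raw * vref / (1 << bits) - mid       # ~ +4 mV estimate
        print(sar_convert(0.3, vref, bits, offset, offset_est))   # ~1024, corrected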

  16. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  17. Quantum cryptographic system with reduced data loss

    DOEpatents

    Lo, H.K.; Chau, H.F.

    1998-03-24

    A secure method for distributing a random cryptographic key with reduced data loss is disclosed. Traditional quantum key distribution systems employ similar probabilities for the different communication modes and thus reject at least half of the transmitted data. The invention substantially reduces the amount of discarded data (those that are encoded and decoded in different communication modes e.g. using different operators) in quantum key distribution without compromising security by using significantly different probabilities for the different communication modes. Data is separated into various sets according to the actual operators used in the encoding and decoding process and the error rate for each set is determined individually. The invention increases the key distribution rate of the BB84 key distribution scheme proposed by Bennett and Brassard in 1984. Using the invention, the key distribution rate increases with the number of quantum signals transmitted and can be doubled asymptotically. 23 figs.
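
    A toy sketch of the biased-basis idea with hypothetical parameters: when both parties choose a basis with probability p far from 1/2, the sifted fraction p^2 + (1-p)^2 approaches 1 instead of 1/2, and errors are then estimated separately on each basis-matched set:

        import random

        def sift(n, p):
            """Count qubits kept when both parties pick basis Z with probability p."""
            kept_z = kept_x = 0
            for _ in range(n):
                a = random.random() < p   # sender basis (True = Z)
                b = random.random() < p   # receiver basis
                if a == b:
                    if a: kept_z += 1
                    else: kept_x += 1
            return kept_z, kept_x

        for p in (0.5, 0.9, 0.99):
            kz, kx = sift(100_000, p)
            frac = (kz + kx) / 100_000
            print(f"p={p}: sifted {frac:.3f} (theory {p**2 + (1 - p)**2:.3f}), "
                  f"Z-set {kz}, X-set {kx}")

    Estimating the error rate individually on the Z-kept and X-kept sets, as the patent describes, is what preserves security despite the strongly biased basis choice.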

  18. Quantum cryptographic system with reduced data loss

    DOEpatents

    Lo, Hoi-Kwong; Chau, Hoi Fung

    1998-01-01

    A secure method for distributing a random cryptographic key with reduced data loss. Traditional quantum key distribution systems employ similar probabilities for the different communication modes and thus reject at least half of the transmitted data. The invention substantially reduces the amount of discarded data (those that are encoded and decoded in different communication modes e.g. using different operators) in quantum key distribution without compromising security by using significantly different probabilities for the different communication modes. Data is separated into various sets according to the actual operators used in the encoding and decoding process and the error rate for each set is determined individually. The invention increases the key distribution rate of the BB84 key distribution scheme proposed by Bennett and Brassard in 1984. Using the invention, the key distribution rate increases with the number of quantum signals transmitted and can be doubled asymptotically.

  19. Memory protection

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.

  20. How to improve an un-alterable model forecast? A sequential data assimilation based error updating approach

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.

    2012-12-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and significantly influences the operation of hydropower reservoirs. Improving hourly reservoir inflow forecasts over a 24-hour lead time is considered with the day-ahead (Elspot) market of the Nordic exchange in perspective. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and was kept to minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach contain suitable information for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the current procedure for improving the accuracy of inflow forecasts at lead times up to 24 hours, and its reliability in different seasons of the year, will be illustrated and discussed thoroughly.
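
    As a simplified illustration of the error-updating idea (a plain AR(1) discrepancy model, not the EUREQA-derived structure or the full sequential Monte Carlo assimilation used in the study):

        def corrected_forecast(q_model, q_obs_last, q_model_last, rho=0.8):
            """One-step error update: propagate the last observed discrepancy.

            rho is an assumed AR(1) coefficient; in practice it would be
            re-estimated recursively as new observations arrive.
            """
            err_last = q_obs_last - q_model_last
            return q_model + rho * err_last

        print(corrected_forecast(q_model=52.0, q_obs_last=48.0, q_model_last=50.0))
        # 52 + 0.8 * (-2) = 50.4: the frozen model's output is nudged, not altered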

  1. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials such as (1) the Legendre polynomial, (2) the Chebyshev polynomial of the second kind, (3) the Chebyshev polynomial of the third kind, and (4) the Chebyshev polynomial of the fourth kind. Maximum absolute error and root mean square error are calculated for the illustrated examples and presented in tables for comparison. Numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better.
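
    Schematically, and with notation assumed for illustration, the operational-matrix device expands the unknown in a truncated Jacobi basis so that integration becomes a matrix product and the FDDE collapses to linear algebra:

        \[
        y(x) \approx \mathbf{c}^{\mathsf{T}}\boldsymbol{\Phi}(x),
        \qquad
        \int_0^x \boldsymbol{\Phi}(t)\,\mathrm{d}t \approx P\,\boldsymbol{\Phi}(x),
        \]

    so fractional integration is represented by a fractional-order analogue of \(P\), and collocation turns the FDDE into a linear system \(A\mathbf{c} = \mathbf{b}\) for the coefficients \(\mathbf{c}\).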

  2. Adaptive Offset Correction for Intracortical Brain Computer Interfaces

    PubMed Central

    Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.

    2014-01-01

    Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ±10.1%; p<0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868

  3. Adaptive offset correction for intracortical brain-computer interfaces.

    PubMed

    Homer, Mark L; Perge, Janos A; Black, Michael J; Harrison, Matthew T; Cash, Sydney S; Hochberg, Leigh R

    2014-03-01

    Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.
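
    A minimal sketch of the adaptive-offset idea: re-estimate the decoder's bias term from recent residuals and shrink it, the shrinkage standing in for the penalty in penalized maximum likelihood. Gains and names are assumptions; this is not MOCA's exact estimator:

        import numpy as np

        class OffsetTracker:
            """Exponentially weighted, shrunken estimate of a decoder bias term."""

            def __init__(self, dim, alpha=0.05, shrink=0.9):
                self.offset = np.zeros(dim)
                self.alpha = alpha      # forgetting rate (assumed value)
                self.shrink = shrink    # stands in for the ML penalty (assumed)

            def update(self, residual):
                # residual: observed neural feature minus the decoder's prediction
                self.offset = (1 - self.alpha) * self.offset + self.alpha * residual
                return self.shrink * self.offset

        tracker = OffsetTracker(dim=2)
        correction = None
        for z in [np.array([0.4, -0.1])] * 50:   # synthetic constant drift
            correction = tracker.update(z)
        print(correction)                        # tends toward shrink * drift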

  4. Cost effectiveness of the stream-gaging program in Louisiana

    USGS Publications Warehouse

    Herbert, R.A.; Carlson, D.D.

    1985-01-01

    This report documents the results of a study of the cost effectiveness of the stream-gaging program in Louisiana. Data uses and funding sources were identified for the 68 continuous-record stream gages currently (1984) in operation with a budget of $408,700. Three stream gages have uses specific to a short-term study with no need for continued data collection beyond the study. The remaining 65 stations should be maintained in the program for the foreseeable future. In addition to the current operation of continuous-record stations, a number of wells, flood-profile gages, crest-stage gages, and stage stations are serviced on the continuous-record station routes, increasing the current budget to $423,000. The average standard error of estimate for data collected at the stations is 34.6%. Standard errors computed in this study are one measure of streamflow errors and can be used as guidelines in comparing the effectiveness of alternative networks. By using the routes and number of measurements prescribed by the 'Traveling Hydrographer Program,' the standard error could be reduced to 31.5% with the current budget of $423,000. If the gaging resources are redistributed, the 34.6% overall level of accuracy at the 68 continuous-record sites and the servicing of the additional wells or gages could be maintained with a budget of approximately $410,000. (USGS)

  5. Modeling to Improve the Risk Reduction Process for Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Bryant, Larry; Waggoner, Bruce

    2013-01-01

    The Jet Propulsion Laboratory has learned that even innocuous errors in the spacecraft command process can have significantly detrimental effects on a space mission. Consequently, such Command File Errors (CFE), regardless of their effect on the spacecraft, are treated as significant events for which a root cause is identified and corrected. A CFE during space mission operations is often the symptom of imbalance or inadequacy within the system that encompasses the hardware and software used for command generation, as well as the human experts and processes involved in this endeavor. As we move into an era of increased collaboration with other NASA centers and commercial partners, these systems become more and more complex. Consequently, the ability to thoroughly model and analyze CFEs formally in order to reduce the risk they pose is increasingly important. In this paper, we summarize the results of applying modeling techniques previously developed to the DAWN flight project. The original models were built with the input of subject matter experts from several flight projects. We have now customized these models to address specific questions for the DAWN flight project and formulated use cases to address its unique mission needs. The goal of this effort is to enhance the project's ability to meet commanding reliability requirements for operations and to assist it in managing Command File Errors.

  6. Mission operations and command assurance: Flight operations quality improvements

    NASA Technical Reports Server (NTRS)

    Welz, Linda L.; Bruno, Kristin J.; Kazz, Sheri L.; Potts, Sherrill S.; Witkowski, Mona M.

    1994-01-01

    Mission Operations and Command Assurance (MO&CA) is a Total Quality Management (TQM) task on JPL projects to instill quality in flight mission operations. From a system engineering view, MO&CA facilitates communication and problem-solving among flight teams and provides continuous process improvement to reduce risk in mission operations by addressing human factors. The MO&CA task has evolved from participating as a member of the spacecraft team to an independent team reporting directly to flight project management and providing system level assurance. JPL flight projects have benefited significantly from MO&CA's effort to contain risk and prevent rather than rework errors. MO&CA's ability to provide direct transfer of knowledge allows new projects to benefit from previous and ongoing flight experience.

  7. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
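
    Written out, the MEU rule with equal error utilities (notation assumed for illustration) is

        \[
        D(x) \;=\; \arg\max_{i\in\{1,2,3\}} \sum_{j=1}^{3} U_{ij}\, P(H_j)\, p(x \mid H_j),
        \]

    and setting \(U_{ij} = u_j\) for every incorrect decision \(i \ne j\) makes the rule depend on the data only through the two likelihood ratios \(p(x\mid H_2)/p(x\mid H_1)\) and \(p(x\mid H_3)/p(x\mid H_1)\), which is the dimensionality reduction the equal error utility assumption buys.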

  8. Increased instrument intelligence--can it reduce laboratory error?

    PubMed

    Jekelis, Albert W

    2005-01-01

    Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities of newer-generation analyzers compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent fluidics-check process, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and the potential impact of error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erred results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.

  9. Mission operations and command assurance: Instilling quality into flight operations

    NASA Technical Reports Server (NTRS)

    Welz, Linda L.; Witkowski, Mona M.; Bruno, Kristin J.; Potts, Sherrill S.

    1993-01-01

    Mission Operations and Command Assurance (MO&CA) is a Total Quality Management (TQM) task on JPL projects to instill quality in flight mission operations. From a system engineering view, MO&CA facilitates communication and problem-solving among flight teams and provides continuous process improvement to reduce the probability of radiating incorrect commands to a spacecraft. The MO&CA task has evolved from participating as a member of the spacecraft team to an independent team reporting directly to flight project management and providing system level assurance. JPL flight projects have benefited significantly from MO&CA's effort to contain risk and prevent rather than rework errors. MO&CA's ability to provide direct transfer of knowledge allows new projects to benefit from previous and ongoing flight experience.

  10. FDDI network test adaptor error injection circuit

    NASA Technical Reports Server (NTRS)

    Eckenrode, Thomas (Inventor); Stauffer, David R. (Inventor); Stempski, Rebecca (Inventor)

    1994-01-01

    An apparatus for injecting errors into a FDDI token ring network is disclosed. The error injection scheme operates by fooling a FORMAC into thinking it sent a real frame of data. This is done by using two RAM buffers. The RAM buffer normally accessed by the RBC/DPC becomes a SHADOW RAM during error injection operation. A dummy frame is loaded into the shadow RAM in order to fool the FORMAC. This data is just like the data that would be used if sending a normal frame, with the restriction that it must be shorter than the error injection data. The other buffer, the error injection RAM, contains the error injection frame. The error injection data is sent out to the media by switching a multiplexor. When the FORMAC is done transmitting the data, the multiplexor is switched back to the normal mode. Thus, the FORMAC is unaware of what happened and the token ring remains operational.

  11. Arithmetic functions in torus and tree networks

    DOEpatents

    Bhanot, Gyan; Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.

    2007-12-25

    Methods and systems for performing arithmetic functions. In accordance with a first aspect of the invention, methods and apparatus are provided, working in conjunction with software algorithms and a hardware implementation of class network routing, to achieve a very significant reduction in the time required for global arithmetic operations on the torus, leading to greater scalability of applications running on large parallel machines. The invention involves three steps for improving the efficiency and accuracy of global operations: (1) ensuring, when necessary, that all the nodes do the global operation on the data in the same order and so obtain a unique answer, independent of roundoff error; (2) using the topology of the torus to minimize the number of hops and the bidirectional capabilities of the network to reduce the number of time steps in the data transfer operation to an absolute minimum; and (3) using class function routing to reduce latency in the data transfer. With the method of this invention, every single element is injected into the network only once and is stored and forwarded without any further software overhead. In accordance with a second aspect of the invention, methods and systems are provided to efficiently implement global arithmetic operations on a network that supports global combining operations. The latency of such global operations is greatly reduced by using these methods.
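
    Step (1) needs no special hardware to state: a unique, roundoff-independent answer only requires every node to combine contributions in one agreed order. A minimal sketch (illustrative; not the patent's class-routing implementation):

        def deterministic_global_sum(contributions):
            """Sum per-node values in a fixed node-id order.

            contributions: dict mapping node id -> local float value.
            Floating-point addition is not associative, so fixing the order
            guarantees every node that runs this reduction gets the same bits.
            """
            total = 0.0
            for node_id in sorted(contributions):
                total += contributions[node_id]
            return total

        vals = {2: 1e16, 0: 1.0, 1: -1e16}
        print(deterministic_global_sum(vals))   # identical result on every node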

  12. Blade Sections in Streamwise Oscillations into Reverse Flow

    DTIC Science & Technology

    2015-05-07

    [Fragment of a report documentation page; only partial abstract text is recoverable.] Keywords: reverse flow, oscillating airfoils, oscillating freestream. Blade sections in reverse flow behave more like a flat plate or bluff body than an airfoil, so reverse-flow operation requires investigation and quantification to be captured accurately. The work measured airfoil integrated quantities (lift, drag, moment) in reverse flow and developed new algorithms for comprehensive codes, reducing errors by 30%-50%.

  13. Reduced Kalman Filters for Clock Ensembles

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    2011-01-01

    This paper summarizes the author's work on timescales based on Kalman filters that act upon the clock comparisons. The natural Kalman timescale algorithm tends to optimize long-term timescale stability at the expense of short-term stability. By subjecting each post-measurement error covariance matrix to a non-transparent reduction operation, one obtains corrected clocks with improved short-term stability and little sacrifice of long-term stability.

  14. The systems approach to error reduction: factors influencing inoculation injury reporting in the operating theatre.

    PubMed

    Cutter, Jayne; Jordan, Sue

    2013-11-01

    To examine the frequency of, and factors influencing, reporting of mucocutaneous and percutaneous injuries in operating theatres. Surgeons and peri-operative nurses risk acquiring blood-borne viral infections during surgical procedures. Appropriate first-aid and prophylactic treatment after an injury can significantly reduce the risk of infection. However, studies indicate that injuries often go unreported. The 'systems approach' to error reduction relies on reporting incidents and near misses. Failure to report will compromise safety. A postal survey of all surgeons and peri-operative nurses engaged in exposure prone procedures in nine Welsh hospitals, face-to-face interviews with selected participants and telephone interviews with Infection Control Nurses. The response rate was 51.47% (315/612). Most respondents reported one or more percutaneous (183/315, 58.1%) and/or mucocutaneous injuries (68/315, 21.6%) in the 5 years preceding the study. Only 54.9% (112/204) reported every injury. Surgeons were poorer at reporting: 70/133 (52.6%) reported all or >50% of their injuries compared with 65/71 nurses (91.5%). Injuries are frequently under-reported, possibly compromising safety in operating theatres. A significant number of inoculation injuries are not reported. Factors influencing under-reporting were identified. This knowledge can assist managers in improving reporting and encouraging a robust safety culture within operating departments. © 2012 John Wiley & Sons Ltd.

  15. Design considerations to improve cognitive ergonomic issues of unmanned vehicle interfaces utilizing video game controllers.

    PubMed

    Oppold, P; Rupp, M; Mouloua, M; Hancock, P A; Martin, J

    2012-01-01

    Unmanned systems (UAVs, UCAVs, and UGVs) still have major human factors and ergonomic challenges related to the effective design of their control interface systems, which are crucial to their efficient operation, maintenance, and safety. Unmanned system interfaces with a human-centered approach promote intuitive interfaces that are easier to learn and that reduce human errors and other cognitive ergonomic issues with interface design. Automation has shifted workload from physical to cognitive; thus, control interfaces for unmanned systems need to reduce the mental workload on operators and facilitate the interaction between vehicle and operator. Two-handed video game controllers provide wide usability within the overall population, prior exposure for new operators, and a variety of interface complexity levels to match the complexity level of the task and reduce cognitive load. This paper categorizes and provides a taxonomy for 121 haptic interfaces from the entertainment industry that can be utilized as control interfaces for unmanned systems. Five categories of controllers were defined based on the complexity of the buttons, control pads, joysticks, and switches on the controller. This allows selection of the level of complexity needed for a specific task without creating an entirely new design or utilizing an overly complex one.

  16. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties, and on the consideration of results of unsuccessful belief-propagation decodings.

  17. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE PAGES

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen; ...

    2017-10-27

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties, and on the consideration of results of unsuccessful belief-propagation decodings.

  18. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.

    2017-10-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties, and on the consideration of results of unsuccessful belief-propagation decodings.

  19. Bias estimation for the Landsat 8 operational land imager

    USGS Publications Warehouse

    Morfitt, Ron; Vanderwerff, Kelly

    2011-01-01

    The Operational Land Imager (OLI) is a pushbroom sensor that will be a part of the Landsat Data Continuity Mission (LDCM). This instrument is the latest in the line of Landsat imagers, and will continue to expand the archive of calibrated earth imagery. An important step in producing a calibrated image from instrument data is accurately accounting for the bias of the imaging detectors. Bias variability is one factor that contributes to error in bias estimation for OLI. Typically, the bias is simply estimated by averaging dark data on a per-detector basis. However, data acquired during OLI pre-launch testing exhibited bias variation that correlated well with the variation in concurrently collected data from a special set of detectors on the focal plane. These detectors are sensitive to certain electronic effects but not directly to incoming electromagnetic radiation. A method of using data from these special detectors to estimate the bias of the imaging detectors was developed, but found not to be beneficial at typical radiance levels as the detectors respond slightly when the focal plane is illuminated. In addition to bias variability, a systematic bias error is introduced by the truncation performed by the spacecraft of the 14-bit instrument data to 12-bit integers. This systematic error can be estimated and removed on average, but the per pixel quantization error remains. This paper describes the variability of the bias, the effectiveness of a new approach to estimate and compensate for it, as well as the errors due to truncation and how they are reduced.
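
    The truncation arithmetic is simple to illustrate: assuming "truncation" means flooring (dropping the two least-significant bits), each 14-bit sample loses 1.5 counts on average, and that average is what can be added back; a sketch with synthetic values (assumed, for illustration):

        import numpy as np

        rng = np.random.default_rng(1)
        raw14 = rng.integers(0, 2 ** 14, size=100_000)   # synthetic 14-bit samples

        trunc12 = raw14 >> 2                 # truncation to 12 bits drops 2 LSBs
        bias = np.mean(raw14 - (trunc12 << 2))
        print(bias)                          # ~1.5 counts (in 14-bit LSBs)

        # Average compensation: restore the expected value of the dropped bits.
        restored = (trunc12 << 2) + 1.5
        print(np.mean(raw14 - restored))     # ~0 on average; per-pixel error remains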

  20. Implementation of a MFAC based position sensorless drive for high speed BLDC motors with nonideal back EMF.

    PubMed

    Li, Haitao; Ning, Xin; Li, Wenzhuo

    2017-03-01

    In order to improve the reliability and reduce the power consumption of high-speed BLDC motor systems, this paper presents a model-free adaptive control (MFAC) based position-sensorless drive with only a dc-link current sensor. The initial commutation points are obtained by detecting the zero-crossing points of the phase back EMF and then delaying them by 30 electrical degrees. Considering the commutation error caused by the low-pass filter (LPF) and other factors, the relationship between the commutation error angle and the dc-link current is analyzed, a corresponding MFAC based control method is proposed, and the commutation error can be corrected by the controller in real time. Both simulation and experimental results show that the proposed correction method achieves the ideal commutation effect within the entire operating speed range. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
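
    A minimal sketch of a compact-form MFAC update law of the kind such a controller uses, driving a measured error signal toward its reference; the pseudo-partial-derivative update, gains, and toy plant are all assumptions, not the paper's design:

        def mfac_step(state, y, y_ref, eta=0.5, mu=1.0, rho=0.6, lam=1.0):
            """One compact-form MFAC update (assumed gains, illustrative).

            state = (u_prev, du_prev, y_prev, phi): last input, input change,
            output, and pseudo-partial-derivative estimate.
            """
            u_prev, du_prev, y_prev, phi = state
            dy = y - y_prev
            # update the pseudo-partial-derivative estimate
            phi += eta * du_prev / (mu + du_prev ** 2) * (dy - phi * du_prev)
            # control update toward the reference
            du = rho * phi / (lam + phi ** 2) * (y_ref - y)
            u = u_prev + du
            return u, (u, du, y, phi)

        state = (0.0, 0.0, 0.0, 0.5)
        y = 0.0
        for _ in range(30):                  # toy first-order plant tracking u
            u, state = mfac_step(state, y, y_ref=1.0)
            y += 0.3 * (u - y)
        print(round(y, 3))                   # approaches the reference 1.0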

  1. Mitigating the Impacts of Climate Nonstationarity on Seasonal Streamflow Predictability in the U.S. Southwest

    NASA Astrophysics Data System (ADS)

    Lehner, Flavio; Wood, Andrew W.; Llewellyn, Dagmar; Blatchford, Douglas B.; Goodbody, Angus G.; Pappenberger, Florian

    2017-12-01

    Seasonal streamflow predictions provide a critical management tool for water managers in the American Southwest. In recent decades, persistent prediction errors for spring and summer runoff volumes have been observed in a number of watersheds in the American Southwest. While mostly driven by decadal precipitation trends, these errors also relate to the influence of increasing temperature on streamflow in these basins. Here we show that incorporating seasonal temperature forecasts from operational global climate prediction models into streamflow forecasting models adds prediction skill for watersheds in the headwaters of the Colorado and Rio Grande River basins. Current dynamical seasonal temperature forecasts now show sufficient skill to reduce streamflow forecast errors in snowmelt-driven regions. Such predictions can increase the resilience of streamflow forecasting and water management systems in the face of continuing warming as well as decadal-scale temperature variability and thus help to mitigate the impacts of climate nonstationarity on streamflow predictability.

  2. Pediatric Drowning: A Standard Operating Procedure to Aid the Prehospital Management of Pediatric Cardiac Arrest Resulting From Submersion.

    PubMed

    Best, Rebecca R; Harris, Benjamin H L; Walsh, Jason L; Manfield, Timothy

    2017-05-08

    Drowning is one of the leading causes of death in children. Resuscitating a child following submersion is a high-pressure situation, and standard operating procedures can reduce error. Currently, the Resuscitation Council UK guidance does not include a standard operating procedure on pediatric drowning. The objective of this project was to design a standard operating procedure to improve outcomes of drowned children. A literature review on the management of pediatric drowning was conducted. Relevant publications were used to develop a standard operating procedure for management of pediatric drowning. A concise standard operating procedure was developed for resuscitation following pediatric submersion. Specific recommendations include the following: the Heimlich maneuver should not be used in this context; however, prolonged resuscitation and therapeutic hypothermia are recommended. This standard operating procedure is a potentially useful adjunct to the Resuscitation Council UK guidance and should be considered for incorporation into its next iteration.

  3. Upgrades to Electronic Speckle Interferometer (ESPI) Operation and Data Analysis at NASA's Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Connelly, Joseph; Blake, Peter; Jones, Joycelyn

    2008-01-01

    The authors report operational upgrades and streamlined data analysis of a commissioned electronic speckle interferometer (ESPI) in a permanent in-house facility at NASA's Goddard Space Flight Center. Our ESPI was commercially purchased for use by the James Webb Space Telescope (JWST) development team. We have quantified and reduced systematic error sources, improved the software operability with a user-friendly graphic interface, developed an instrument simulator, streamlined data analysis for long-duration testing, and implemented a turn-key approach to speckle interferometry. We also summarize results from a test of the JWST support structure (previously published), and present new results from several pieces of test hardware at various environmental conditions.

  4. Liquid crystal point diffraction interferometer. Ph.D. Thesis - Arizona Univ., 1995

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.

    1995-01-01

    A new instrument, the liquid crystal point diffraction interferometer (LCPDI), has been developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer (PDI) and adds to it phase-stepping capability for quantitative interferogram analysis. The result is a compact, simple-to-align, environmentally insensitive interferometer capable of accurately measuring optical wavefronts with very high data density and with automated data reduction. This dissertation describes the theory of both the PDI and liquid crystal phase control. The design considerations for the LCPDI are presented, including manufacturing considerations. The operation and performance of the LCPDI are discussed, including sections regarding alignment, calibration, and amplitude modulation effects. The LCPDI is then demonstrated using two phase objects: a defocus difference wavefront, and a temperature distribution across a heated chamber filled with silicone oil. The measured results are compared to theoretical or independently measured results and show excellent agreement. A computer simulation of the LCPDI was performed to verify the source of an observed periodic phase measurement error. The error stems from intensity variations caused by dye molecules rotating within the liquid crystal layer. Methods are discussed for reducing this error, and algorithms are presented that do so; they are also useful for any phase-stepping interferometer that has unwanted intensity fluctuations, such as those caused by unregulated lasers.
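
    For reference, the textbook four-step phase-shifting recovery that phase stepping enables (not the dissertation's modified, error-reducing algorithm) is

        \[
        \varphi \;=\; \arctan\!\left(\frac{I_4 - I_2}{I_1 - I_3}\right),
        \]

    with intensities \(I_1,\dots,I_4\) recorded at phase steps \(0, \pi/2, \pi, 3\pi/2\). Frame-to-frame intensity fluctuations violate the constant-amplitude assumption behind this formula, which is precisely the error source the modified algorithms address.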

  5. Analysis of naturalistic driving videos of fleet services drivers to estimate driver error and potentially distracting behaviors as risk factors for rear-end versus angle crashes.

    PubMed

    Harland, Karisa K; Carney, Cher; McGehee, Daniel

    2016-07-03

    The objective of this study was to estimate the prevalence and odds of fleet driver errors and potentially distracting behaviors just prior to rear-end versus angle crashes. Naturalistic driving videos from fleet services drivers were analyzed for errors and potentially distracting behaviors occurring in the 6 s before crash impact. Categorical variables were examined using Pearson's chi-square test, and continuous variables, such as eyes-off-road time, were compared using Student's t-test. Multivariable logistic regression was used to estimate the odds of a driver error or potentially distracting behavior being present in the seconds before rear-end versus angle crashes. Of the 229 crashes analyzed, 101 (44%) were rear-end and 128 (56%) were angle crashes. Driver age, gender, and presence of passengers did not differ significantly by crash type. Over 95% of rear-end crashes involved inadequate surveillance, compared to only 52% of angle crashes (P < .0001). Almost 65% of rear-end crashes involved a potentially distracting driver behavior, whereas less than 40% of angle crashes involved these behaviors (P < .01). On average, drivers spent 4.4 s with their eyes off the road while operating or manipulating their cell phone. Drivers in rear-end crashes had 3.06 times higher adjusted odds (95% confidence interval [CI], 1.73-5.44) of being potentially distracted than those in angle crashes. Fleet driver errors and potentially distracting behaviors are frequent. This analysis provides data to inform safe-driving interventions for fleet services drivers. Further research is needed on effective interventions to reduce the likelihood of drivers' distracting behaviors and errors, potentially reducing crashes.
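
    The adjusted odds ratio comes from multivariable logistic regression; a minimal sketch on synthetic data (all variable names and values assumed):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        distracted = rng.integers(0, 2, n)       # potentially distracting behavior
        age = rng.normal(45, 12, n)
        # synthetic outcome: rear-end (1) vs angle (0), odds raised by distraction
        logit = -0.5 + 1.1 * distracted + 0.01 * (age - 45)
        rear_end = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = sm.add_constant(np.column_stack([distracted, age]))
        fit = sm.Logit(rear_end.astype(int), X).fit(disp=0)
        or_est = np.exp(fit.params[1])           # adjusted odds ratio for distraction
        ci = np.exp(fit.conf_int()[1])           # 95% CI for the distraction term
        print(round(or_est, 2), ci.round(2))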

  6. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error prediction by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to the most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network models' errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction from the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
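
    A minimal sketch of the response-surface idea: regress landing speed on a quadratic feature expansion by least squares (synthetic data; the operational variable set is assumed):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 400
        weight = rng.normal(60_000, 5_000, n)        # aircraft weight, synthetic
        headwind = rng.normal(8, 4, n)               # headwind, synthetic
        v_ref = (120 + 0.0006 * (weight - 60_000)
                 - 0.4 * headwind + rng.normal(0, 2, n))   # synthetic landing speed

        # standardize, then fit a quadratic response surface by least squares
        w = (weight - weight.mean()) / weight.std()
        h = (headwind - headwind.mean()) / headwind.std()
        X = np.column_stack([np.ones(n), w, h, w**2, h**2, w*h])
        coef, *_ = np.linalg.lstsq(X, v_ref, rcond=None)
        resid_sd = np.std(v_ref - X @ coef)
        print(f"residual standard deviation: {resid_sd:.2f} kt")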

  7. Testing of a novel pin array guide for accurate three-dimensional glenoid component positioning.

    PubMed

    Lewis, Gregory S; Stevens, Nicole M; Armstrong, April D

    2015-12-01

    A substantial challenge in total shoulder replacement is accurate positioning and alignment of the glenoid component. This challenge arises from limited intraoperative exposure and complex arthritic-driven deformity. We describe a novel pin array guide and method for patient-specific guiding of the glenoid central drill hole. We also experimentally tested the hypothesis that this method would reduce errors in version and inclination compared with 2 traditional methods. Polymer models of glenoids were created from computed tomography scans from 9 arthritic patients. Each 3-dimensional (3D) printed scapula was shrouded to simulate the operative situation. Three different methods for central drill alignment were tested, all with the target orientation of 5° retroversion and 0° inclination: no assistance, assistance by preoperative 3D imaging, and assistance by the pin array guide. Version and inclination errors of the drill line were compared. Version errors using the pin array guide (3° ± 2°) were significantly lower than version errors associated with no assistance (9° ± 7°) and preoperative 3D imaging (8° ± 6°). Inclination errors were also significantly lower using the pin array guide compared with no assistance. The new pin array guide substantially reduced errors in orientation of the central drill line. The guide method is patient specific but does not require rapid prototyping and instead uses adjustments to an array of pins based on automated software calculations. This method may ultimately provide a cost-effective solution enabling surgeons to obtain accurate orientation of the glenoid.

  8. Effect of satellite formations and imaging modes on global albedo estimation

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.

    2016-05-01

    We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map - water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary 4-satellite formation, dynamically changing the reference, nadir-pointing satellite reduces the average albedo error to 0.003, from 0.006 in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging and permits continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2, or 0.4%, in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination, if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ only in right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down four-fold to 3%. More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy and commands uplinked once a day. The formations can be maintained at less than 1 m/s of monthly ΔV per satellite.
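
    The quoted radiometric conversion can be reproduced with back-of-the-envelope arithmetic; the sketch below assumes a total solar irradiance of roughly 1361 W/m2 and a globally averaged outgoing flux of about a quarter of that (assumed values, not taken from the paper):

      S0 = 1361.0                   # total solar irradiance, W/m^2 (assumed)
      d_albedo = 0.001              # albedo estimation error from the formation study
      dF = d_albedo * S0            # error in reflected flux at full insolation
      outgoing = S0 / 4.0           # globally averaged outgoing radiation, ~340 W/m^2
      print(f"{dF:.2f} W/m^2 = {100*dF/outgoing:.1f}% of Earth's outgoing radiation")
      # -> 1.36 W/m^2 = 0.4%, matching the figures quoted in the abstract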

  9. Calibration of misalignment errors in the non-null interferometry based on reverse iteration optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xinmu; Hao, Qun; Hu, Yao; Wang, Shaopu; Ning, Yan; Li, Tengfei; Chen, Shufen

    2017-10-01

    With no necessity of compensating the whole aberration introduced by the aspheric surface, the non-null test has an advantage over the null test in applicability. However, retrace error, caused by the path difference between the rays reflected from the surface under test (SUT) and the incident rays, is introduced into the measurement and contributes to the residual wavefront aberrations (RWAs), along with surface figure error (SFE), misalignment error and other influences. Because it is difficult to separate from the RWAs, the misalignment error may remain after measurement, and it is hard to identify whether it has been removed. Removing the misalignment error is therefore a primary task. A brief demonstration of the digital Moiré interferometric technique is presented, and a calibration method for misalignment error based on a reverse iteration optimization (RIO) algorithm in the non-null test is described. The proposed method operates mostly in the virtual system and requires no accurate adjustment of the real interferometer, which is a significant advantage in reducing the errors introduced by repeated, complicated manual adjustment and thus improves the accuracy of the aspheric surface test. Simulation verification is presented: the calibration accuracy of the position and attitude reaches at least 10⁻⁵ mm and 0.0056 × 10⁻⁶ rad, respectively. The simulations demonstrate that the influence of misalignment error can be precisely calculated and removed after calibration.
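
    The reverse-iteration idea, adjusting a virtual model's misalignment parameters until its predicted residual wavefront reproduces the measurement, can be sketched generically; the toy aberration model below stands in for the real ray-traced digital-Moiré system and is purely illustrative:

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical stand-in for the virtual (modeled) interferometer: returns the
      # residual wavefront predicted for a misalignment vector p = (dx, dy, dz, tx, ty).
      # A real implementation would ray-trace the non-null system; this toy
      # polynomial model just keeps the sketch runnable.
      def virtual_rwa(p, grid):
          dx, dy, dz, tx, ty = p
          x, y = grid
          return dx*x + dy*y + dz*(x**2 + y**2) + tx*x*y + ty*(x**2 - y**2)

      grid = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
      p_true = np.array([2e-4, -1e-4, 5e-5, 3e-5, -2e-5])   # the "real" misalignment
      measured = virtual_rwa(p_true, grid) \
          + np.random.default_rng(2).normal(0, 1e-7, (64, 64))

      # Reverse iteration: tune the virtual system's misalignment until its
      # predicted RWA matches the measurement, then subtract its contribution.
      fit = least_squares(lambda p: (virtual_rwa(p, grid) - measured).ravel(),
                          x0=np.zeros(5))
      print("recovered misalignment:", fit.x)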

  10. Wind Power Forecasting Error Distributions: An International Comparison; Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, B. M.; Lew, D.; Milligan, M.

    2012-09-01

    Wind power forecasting is expected to be an important enabler for greater penetration of wind power into electricity systems. Because no wind forecasting system is perfect, a thorough understanding of the errors that do occur can be critical to system operation functions, such as the setting of operating reserve levels. This paper provides an international comparison of the distribution of wind power forecasting errors from operational systems, based on real forecast data. The paper concludes with an assessment of similarities and differences between the errors observed in different locations.
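
    The comparison the paper performs, characterizing the shape of operational forecast-error distributions, can be illustrated with a short sketch; the Laplace-distributed synthetic errors below are placeholders, not operational data:

      import numpy as np
      from scipy import stats

      # Synthetic hour-ahead wind power forecast errors (fraction of capacity);
      # real inputs would be operational forecast minus actual time series.
      rng = np.random.default_rng(3)
      errors = rng.laplace(loc=0.0, scale=0.05, size=8760)

      print("mean:", errors.mean())
      print("std :", errors.std())
      print("excess kurtosis:", stats.kurtosis(errors))  # 0 for Gaussian; >0 = fat tails

      # Fit a normal distribution for comparison with the empirical histogram.
      mu, sigma = stats.norm.fit(errors)
      print("normal fit: mu =", mu, "sigma =", sigma)

    Fat-tailed error distributions matter operationally because reserve levels sized from a Gaussian assumption would under-cover the large, rare forecast misses.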

  11. Using snowball sampling method with nurses to understand medication administration errors.

    PubMed

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported, and effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical-surgical wards of teaching hospitals, during day shifts, committed by nurses with fewer than two years' experience. The leading errors were wrong drugs and wrong doses, each accounting for about one-third of total errors. Among the 259 actual errors, 83.8% resulted in no adverse effects; among the remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and by the nurses responsible for the errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69), and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operational procedures for known high-alert situations.

  12. Reduced-cost second-order algebraic-diagrammatic construction method for excitation energies and transition moments

    NASA Astrophysics Data System (ADS)

    Mester, Dávid; Nagy, Péter R.; Kállay, Mihály

    2018-03-01

    A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.

  13. Method of calibrating an interferometer and reducing its systematic noise

    NASA Technical Reports Server (NTRS)

    Hammer, Philip D. (Inventor)

    1997-01-01

    Methods of operation and data analysis for an interferometer so as to eliminate the errors contributed by non-responsive or unstable pixels, interpixel gain variations that drift over time, and spurious noise that would otherwise degrade the operation of the interferometer are disclosed. The methods provide for either online or post-processing calibration. The methods apply prescribed reversible transformations that exploit the physical properties of interferograms obtained from said interferometer to derive a calibration reference signal for subsequent treatment of said interferograms for interpixel gain variations. A self-consistent approach for treating bad pixels is incorporated into the methods.

  14. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets.

    PubMed

    Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error, and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'.

  15. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets

    PubMed Central

    Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error, and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'. PMID:29787586

  16. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE PAGES

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if, and only if, the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb⁺-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10⁻⁴).

  17. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. The methodology was applied retrospectively to six NASA ground processing operations scenarios and thirty years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  18. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. The methodology was applied retrospectively to six NASA ground processing operations scenarios and thirty years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  19. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As part of quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. The methodology was applied retrospectively to six NASA ground processing operations scenarios and thirty years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  20. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    PubMed Central

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter

    2017-01-01

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466
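
    For reference, the diamond-norm threshold quoted in this record can be written out explicitly. The standard definition of the diamond-norm distance between the implemented operation E and the ideal operation U (conventions sometimes include an extra factor of 1/2) is:

      \| \mathcal{E} - \mathcal{U} \|_{\diamond}
        = \sup_{\rho} \left\| \left[ (\mathcal{E} - \mathcal{U}) \otimes \mathbb{1} \right](\rho) \right\|_{1}
        \le 6.7 \times 10^{-4}

    where the supremum runs over density operators on the qubit together with an equally sized ancilla, and \| \cdot \|_{1} is the trace norm. The ancilla is what makes the diamond norm sensitive to errors, such as coherent errors, that randomized benchmarking averages away.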

  1. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if, and only if, the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb⁺-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10⁻⁴).

  2. Anticipatory Neurofuzzy Control

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1994-01-01

    Technique of feedback control, called "anticipatory neurofuzzy control," developed for use in controlling flexible structures and other dynamic systems for which mathematical models of dynamics poorly known or unknown. Superior ability to act during operation to compensate for, and adapt to, errors in mathematical model of dynamics, changes in dynamics, and noise. Also offers advantage of reduced computing time. Hybrid of two older fuzzy-logic control techniques: standard fuzzy control and predictive fuzzy control.

  3. Summary of REAC Experience

    DTIC Science & Technology

    1949-09-08

    error from this source can be substantially reduced by the use of polystyrene insulating materials in the plugboard system of problem patching (Section ...) ... present at some point in the machine (see Section 5b). ... Our experience with the operation of the REAC indicates that ... utilization of the machine could be very significantly increased by a drastic revision of the patch bay. We propose to install a separable plugboard which ...

  4. Trauma Pod/Operating Room of the Future

    DTIC Science & Technology

    2006-02-01

    into C++ objects. OpenBinder software provided by ORNL was also used. This approach reduces the potential errors that might be introduced by ... publications can be found here. OSCAR has been used by developers at the Univ. of Texas, ORNL, NASA/Ames, and NASA/JSC. RRGKinematix, a single ... the last DH frame (at the wrist) is 70 mm. Position Travel Limits (degrees): these are software limits as specified by ORNL ...

  5. [Automatic Extraction and Analysis of Dosimetry Data in Radiotherapy Plans].

    PubMed

    Song, Wei; Zhao, Di; Lu, Hong; Zhang, Biyun; Ma, Jun; Yu, Dahai

    To improve the efficiency and accuracy of extracting and analyzing dosimetry data in radiotherapy plans for batches of patients. Using the interface functions provided by the Matlab platform, a program was written to extract dosimetry data exported from the treatment planning system in DICOM RT format and to export the dose-volume data to an Excel file in an SPSS-compatible format. The method was compared with manual operation for 14 gastric carcinoma patients to validate its efficiency and accuracy. The output Excel data were compatible with SPSS in format; the dosimetry data errors for the PTV dose interval of 90%-98%, the PTV dose interval of 99%-106%, and all OARs were -3.48E-5 ± 3.01E-5, -1.11E-3 ± 7.68E-4 and -7.85E-5 ± 9.91E-5, respectively. Compared with manual operation, the time required was reduced from 5.3 h to 0.19 h and the input error was reduced from 0.002 to 0. Automatic extraction of dosimetry data in DICOM RT format for batches of patients, SPSS-compatible data export and rapid analysis were achieved. This method will improve the efficiency of clinical research based on dosimetry data analysis of large numbers of patients.
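
    The extraction itself was done in Matlab; a comparable sketch in Python, assuming the pydicom package and a hypothetical RT Dose file name, might look like this:

      import pydicom

      # Read an RT Dose file exported from the treatment planning system.
      ds = pydicom.dcmread("RTDOSE.dcm")          # hypothetical file name

      # DICOM stores dose as scaled integers; DoseGridScaling converts to Gy.
      dose_gy = ds.pixel_array * float(ds.DoseGridScaling)

      print("dose grid shape:", dose_gy.shape)
      print("max dose: %.2f Gy" % dose_gy.max())

      # Crude threshold-based dose-volume statistic (illustrative only; a real
      # DVH needs the RT Structure Set to mask the PTV/OAR voxels).
      v95 = (dose_gy >= 0.95 * dose_gy.max()).mean() * 100
      print("volume receiving >= 95%% of max dose: %.1f%%" % v95)

    Writing such per-patient statistics to a single spreadsheet is what replaces the repetitive manual table lookups the paper measured at 5.3 h per batch.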

  6. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DOE PAGES

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte; ...

    2017-05-17

    This paper determines optimum aggregation areas for a given distribution network, considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators make faster operational decisions by understanding at which time of day they will need flexibility, from which specific area, and in what amount, but also enables the flexibility stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performance with and without aggregation. For a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error), in contrast to the very high errors associated with forecasts of individual consumer demand.
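
    The clustering step of the method can be illustrated in isolation; below is a minimal sketch with SciPy's hierarchical clustering on synthetic node data (the paper couples this with a genetic algorithm and Thevenin reduction, which are omitted here):

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      # Synthetic distribution-network node locations and loads (illustrative).
      rng = np.random.default_rng(4)
      xy = rng.uniform(0, 10, size=(1000, 2))       # node coordinates, km
      load = rng.uniform(1, 20, size=1000)          # node demand, kW

      # Ward-linkage hierarchical clustering on spatial position.
      Z = linkage(xy, method="ward")
      areas = fcluster(Z, t=12, criterion="maxclust")   # 12 aggregation areas

      # Aggregate demand per area: the "few aggregated loads" of the reduced network.
      for a in range(1, 13):
          print(f"area {a:2d}: {load[areas == a].sum():7.1f} kW,"
                f" {np.sum(areas == a):4d} nodes")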

  7. An Investigation into the Cognition Behind Spontaneous String Pulling in New Caledonian Crows

    PubMed Central

    Taylor, Alex H.; Medina, Felipe S.; Holzhaider, Jennifer C.; Hearne, Lindsay J.; Hunt, Gavin R.; Gray, Russell D.

    2010-01-01

    The ability of some bird species to pull up meat hung on a string is a famous example of spontaneous animal problem solving. The “insight” hypothesis claims that this complex behaviour is based on cognitive abilities such as mental scenario building and imagination. An operant conditioning account, in contrast, would claim that this spontaneity is due to each action in string pulling being reinforced by the meat moving closer and remaining closer to the bird on the perch. We presented experienced and naïve New Caledonian crows with a novel, visually restricted string-pulling problem that reduced the quality of visual feedback during string pulling. Experienced crows solved this problem with reduced efficiency and increased errors compared to their performance in standard string pulling. Naïve crows either failed or solved the problem by trial and error learning. However, when visual feedback was available via a mirror mounted next to the apparatus, two naïve crows were able to perform at the same level as the experienced group. Our results raise the possibility that spontaneous string pulling in New Caledonian crows may not be based on insight but on operant conditioning mediated by a perceptual-motor feedback cycle. PMID:20179759

  8. 3D acquisition and modeling for flint artefacts analysis

    NASA Astrophysics Data System (ADS)

    Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.

    2007-07-01

    In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations: first, a copy of a flint artefact is reproduced; the copy is then sliced; a picture is taken of each slice; and geometric information is finally determined manually from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step to obtain an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed, and each slice is automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g., shape, curvature, sharp edges), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enables accurate analysis.

  9. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte

    This paper determines optimum aggregation areas for a given distribution network, considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators make faster operational decisions by understanding at which time of day they will need flexibility, from which specific area, and in what amount, but also enables the flexibility stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation, where a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performance with and without aggregation. For a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error), in contrast to the very high errors associated with forecasts of individual consumer demand.

  10. Usability Tests in Medicine: A Cost-Benefit Analysis for Hospitals Before Acquiring Medical Devices for Theatre.

    PubMed

    Gonser, Phillipp; Fuchsberger, Thomas; Matern, Ulrich

    2017-08-01

    The use of active medical devices in clinical routine should be as safe and efficient as possible. Usability tests (UTs) help improve these aspects of medical devices during their development, but UTs can be of use for hospitals even after a product has been launched. The present pilot study examines the costs and possible benefits of UTs for hospitals before they buy new medical devices for theatre. Two active medical devices of different complexity were tested in a standardized UT, and a cost-benefit analysis was carried out assuming that a different device bought at the same price but with higher usability could increase the efficiency of task solving and thereby save valuable theatre time. The cost of the UT amounted to €19,400. Hospitals could benefit from UTs before buying new devices for theatre by reducing time-consuming operator errors and thereby increasing productivity and patient safety. The possible benefits ranged from €23,300 to €1,570,000 (median = €797,000). Not only could hospitals benefit economically from investing in a UT before deciding to buy a medical device, but patients especially would profit from higher usability through reduced operator errors and increased safety and performance of use.

  11. Performance consequences of alternating directional control-response compatibility: evidence from a coal mine shuttle car simulator.

    PubMed

    Zupanc, Christine M; Burgess-Limerick, Robin J; Wallis, Guy

    2007-08-01

    To investigate the error and reaction time consequences of alternating compatible and incompatible steering arrangements during a simulated obstacle avoidance task. Underground coal mine shuttle cars provide an example of a vehicle in which operators are required to alternate between compatible and incompatible steering configurations. This experiment examined the performance of 48 novice participants in a virtual analogue of an underground coal mine shuttle car. Participants were randomly assigned to a compatible condition, an incompatible condition, an alternating condition in which compatibility alternated within and between hands, or an alternating condition in which compatibility alternated between hands. Participants made fewer steering direction errors and made correct steering responses more quickly in the compatible condition. The error rate decreased over time in the incompatible condition. A compatibility effect for both errors and reaction time was also found when the control-response relationship alternated; however, performance improvements over time were not consistent. Isolating compatibility to one hand resulted in a reduced error rate and faster reaction times than when compatibility alternated within and between hands. The consequences of alternating control-response relationships are higher error rates and slower responses, at least in the early stages of learning. This research highlights the importance of ensuring consistently compatible human-machine directional control-response relationships.

  12. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    NASA Astrophysics Data System (ADS)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEUs) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault-injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program issued from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a NASA scientific satellite. The accuracy of the predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measurements from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) at Louvain-la-Neuve (Belgium).
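
    The prediction strategy reduces to simple arithmetic once the two experimental inputs are available: a per-bit static cross-section from beam testing, and the fraction of injected faults that produce an application failure. In the sketch below every number is a placeholder, not a measured value from the paper:

      # Inputs (hypothetical; real values come from beam tests and fault injection).
      sigma_seu = 1.0e-8        # static cross-section, cm^2 per bit, from ground testing
      n_bits = 2.0e6            # sensitive bits exercised by the application
      flux = 5.0e-3             # particle flux in the target environment, particles/cm^2/s
      n_injected = 100_000      # SEUs injected in the off-beam fault-injection campaign
      n_failures = 740          # injections that produced an application-level error

      p_fail = n_failures / n_injected                  # P(application error | SEU)
      seu_rate = sigma_seu * n_bits * flux              # raw upsets per second
      error_rate = seu_rate * p_fail                    # predicted application error rate
      print(f"predicted error rate: {error_rate:.3e} errors/s "
            f"({error_rate * 86400:.3e} per day)")

    Separating the two factors is what lets the expensive accelerator campaign be run once per device while new applications are characterized purely by software fault injection.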

  13. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

    Book chapter (2013): Background Error Correlation Modeling with Diffusion Operators, Chapter 8, by Max Yaremchuk. ... field, then a structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of ...

  14. Cleared for the visual approach: Human factor problems in air carrier operations

    NASA Technical Reports Server (NTRS)

    Monan, W. P.

    1983-01-01

    In the study described herein, a set of 353 ASRS reports of unique aviation occurrences significantly involving visual approaches was examined to identify hazards and pitfalls embedded in the visual approach procedure and to consider operational practices that might help avoid future mishaps. Analysis of the report set identified nine aspects of the visual approach procedure that appeared to be predisposing conditions for inducing or exacerbating the effects of operational errors by flight crew members or controllers. Predisposing conditions, errors, and operational consequences of the errors are discussed. In summary, operational policies that might mitigate the problems are examined.

  15. Evaluation of heterotrophic plate and chromogenic agar colony counting in water quality laboratories.

    PubMed

    Hallas, Gary; Monis, Paul

    2015-01-01

    The enumeration of bacteria using plate-based counts is a core technique used by food and water microbiology testing laboratories. However, manual counting of bacterial colonies is both time and labour intensive, can vary between operators, and requires manual entry of results into laboratory information management systems, which can be a source of data entry error. An alternative is to use automated digital colony counters, but there is a lack of peer-reviewed validation data to allow incorporation into standards. We compared the performance of digital counting technology (ProtoCOL3) against manual counting using criteria defined in internationally recognized standard methods. Digital colony counting provided a robust, standardized system suitable for adoption in a commercial testing environment. The digital technology has several advantages: improved measurement of uncertainty by using a standard and consistent counting methodology with less operator error; efficiency in labour and time (reduced cost); elimination of manual entry of data onto the LIMS; and faster result reporting to customers.

  16. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to concealment of any part of an object, or the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often encounter overlapping vehicles and, hence, occlusion. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object, making it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving-object detection and tracking approach to reduce such errors. The paper uses the successive frame subtraction technique for detection of moving objects and implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results are further improved by noise-removal and morphological operations.
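
    A minimal sketch of such a detection-plus-watershed pipeline with OpenCV (file names hypothetical; thresholds illustrative rather than the paper's):

      import cv2
      import numpy as np

      # Two consecutive frames from a static traffic camera (hypothetical files).
      prev = cv2.imread("frame_000.png")
      curr = cv2.imread("frame_001.png")

      # Successive frame subtraction -> binary foreground mask.
      diff = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
      _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
      mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

      # Watershed to split adjoining/occluded vehicles that morphology merged:
      # sure foreground from the distance transform, an unknown band in between.
      dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
      _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
      sure_fg = sure_fg.astype(np.uint8)
      unknown = cv2.subtract(cv2.dilate(mask, None, iterations=3), sure_fg)

      _, markers = cv2.connectedComponents(sure_fg)
      markers = markers + 1                 # reserve label 1 for background
      markers[unknown == 255] = 0           # unknown region gets label 0
      markers = cv2.watershed(curr, markers)  # object boundaries get label -1

      n_vehicles = markers.max() - 1        # labels 2..N are separated objects
      print("vehicles detected:", n_vehicles)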

  17. Effect of accuracy of wind power prediction on power system operator

    NASA Technical Reports Server (NTRS)

    Schlueter, R. A.; Sigari, G.; Costi, T.

    1985-01-01

    This research project proposed a modified unit commitment that schedules the connection and disconnection of generating units in response to load. A modified generation control is also proposed that controls steam units under automatic generation control; fast-responding diesels, gas turbines and hydro units under a feedforward control; and wind turbine array output under a closed-loop array control. This modified generation control and unit commitment require prediction of the trend wind power variation one hour ahead and prediction of the error in this trend wind power prediction one half hour ahead. An improved meter for predicting trend wind speed variation was developed. Methods for accurately simulating the wind array power from a limited number of wind speed prediction records were developed. Finally, two methods for predicting the error in the trend wind power prediction were developed. This research provides a foundation for testing and evaluating the modified unit commitment and generation control that was developed to maintain operating reliability at a greatly reduced overall production cost for utilities with wind generation capacity.

  18. The 'problem' with automation - Inappropriate feedback and interaction, not 'over-automation'

    NASA Technical Reports Server (NTRS)

    Norman, D. A.

    1990-01-01

    Automation in high-risk industry is often blamed for causing harm and increasing the chance of human error when failures occur. It is proposed that the problem is not the presence of automation, but rather its inappropriate design. The problem is that the operations are performed appropriately under normal conditions, but there is inadequate feedback and interaction with the humans who must control the overall conduct of the task. The problem is that the automation is at an intermediate level of intelligence, powerful enough to take over control which used to be done by people, but not powerful enough to handle all abnormalities. Moreover, its level of intelligence is insufficient to provide the continual, appropriate feedback that occurs naturally among human operators. To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate. The overall message is that it is possible to reduce error through appropriate design considerations.

  19. Highly precise acoustic calibration method of ring-shaped ultrasound transducer array for plane-wave-based ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Terada, Takahide; Yamanaka, Kazuhiro; Suzuki, Atsuro; Tsubota, Yushi; Wu, Wenjing; Kawabata, Ken-ichi

    2017-07-01

    Ultrasound computed tomography (USCT) is promising as a non-invasive, painless, operator-independent and quantitative system for breast-cancer screening. Assembly errors, production tolerances and aging-degradation variations of the hardware components, particularly in plane-wave-based USCT systems, may hamper cost effectiveness, precise imaging and robust operation. The plane wave is transmitted from a ring-shaped transducer array so that the signal is received at a high signal-to-noise ratio and the aperture synthesis is fast. There are four signal-delay components: response delays in the transmitters and receivers, and propagation delays that depend on the positions of the transducer elements and on their directivity. We developed a highly precise method for calibrating these delay components and evaluated it with our prototype plane-wave-based USCT system. Our calibration method was found to be effective in reducing delay errors: gaps and curves were eliminated from the plane wave, and echo images of wires were sharpened over the entire imaging area.

  20. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
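
    A minimal sketch of the test-mask idea in Python (names and mask bits are illustrative; the patent describes the concept generically, not this code):

      # Production code checks a mask that is all zeros in normal operation; a
      # test harness sets bits in the mask to force specific error paths, then
      # observes the application's error response.
      INJECT_CRC_ERROR = 0x01
      INJECT_TIMEOUT   = 0x02

      test_mask = 0x00                      # normal operation: no errors injected

      def read_sensor():
          if test_mask & INJECT_CRC_ERROR:  # bit flipped by the test harness
              raise IOError("injected CRC failure")
          if test_mask & INJECT_TIMEOUT:
              raise TimeoutError("injected timeout")
          return 42.0                       # nominal reading

      # Automated test: inject an error, then verify the handling path runs.
      test_mask = INJECT_CRC_ERROR
      try:
          read_sensor()
      except IOError as exc:
          print("application handled the injected error:", exc)
      test_mask = 0x00                      # restore normal operation

    Because the mask lives inside the application, error paths can be exercised without physically faulting hardware, which is the automation benefit the abstract describes.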

  1. Advanced Technology Training System on Motor-Operated Valves

    NASA Technical Reports Server (NTRS)

    Wiederholt, Bradley J.; Widjaja, T. Kiki; Yasutake, Joseph Y.; Isoda, Hachiro

    1993-01-01

    This paper describes how features from the field of Intelligent Tutoring Systems are applied to the Motor-Operated Valve (MOV) Advanced Technology Training System (ATTS). The MOV ATTS is a training system developed at Galaxy Scientific Corporation for the Central Research Institute of Electric Power Industry in Japan and the Electric Power Research Institute in the United States. The MOV ATTS combines traditional computer-based training approaches with system simulation, integrated expert systems, and student and expert modeling. The primary goal of the MOV ATTS is to reduce human errors that occur during MOV overhaul and repair. The MOV ATTS addresses this goal by providing basic operational information of the MOV, simulating MOV operation, providing troubleshooting practice of MOV failures, and tailoring this training to the needs of each individual student. The MOV ATTS integrates multiple expert models (functional and procedural) to provide advice and feedback to students. The integration also provides expert model validation support to developers. Student modeling is supported by two separate student models: one model registers and updates the student's current knowledge of basic MOV information, while another model logs the student's actions and errors during troubleshooting exercises. These two models are used to provide tailored feedback to the student during the MOV course.

  2. Long-range corrected density functional theory with accelerated Hartree-Fock exchange integration using a two-Gaussian operator [LC-ωPBE(2Gau)].

    PubMed

    Song, Jong-Won; Hirao, Kimihiko

    2015-10-14

    Since the advent of the hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of hybrid functionals has been further amplified by the resulting improved performance for orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error function operator, reduces computational time dramatically (e.g., about a 14-fold acceleration in a carbon-diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
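
    For context, the range separation underlying such LC schemes is the standard error-function partition of the Coulomb operator:

      \frac{1}{r_{12}}
        = \frac{1 - \operatorname{erf}(\mu r_{12})}{r_{12}}
        + \frac{\operatorname{erf}(\mu r_{12})}{r_{12}}

    where the first (short-range) term is treated with DFT exchange and the second (long-range) term with HF exchange, with μ the range-separation parameter. As described in the abstract, the paper's contribution is to replace the erf(μ r₁₂) attenuator in the long-range HF term with a fitted two-Gaussian function of similar shape; Gaussian attenuators make the exchange integrals much cheaper to evaluate, and the exact parametrization is given in the paper rather than here.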

  3. Notice of Violation of IEEE Publication PrinciplesJoint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    NASA Astrophysics Data System (ADS)

    Li, Lei; Hu, Jianhao

    2010-12-01

    Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath," by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems," by Vik Tor Goh and M. U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; and "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction," by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme that combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in the datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism and error correction, are exploited to establish the radiation-hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected using the redundant relationship among the residues at a correction module allocated at the end of the datapath. In the back-end implementation, the module isolation technique improves the soft-error-rate performance of the RRNS by physically separating the operation channels of different moduli. The case studies show at least an order-of-magnitude decrease in the soft error rate (SER) compared to non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10⁻¹² to 10⁻¹⁷ when the datapath comprises 10⁶ processing steps. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS reduces the operational complexity of the datapath.
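
    The RRNS error-correction idea referenced throughout this record can be sketched in a few lines; the moduli and the residue-deletion decoder below are illustrative only (practical RRNS decoders, as in the Goh/Siddiqi and Krishna et al. papers above, handle the general case):

      from math import prod

      # RRNS sketch: 2 information moduli + 2 redundant moduli, pairwise coprime.
      # Legitimate values fit in the information dynamic range M; a corrupted
      # residue pushes the CRT reconstruction outside [0, M), exposing the error.
      moduli = [13, 17, 19, 23]            # last two are redundant
      M = 13 * 17                          # information dynamic range

      def to_residues(x):
          return [x % m for m in moduli]

      def crt(residues, mods):
          # Chinese Remainder Theorem reconstruction over the given moduli.
          Mp = prod(mods)
          x = 0
          for r, m in zip(residues, mods):
              n = Mp // m
              x += r * n * pow(n, -1, m)   # modular inverse (Python 3.8+)
          return x % Mp

      x = 200
      rs = to_residues(x)
      rs[1] ^= 0b100                       # simulate an SEU flipping a residue bit

      # Single-error correction by residue deletion: drop one residue at a time;
      # a reconstruction landing inside the legitimate range identifies the answer.
      for skip in range(len(moduli)):
          sub = [r for i, r in enumerate(rs) if i != skip]
          mods = [m for i, m in enumerate(moduli) if i != skip]
          cand = crt(sub, mods)
          if cand < M:
              print(f"corrected value {cand} (residue {skip} flagged as erroneous)")

    The independence of the residue channels is what the hardening scheme exploits: each modulus can be computed in a physically separated module, so a multiple-bit upset confined to one channel corrupts at most one residue.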

  4. Buried waste integrated demonstration human engineered control station. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-09-01

    This document describes the Human Engineered Control Station (HECS) project activities, including the conceptual designs. The purpose of the HECS is to enhance the effectiveness and efficiency of remote retrieval by providing an integrated remote control station. The HECS integrates human capabilities, limitations and expectations into the design to reduce the potential for human error, provides a system that is easy to learn and operate, increases productivity, and reduces the ultimate investment in training. The overall HECS consists of the technology interface stations, supporting engineering aids, the platform (trailer), the communications network (broadband system) and the collision avoidance system.

  5. Helicopter crashes related to oil and gas operations in the Gulf of Mexico.

    PubMed

    Baker, Susan P; Shanahan, Dennis F; Haaland, Wren; Brady, Joanne E; Li, Guohua

    2011-09-01

    The hazards inherent in flight operations in the Gulf of Mexico prompted investigation of the number and circumstances of crashes related to oil and gas operations in the region. The National Transportation Safety Board (NTSB) database was queried for helicopter crashes during 1983 through 2009 related to Gulf of Mexico oil or gas production. The crashes were identified based on word searches confirmed by a narrative statement indicating that the flight was related to oil or gas operations. During 1983-2009, the NTSB recorded a total of 178 helicopter crashes related to oil and gas operations in the Gulf of Mexico, with an average of 6.6 crashes per year (5.6 annually during 1983-1999 vs. 8.2 during 2000-2009). The crashes resulted in a total of 139 fatalities, including 41 pilots. Mechanical failure was the most common precipitating factor, accounting for 68 crashes (38%). Bad weather led to 29 crashes (16%), in which 40% of the 139 deaths occurred. Pilot error was cited by the NTSB in 83 crashes (47%). After crashes or emergency landings on water, 15 helicopters sank when flotation devices were not activated automatically or by pilots. Mechanical failure, non-activation of flotation, and pilot error are major problems to be addressed if crashes and deaths in this lethal environment are to be reduced.

  6. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results, and we expect to see stronger effects at higher data rates. Temperature-based modeling of delay errors may further reduce temporal correlations in the data.

  7. Human Error: The Stakes Are Raised.

    ERIC Educational Resources Information Center

    Greenberg, Joel

    1980-01-01

    Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)

  8. An undulator based soft x-ray source for microscopy on the Duke electron storage ring

    NASA Astrophysics Data System (ADS)

    Johnson, Lewis Elgin

    1998-09-01

    This dissertation describes the design, development, and installation of an undulator-based soft x-ray source on the Duke Free Electron Laser laboratory electron storage ring. Insertion device and soft x-ray beamline physics and technology are discussed in detail. The Duke/NIST undulator is a 3.64-m-long hybrid design constructed by the Brobeck Division of Maxwell Laboratories. Originally built for an FEL project at the National Institute of Standards and Technology, the undulator was acquired by Duke in 1992 for use as a soft x-ray source for the FEL laboratory. Initial Hall probe measurements of the undulator's magnetic field distribution revealed field errors of more than 0.80%; initial phase errors for the device were more than 11 degrees. Through a series of in situ and off-line measurements and modifications we re-tuned the magnetic field structure of the device to produce strong spectral characteristics through the 5th harmonic. A low operating K served to reduce the effects of magnetic field errors on the harmonic spectral content. Although rms field errors remained at 0.75%, we succeeded in reducing phase errors to less than 5 degrees. Using trajectory simulations from the magnetic field data, we computed the spectral output resulting from the interaction of the Duke storage ring electron beam with the NIST undulator. Driven by a series of concerns and constraints over maximum utility, personnel safety, and funding, we also constructed a unique front-end beamline for the undulator. The front end was designed for maximum throughput of the 1st harmonic around 40 Å in its standard mode of operation, and it has an alternative mode that transmits the 3rd and 5th harmonics. This compact system also allows extraction of some of the bend-magnet-produced synchrotron and transition radiation from the storage ring and, as with any well designed front-end system, provides excellent protection to personnel and to the storage ring. A diagnostic beamline consisting of a transmission grating spectrometer and a scanning-wire beam profile monitor was constructed to measure the spatial and spectral characteristics of the undulator radiation. Tests of the system with a circulating electron beam have confirmed the magnetic and focusing properties of the undulator and verified that it can be used without perturbing the orbit of the beam.
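
    For context, the harmonic structure discussed above follows the standard undulator resonance condition, which is textbook material rather than anything specific to this dissertation:

      \lambda_n = \frac{\lambda_u}{2 n \gamma^2}
                  \left( 1 + \frac{K^2}{2} + \gamma^2 \theta^2 \right),
      \qquad n = 1, 3, 5, \ldots

    where λ_u is the undulator period, γ the electron Lorentz factor, K the deflection parameter, θ the observation angle, and n the harmonic number. An often-quoted estimate is that an rms phase error σ_φ suppresses the n-th harmonic intensity by roughly exp(−n²σ_φ²), which is why reducing phase errors from 11 to below 5 degrees matters most for the 5th harmonic output described above.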

  9. [Safety management in pathology laboratory: from specimen handling to confirmation of reports].

    PubMed

    Minato, Hiroshi; Nojima, Takayuki; Nakano, Mariko; Yamazaki, Michiko

    2011-03-01

    Medical errors in pathological diagnosis cause enormous physical and psychological damage to patients as well as to medical staff. We discuss here how to avoid medical errors in the surgical pathology laboratory, based on our experience. Handling surgical specimens and the diagnostic process are labor-intensive and involve many steps. Each hospital reports many kinds of accidents or incidents; however, many laboratories share common problems, and each process carries its own specific risk of error. We analyzed the problems in each process and concentrated on avoiding misaccessioning, mislabeling, and misreporting. We have made several changes to our system, such as barcode labels, digital images of all specimens, placing endoscopic biopsy specimens directly into embedding cassettes, and using a multitissue control block as the control in immunohistochemistry. Some problems remain, but we have reduced errors by minimizing the number of manual operations. A pathology system that tracks whether clinicians have read each pathology report is now under construction. We also discuss quality assurance of diagnosis, cooperation with clinicians and other co-medical staff, and organization and methods. For safe operation, it is important for all medical staff to share an awareness of the problems, to maintain careful observation, and to share all relevant information. Incorporating an organizational management tool such as ISO 15189 and using the PDCA cycle is also helpful for safety management and quality improvement in the laboratory.

  10. Cost-effectiveness of the U.S. Geological Survey stream-gaging program in Indiana

    USGS Publications Warehouse

    Stewart, J.A.; Miller, R.L.; Butch, G.K.

    1986-01-01

    Analysis of the stream-gaging program in Indiana was divided into three phases. The first phase involved collecting information concerning the data need and the funding source for each of the 173 surface-water stations in Indiana. The second phase used alternate methods to produce streamflow records at selected sites. Statistical models were used to generate streamflow data for three gaging stations. In addition, flow-routing models were used at two of the sites. Daily discharges produced from the models did not meet the established accuracy criteria and, therefore, these methods should not replace stream-gaging procedures at those gaging stations. The third phase of the study determined the uncertainty of the rating and the error at individual gaging stations, and optimized travel routes and frequency of visits to gaging stations. The annual budget, in 1983 dollars, for operating the stream-gaging program in Indiana is $823,000. The average standard error of instantaneous discharge for all continuous-record gaging stations is 25.3%. A budget of $800,000 could maintain this level of accuracy if stream-gaging stations were visited according to the phase III results. A minimum budget of $790,000 is required to operate the gaging network; at this budget, the average standard error of instantaneous discharge would be 27.7%. A maximum budget of $1,000,000 was simulated in the analysis, and the average standard error of instantaneous discharge was reduced to 16.8%. (Author's abstract)

  11. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive facilities in the oil and gas complex are extremely pressing. The problem is especially acute for facilities where a loss of DE accuracy would inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of DE operational accuracy is one of the main elements of the industrial safety management system. This work addresses the problem of selecting the optimal variant of the error-detection system according to a validation criterion. Known methods for solving such problems have exponential complexity. Therefore, to reduce the time needed to solve the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems; their advantages include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on their combination [1].

  12. Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Bryant, Larry

    2014-01-01

    Command File Errors (CFEs) are managed using standard risk-management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, composed of the hardware, software, and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. These multi-mission models are customized to a sample interplanetary spacecraft for this purpose.

  13. ALT space shuttle barometric altimeter altitude analysis

    NASA Technical Reports Server (NTRS)

    Killen, R.

    1978-01-01

    The accuracy of the barometric altimeters onboard the space shuttle orbiter was analyzed. Altitude estimates from the air data systems, including the operational instrumentation and the developmental flight instrumentation, were obtained for each of the approach and landing test flights. By comparing the barometric altitude estimates to altitudes derived from radar tracking data filtered through a Kalman filter and fully corrected for atmospheric refraction, the errors in the barometric altitudes were shown to be 4 to 5 percent of the Kalman altitudes. By comparing the altitude determined from the true atmosphere, derived from weather balloon data, to the altitude determined from the U.S. Standard Atmosphere of 1962, it was determined that the assumptions of the Standard Atmosphere equations contribute roughly 75 percent of the total error in the barometric estimates. After correcting the barometric altitude estimates using an average summer model atmosphere computed for the average latitude of the space shuttle landing sites, the residual error in the altitude estimates was reduced to less than 373 feet. This corresponds to an error of less than 1.5 percent for altitudes above 4000 feet for all flights.

  14. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, the inputs being a surface error map and an influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we combine the two steps in a single optimization through a constrained nonlinear optimization model that takes both the two-norm of the residual surface error and the dwell-time gradient as the objective function. This enables machine dynamics limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing time. Simulations are presented to demonstrate the feasibility of the model, and the velocity map reinterpreted from the dwell time meets the velocity requirements and the acceleration/deceleration limitations. The model and algorithm can also be applied to other computer-controlled subaperture methods.
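
    To make the combined formulation concrete, the following is a minimal one-dimensional sketch of such a dwell-time optimization: a nonnegative dwell time is sought so that the influence function convolved with the dwell time matches the desired removal, with a gradient penalty standing in for the machine-dynamics constraint. The error profile, influence width, and weight lam below are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.optimize import lsq_linear

      n = 100
      x = np.linspace(-1.0, 1.0, n)
      target = 0.5 + 0.4*np.sin(3*np.pi*x)            # desired removal (illustrative surface error)
      influence = np.exp(-0.5*(x/0.05)**2)            # assumed Gaussian influence function
      A = np.column_stack([np.roll(influence, k) for k in range(n)])  # circulant convolution matrix
      D = np.diff(np.eye(n), axis=0)                  # forward-difference (dwell-time gradient) operator
      lam = 0.1                                       # gradient-penalty weight (machine-dynamics proxy)
      A_aug = np.vstack([A, np.sqrt(lam)*D])
      b_aug = np.concatenate([target, np.zeros(n - 1)])
      res = lsq_linear(A_aug, b_aug, bounds=(0.0, np.inf))  # dwell time must be nonnegative
      print("residual form error (2-norm):", np.linalg.norm(A @ res.x - target))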

  15. Design Considerations for a Dedicated Gravity Recovery Satellite Mission Consisting of Two Pairs of Satellites

    NASA Technical Reports Server (NTRS)

    Wiese, D. N.; Nerem, R. S.; Lemoine, F. G.

    2011-01-01

    Future satellite missions dedicated to measuring time-variable gravity will need to address the concern of temporal aliasing errors; i.e., errors due to high-frequency mass variations. These errors have been shown to be a limiting error source for future missions with improved sensors. One method of reducing them is to fly multiple satellite pairs, thus increasing the sampling frequency of the mission. While one could imagine a system architecture consisting of dozens of satellite pairs, this paper explores the more economically feasible option of optimizing the orbits of two pairs of satellites. While the search space for this problem is infinite by nature, steps have been made to reduce it via proper assumptions regarding some parameters and a large number of numerical simulations exploring appropriate ranges for other parameters. A search space originally consisting of 15 variables is reduced to two variables with the utmost impact on mission performance: the repeat period of both pairs of satellites (shown to be near-optimal when they are equal to each other), as well as the inclination of one of the satellite pairs (the other pair is assumed to be in a polar orbit). To arrive at this conclusion, we assume circular orbits, repeat groundtracks for both pairs of satellites, a 100-km inter-satellite separation distance, and a minimum allowable operational satellite altitude of 290 km based on a projected 10-year mission lifetime. Given the scientific objectives of determining time-variable hydrology, ice mass variations, and ocean bottom pressure signals with higher spatial resolution, we find that an optimal architecture consists of a polar pair of satellites coupled with a pair inclined at 72°, both in 13-day repeating orbits. This architecture provides a 67% reduction in error over one pair of satellites, in addition to reducing the longitudinal striping to such a level that minimal post-processing is required, permitting a substantial increase in the spatial resolution of the gravity field products. It should be emphasized that given different sets of scientific objectives for the mission, or a different minimum allowable satellite altitude, different architectures might be selected.

  16. Identifying Human Factors Issues in Aircraft Maintenance Operations

    NASA Technical Reports Server (NTRS)

    Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)

    1995-01-01

    Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986 and 1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not), as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding which variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factors issues involved.

  17. Medical Error Avoidance in Intraoperative Neurophysiological Monitoring: The Communication Imperative.

    PubMed

    Skinner, Stan; Holdefer, Robert; McAuliffe, John J; Sala, Francesco

    2017-11-01

    Error avoidance in medicine follows rules similar to those that apply to the design and operation of other complex systems. The error-reduction concepts that best fit the conduct of testing during intraoperative neuromonitoring are forgiving design (reversibility of signal loss to avoid/prevent injury) and system redundancy (reduction of false reports by the multiplication of the error rates of tests independently assessing the same structure). However, error reduction in intraoperative neuromonitoring is complicated by the dichotomous roles (and biases) of the neurophysiologist (test recording and interpretation) and surgeon (intervention). This "interventional cascade" can be given as follows: test → interpretation → communication → intervention → outcome. Observational and controlled trials within operating rooms demonstrate that optimized communication, collaboration, and situational awareness result in fewer errors. Well-functioning operating room collaboration depends on familiarity and trust among colleagues. Checklists represent one method to initially enhance communication and avoid obvious errors. All intraoperative neuromonitoring supervisors should strive to use sufficient means to secure situational awareness and trusted communication/collaboration. Face-to-face audiovisual teleconnections may help repair deficiencies when a particular practice model disallows personal operating room availability. All supervising intraoperative neurophysiologists need to reject an insular, deferential, or distant mindset.

  18. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    PubMed

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.

  19. Guidewire retention following central venous catheterisation: a human factors and safe design investigation.

    PubMed

    Horberry, Tim; Teng, Yi-Chun; Ward, James; Patil, Vishal; Clarkson, P John

    2014-01-01

    Central Venous Catheterisation (CVC) has occasionally been associated with cases of retained guidewires in patients after surgery. In theory, this is a completely avoidable complication; however, as with any human procedure, operator error leading to guidewires occasionally being retained cannot be fully eliminated. The work described here investigated the issue in an attempt to better understand it from both an operator and a systems perspective, and ultimately to recommend appropriate safe-design solutions that reduce guidewire retention errors. Nine distinct methods were used: observations of the procedure, a literature review, interviews with CVC end-users, construction of a task analysis, CVC procedural audits, two human reliability assessments, usability heuristics, and a comprehensive solution survey with CVC end-users. The three solutions that operators rated most highly, in terms of both practicality and effectiveness, were: making trainees better aware of potential guidewire complications and strongly emphasising guidewire removal in CVC training; actively checking that the guidewire is present in the waste tray for disposal; and standardising the purchase of central line sets so that differences that may affect the chance of guidewire loss are minimised. Further work to eliminate or engineer out the possibility of retained guidewires is proposed.

  20. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays.

    PubMed

    Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok

    2016-01-26

    The zero-potential scanning circuit is widely used as the read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type uses digital buffers as the row driver. It is easy to implement because of its simple structure, but we found that it can cause a large read-out error originating from the on-resistance of the digital buffers. The other type uses a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it requires a large number of operational amplifiers to drive the rows of the sensor array, which severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver that uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show a maximum error of only 0.1%, remarkably reduced from the 30.7% of the previous counterpart.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko, E-mail: hirao@riken.jp

    Since the advent of the hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of hybrid functionals has been further amplified due to the resulting improved performance on orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error-function operator, reduces computational time dramatically (e.g., about 14 times acceleration in a C diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
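
    As a rough numerical illustration of the idea (not the authors' operator, whose parameterization may well differ), one can least-squares fit a two-Gaussian attenuation factor to the error-function attenuation erf(mu*r) used in range-separated hybrids; the value of mu and the fit form below are assumptions.

      import numpy as np
      from scipy.special import erf
      from scipy.optimize import curve_fit

      mu = 0.33                                       # range-separation parameter (assumed)
      r = np.linspace(0.05, 20.0, 400)

      def atten(r, c1, a1, c2, a2):
          # Two-Gaussian stand-in for the erf attenuation factor (form assumed here)
          return 1.0 - c1*np.exp(-a1*r**2) - c2*np.exp(-a2*r**2)

      popt, _ = curve_fit(atten, r, erf(mu*r), p0=[0.5, 0.05, 0.5, 0.5])
      dev = np.max(np.abs(atten(r, *popt) - erf(mu*r)))
      print("fitted (c1, a1, c2, a2):", popt, "max deviation:", dev)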

  2. An all-joint-control master device for single-port laparoscopic surgery robots.

    PubMed

    Shim, Seongbo; Kang, Taehun; Ji, Daekeun; Choi, Hyunseok; Joung, Sanghyun; Hong, Jaesung

    2016-08-01

    Robots for single-port laparoscopic surgery (SPLS) typically have all of their joints located inside abdomen during surgery, whereas with the da Vinci system, only the tip part of the robot arm is inserted and manipulated. A typical master device that controls only the tip with six degrees of freedom (DOFs) is not suitable for use with SPLS robots because of safety concerns. We designed an ergonomic six-DOF master device that can control all of the joints of an SPLS robot. We matched each joint of the master, the slave, and the human arm to decouple all-joint motions of the slave robot. Counterbalance masses were used to reduce operator fatigue. Mapping factors were determined based on kinematic analysis and were used to achieve all-joint control with minimal error at the tip of the slave robot. The proposed master device has two noteworthy features: efficient joint matching to the human arm to decouple each joint motion of the slave robot and accurate mapping factors, which can minimize the trajectory error of the tips between the master and the slave. We confirmed that the operator can manipulate the slave robot intuitively with the master device and that both tips have similar trajectories with minimal error.

  3. Surgical specimen handover from the operating theatre to laboratory-Can we improve patient safety by learning from aviation and other high-risk organisations?

    PubMed

    Brennan, Peter A; Brands, Marieke T; Caldwell, Lucy; Fonseca, Felipe Paiva; Turley, Nic; Foley, Susie; Rahimi, Siavash

    2018-02-01

    Essential communication between healthcare staff is considered one of the key requirements for both safety and quality of care when patients are handed over from one clinical area to another. This is particularly important in environments such as the operating theatre and intensive care, where mistakes can be devastating. Health care has learned from other high-risk organisations (HROs) such as aviation, where the use of checklists and human-factors awareness has virtually eliminated human error and mistakes. To our knowledge, little has been published on ways to improve pathology specimen handover following surgery, with pathology request forms often conveying the bare minimum of information to assist the laboratory staff. Furthermore, the request form might not warn staff about potential hazards. In this article, we provide a brief summary of the factors involved in human error and introduce a novel checklist that can be readily completed at the same time as the routine pathology request form. This additional measure enhances safety, can help to reduce processing and mislabelling errors, and provides essential information in a structured way, assisting both laboratory staff and pathologists when handling head and neck surgical specimens.

  4. Impact of shorter wavelengths on optical quality for LAWS

    NASA Technical Reports Server (NTRS)

    Wissinger, Alan B.; Noll, Robert J.; Tsacoyeanes, James G.; Tausanovitch, Jeanette R.

    1993-01-01

    This study explores parametrically, as a function of wavelength, the degrading effects of several common optical aberrations (defocus, astigmatism, wavefront tilts, etc.), using the heterodyne mixing efficiency factor as the merit function. A 60 cm diameter aperture beam expander with an expansion ratio of 15:1 and a primary mirror focal ratio of f/2 was designed for the study. An HDOS copyrighted analysis program determined the value of the merit function for various optical misalignments. With sensitivities provided by the analysis, preliminary error budget and tolerance allocations were made for potential optical wavefront errors and boresight errors during laser shot transit time. These were compared with the baseline 1.5 m CO2 LAWS and the optical fabrication state of the art (SOA) as characterized by the Hubble Space Telescope. Reducing the wavelength and changing the optical design resulted in optical quality tolerances within the SOA at both 2 and 1 micrometers. However, advanced sensing and control devices would be necessary to maintain on-orbit alignment. The optical tolerance for maintaining boresight stability would have to be tightened by a factor of 1.8 for a 2 micrometer system and by 3.6 for a 1 micrometer system relative to the baseline CO2 LAWS. Available SOA components could be used for operation at 2 micrometers, but operation at 1 micrometer does not appear feasible.

  5. High-performance etching of multilevel phase-type Fresnel zone plates with large apertures

    NASA Astrophysics Data System (ADS)

    Guo, Chengli; Zhang, Zhiyu; Xue, Donglin; Li, Longxiang; Wang, Ruoqiu; Zhou, Xiaoguang; Zhang, Feng; Zhang, Xuejun

    2018-01-01

    To ensure the etching depth uniformity of large-aperture Fresnel zone plates (FZPs) with controllable depths, a combination of a point-source ion beam with a dwell-time algorithm has been proposed. Based on the measured distribution of the removal function, the dwell-time algorithm optimizes the etching time matrix by minimizing the root-mean-square error between the simulation results and the design value. Owing to the convolution operation in the algorithm, the etching depth error is insensitive to fluctuations in the etching rate of the ion beam, thereby relaxing the requirement on the etching stability of the ion system. As a result, a 4-level FZP with a circular aperture of 300 mm was fabricated. The results showed that the etching depth nonuniformity over the full aperture could be reduced to below 1%, which was sufficiently accurate to meet the use requirements of FZPs. The proposed etching method may serve as an alternative way of etching high-precision diffractive optical elements with large apertures.

  6. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
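
    For readers unfamiliar with the technique, a generic EKF predict/update step is sketched below; the toy one-state system is invented for illustration and is unrelated to the C-MAPSS40k engine model.

      import numpy as np

      def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
          # One predict/update cycle of a discrete-time extended Kalman filter.
          x_pred = f(x)                              # propagate state through nonlinear model
          F = F_jac(x)
          P_pred = F @ P @ F.T + Q                   # linearized covariance prediction
          H = H_jac(x_pred)
          S = H @ P_pred @ H.T + R                   # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
          x_new = x_pred + K @ (z - h(x_pred))       # measurement correction
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Invented one-state toy system, purely for illustration
      f = lambda x: np.array([0.9*x[0] + 0.1*np.sin(x[0])])
      F_jac = lambda x: np.array([[0.9 + 0.1*np.cos(x[0])]])
      h = lambda x: np.array([x[0]**2])
      H_jac = lambda x: np.array([[2.0*x[0]]])
      x, P = np.array([1.0]), np.eye(1)
      Q, R = 0.01*np.eye(1), 0.1*np.eye(1)
      x, P = ekf_step(x, P, np.array([0.8]), f, F_jac, h, H_jac, Q, R)
      print("state estimate:", x, "covariance:", P)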

  7. Wave field synthesis of a virtual source located in proximity to a loudspeaker array.

    PubMed

    Lee, Jung-Min; Choi, Jung-Woo; Kim, Yang-Hann

    2013-09-01

    For the derivation of the 2.5-dimensional operator in wave field synthesis, a virtual source is assumed to be positioned far from the loudspeaker array. However, such a far-field approximation inevitably results in a reproduction error when the virtual source is placed adjacent to the array. In this paper, a method is proposed to generate a virtual source close to and behind a continuous line array of loudspeakers. A driving function is derived by reducing a surface integral (Rayleigh integral) to a line integral based on a near-field assumption. The solution is then combined with the far-field formula of wave field synthesis by introducing a weighting function that can adjust the near-field and far-field contributions of each driving function. This enables production of a virtual source anywhere in relation to the array. Simulations show the proposed method can reduce the reproduction error to below -18 dB, regardless of the virtual source position.

  8. Symmetry boost of the fidelity of Shor factoring

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2018-05-01

    In Shor's algorithm, quantum subroutines occur with the structure F U F^-1, where F is a unitary transform and U performs a quantum computation. Examples are quantum adders and subunits of quantum modulo adders. In this paper we show, both analytically and numerically, that if, in analogy to spin echoes, F and F^-1 can be implemented symmetrically when executing Shor's algorithm on actual, imperfect quantum hardware, such that F and F^-1 have the same hardware errors, a symmetry boost in the fidelity of the combined F U F^-1 quantum operation results when compared to the case in which the errors in F and F^-1 are independently random. Running the complete gate-by-gate implemented Shor algorithm, we show that the symmetry-induced fidelity boost can be as large as a factor of 4. While most of our analytical and numerical results concern the case of over- and under-rotation of controlled rotation gates, in the numerically accessible case of Shor's algorithm with a small number of qubits, we show explicitly that the symmetry boost is robust with respect to more general types of errors. While, expectedly, additional error types reduce the symmetry boost, we show explicitly, by implementing general off-diagonal SU(N) errors (N = 2, 4, 8), that the boost factor scales like a Lorentzian in δ/σ, where σ and δ are the error strengths of the diagonal over- and under-rotation errors and the off-diagonal SU(N) errors, respectively. The Lorentzian shape also shows that, while the boost factor may become small with increasing δ, it declines slowly (essentially like a power law) and is never completely erased. We also investigate the effect of diagonal nonunitary errors, which, in analogy to unitary errors, reduce but never erase the symmetry boost. Going beyond the case of small quantum processors, we present analytical scaling results showing that the symmetry boost persists in the practically interesting case of a large number of qubits. We illustrate this result explicitly for the case of Shor factoring of the semiprime RSA-1024, where, analytically, focusing on over- and under-rotation errors, we obtain a boost factor of about 10. In addition, we provide a proof of the fidelity product formula, including its range of applicability.

  9. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  10. Using warnings to reduce categorical false memories in younger and older adults.

    PubMed

    Carmichael, Anna M; Gutchess, Angela H

    2016-07-01

    Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.

  11. An examination of the operational error database for air route traffic control centers.

    DOT National Transportation Integrated Search

    1993-12-01

    Monitoring the frequency and determining the causes of operational errors - defined as the loss of prescribed separation between aircraft - is one approach to assessing the operational safety of the air traffic control system. The Federal Aviation Ad...

  12. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine the systematic errors that occur in measurements by EIT systems. The approach is based on very simple, scalable resistive phantoms for EIT systems using a 16-electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurement, and the trans-impedance of each phantom is determined by only one component. It can be chosen independently of the input and output impedance, which can be set to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of electrode contact impedance on the resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel, and the signal-to-noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the level of stochastic noise, since the Goe-MF II system had been optimized for a sufficient signal-to-noise ratio but not for accuracy. In time-difference imaging and functional EIT (f-EIT), systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT), where the resistivity of the examined object is determined on an absolute scale. We conclude that reduction of systematic errors has to be a major goal in future system design.

  13. Anemia Causes Hypoglycemia in Intensive Care Unit Patients Due to Error in Single-Channel Glucometers: Methods of Reducing Patient Risk

    DTIC Science & Technology

    2010-01-01

    hematocrit, low oxygen tension, acetaminophen, uric acid, ascorbic acid, maltose, galactose, xylose, lactose, operator inexperience, age of strips, heat... Biomedical, Waltham, MA) that corrects for the effects of anemia, low oxygen tension, acetaminophen, uric acid, ascorbic acid, maltose, galactose, xylose, and... resulted in inappropriately high glucometer values (data not shown). The effects of interfering substances (acetaminophen, uric acid, ascorbic acid...

  14. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    USGS Publications Warehouse

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
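
    The core numerical idea, regularized linear regression computed through the singular value decomposition of the predictor matrix, can be sketched as follows. The synthetic data mimic the few-plots/many-correlated-predictors setting; this is not the authors' full Bayesian model, whose priors and variance components are richer.

      import numpy as np

      def svd_ridge(X, y, lam):
          # Ridge regression via SVD: small singular values are damped by the
          # regularizer instead of being inverted, stabilizing multicollinear fits.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          d = s / (s**2 + lam)                        # filter factors replacing 1/s
          return Vt.T @ (d * (U.T @ y))

      rng = np.random.default_rng(0)
      n, p = 50, 200                                  # few field plots, many LiDAR predictors
      base = rng.normal(size=(n, 10))
      X = np.repeat(base, 20, axis=1) + 0.01*rng.normal(size=(n, p))  # strongly correlated columns
      beta = np.zeros(p); beta[:10] = 1.0
      y = X @ beta + 0.1*rng.normal(size=n)
      print("first coefficients:", svd_ridge(X, y, lam=1.0)[:5])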

  15. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  16. How Distinctive Processing Enhances Hits and Reduces False Alarms

    PubMed Central

    Hunt, R. Reed; Smith, Rebekah E.

    2015-01-01

    Distinctive processing is a concept designed to account for precision in memory, both correct responses and avoidance of errors. The principal question addressed in two experiments is how distinctive processing of studied material reduces false alarms to familiar distractors. Jacoby (Jacoby, Kelley, & McElree, 1999) has used the metaphors early selection and late correction to describe two different types of control processes. Early selection refers to limitations on access, whereas late correction describes controlled monitoring of accessed information. The two types of processes are not mutually exclusive, and previous research has provided evidence for the operation of both. The data reported here extend previous work to a criterial recollection paradigm and to a recognition memory test. The results of both experiments show that variables that reduce false memory for highly familiar distractors continue to exert their effect under conditions of minimal post-access monitoring. Level of monitoring was reduced in the first experiment through test instructions and in the second experiment through speeded test responding. The results were consistent with the conclusion that both early selection and late correction operate to control accuracy in memory. PMID:26034343

  17. Psychophysiological and other factors affecting human performance in accident prevention and investigation. [Comparison of aviation with other industries]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klinestiver, L.R.

    Psychophysiological factors are not uncommon terms in the aviation incident/accident investigation sequence where human error is involved. It is strongly suspected that the same psychophysiological factors also exist in the industrial arena where operator personnel function, but there is little evidence in the literature indicating how management and subordinates cope with these factors to prevent or reduce accidents. Human-factors psychophysiological training is well established in the aviation industry. However, while the industrial arena appears to analyze psychophysiological factors in accident investigations, there is little evidence that established training programs exist for supervisors and operator personnel.

  18. System performance conclusions

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.

    1980-01-01

    The advantages and disadvantages of reducing power levels and of using antennas with diameters smaller than 1 km were evaluated; if rectenna costs and land-usage requirements become major factors, operating at 5800 megahertz should be considered. Three sequences (random, incoherent phasing, and concentric rings from center to edge) provided satisfactory performance in that the resultant sidelobe levels during startup/shutdown were lower than the steady-state levels present during normal operations. Grating lobe peaks and scattered power levels were used to determine the array/subarray mechanical alignment requirements. The antenna alignment requirement is 1 min or 3 min, depending on the phase control configuration. System error parameters were defined to minimize scattered microwave power.

  1. Mistake proofing: changing designs to reduce error

    PubMed Central

    Grout, J R

    2006-01-01

    Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609

  2. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors students make. Hence, this study aimed to identify students' errors in solving TIMSS mathematical problems on the topic of numbers, a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students, out of 34 8th graders, who made the most errors on the test indicators. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, students made operational errors. For Reasoning-level problems, three types of errors were made: conceptual errors, operational errors, and principle errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  3. Optimal subsystem approach to multi-qubit quantum state discrimination and experimental investigation

    NASA Astrophysics Data System (ADS)

    Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun

    2018-02-01

    Quantum computing is a significant computing capability that is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that perfect discrimination cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimal measurement method, one requiring fewer repetitions and giving a lower error rate. However, extending current measurement approaches, mainly aimed at quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as conducting global operations, which have considerable costs in the experimental realm. Therefore, in this study, we propose an optimal subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method with the global minimum-error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced, in most circumstances with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
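
    For reference, the global minimum-error benchmark for two states is the Helstrom bound, which follows from the trace norm of the weighted difference of the density matrices; the sketch below computes it for two illustrative single-qubit states (not the paper's two-qubit examples).

      import numpy as np

      def helstrom_error(rho0, rho1, p0=0.5):
          # Minimum discrimination error for two states with priors p0 and 1-p0:
          # P_err = (1 - ||p0*rho0 - (1-p0)*rho1||_1) / 2
          diff = p0*rho0 - (1.0 - p0)*rho1
          return 0.5*(1.0 - np.sum(np.abs(np.linalg.eigvalsh(diff))))

      # Two non-orthogonal single-qubit pure states, |0> and |+>
      ket0 = np.array([1.0, 0.0])
      ketp = np.array([1.0, 1.0])/np.sqrt(2.0)
      rho0, rho1 = np.outer(ket0, ket0), np.outer(ketp, ketp)
      print("Helstrom error:", helstrom_error(rho0, rho1))   # about 0.146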

  4. Unbiased Taxonomic Annotation of Metagenomic Samples

    PubMed Central

    Fosso, Bruno; Pesole, Graziano; Rosselló, Francesc

    2018-01-01

    The classification of reads from a metagenomic sample using a reference taxonomy is usually based on first mapping the reads to the reference sequences and then classifying each read at a node under the lowest common ancestor of the candidate sequences in the reference taxonomy with the least classification error. However, this taxonomic annotation can be biased by an imbalanced taxonomy and also by the presence of multiple nodes in the taxonomy with the least classification error for a given read. In this article, we show that the Rand index is a better indicator of classification error than the often-used area under the receiver operating characteristic (ROC) curve and F-measure for both balanced and imbalanced reference taxonomies, and we also address the second source of bias by reducing the taxonomic annotation problem for a whole metagenomic sample to a set cover problem, for which a logarithmic approximation can be obtained in linear time and an exact solution can be obtained by integer linear programming. Experimental results with a proof-of-concept implementation of the set cover approach to taxonomic annotation in a next release of the TANGO software show that the set cover approach further reduces ambiguity in the taxonomic annotation obtained with TANGO without distorting the relative abundance profile of the metagenomic sample. PMID:29028181
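
    The logarithmic approximation mentioned above is the classic greedy algorithm for set cover; a minimal sketch follows, with invented read/taxon names and no connection to the TANGO code base.

      def greedy_set_cover(universe, sets):
          # Repeatedly pick the set covering the most still-uncovered elements;
          # the classic ln(n)-approximation to minimum set cover.
          uncovered, chosen = set(universe), []
          while uncovered:
              best = max(sets, key=lambda name: len(sets[name] & uncovered))
              if not sets[best] & uncovered:
                  raise ValueError("universe not coverable by the given sets")
              chosen.append(best)
              uncovered -= sets[best]
          return chosen

      reads = {"r1", "r2", "r3", "r4", "r5"}                        # reads to annotate
      taxa = {"A": {"r1", "r2"}, "B": {"r2", "r3", "r4"}, "C": {"r4", "r5"}}
      print(greedy_set_cover(reads, taxa))                          # e.g. ['B', 'A', 'C']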

  5. A staggered-grid finite-difference scheme optimized in the time–space domain for modeling scalar-wave propagation in geophysical problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov

    For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, but not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling that controls dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function that minimizes the relative errors of the phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than optimized schemes using the standard stencil at similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent at similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
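
    To make the notion of phase-velocity dispersion error concrete, the sketch below evaluates the spatial error of a standard (Taylor-coefficient) 4th-order staggered-grid first-derivative stencil in one dimension; the paper's optimized coefficients would replace the classic values used here.

      import numpy as np

      # Classic 4th-order staggered-grid first-derivative coefficients (Taylor values)
      c = np.array([9.0/8.0, -1.0/24.0])
      kh = np.linspace(0.01, np.pi, 200)              # normalized wavenumber k*h
      # Effective wavenumber of the stencil acting on a plane wave exp(i*k*x)
      keff = 2.0*(c[0]*np.sin(0.5*kh) + c[1]*np.sin(1.5*kh))
      rel_err = keff/kh - 1.0                         # relative phase-velocity error (space only)
      mask = kh <= np.pi/2                            # wavenumber range of practical interest
      print("max |dispersion error| up to kh = pi/2:", np.max(np.abs(rel_err[mask])))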

  6. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    PubMed

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
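
    As a worked illustration of the error-rate and relative-risk-reduction definitions used in the study (the counts below are hypothetical, chosen only to reproduce an RRR near the reported 22%):

      def error_rate(errors, opportunities):
          # Rate as defined in the study: errors in a process / opportunities for error, times 100
          return 100.0*errors/opportunities

      before = error_rate(123, 10000)     # hypothetical counts, illustration only
      after = error_rate(96, 10000)
      rrr = (before - after)/before       # relative risk reduction
      print(f"prescription error rate: {before:.2f}% -> {after:.2f}%, RRR = {rrr:.1%}")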

  7. A Novel Approach to Realize of All Optical Frequency Encoded Dibit Based XOR and XNOR Logic Gates Using Optical Switches with Simulated Verification

    NASA Astrophysics Data System (ADS)

    Ghosh, B.; Hazra, S.; Haldar, N.; Roy, D.; Patra, S. N.; Swarnakar, J.; Sarkar, P. P.; Mukhopadhyay, S.

    2018-03-01

    Over the last few decades, optics has demonstrated strong potential for parallel logic, arithmetic, and algebraic operations owing to its very high speed in communication and computation. Many logical and sequential operations using the all-optical frequency-encoding technique have been proposed by several authors. Here, we adopt the all-optical dibit representation technique, which offers high-speed operation while reducing the bit-error problem. Exploiting this, we propose all-optical frequency-encoded dibit-based XOR and XNOR logic gates using optical switches such as the add/drop multiplexer (ADM) and the reflective semiconductor optical amplifier (RSOA). The operation of these gates has been verified through simulation in MATLAB (R2008a).

  8. Time-dependent phase error correction using digital waveform synthesis

    DOEpatents

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look up table incorporated into a waveform phase generator.
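
    A sketch of the general pre-distortion idea under an assumed droop model (the waveform, sample rate, and phase-error shape below are illustrative, not the patented design):

```python
# Pre-distort a radar waveform with the complement of a known
# time-dependent phase error so the downstream error cancels.
import numpy as np

fs = 1e6                                     # sample rate, Hz (assumed)
t = np.arange(4096) / fs
chirp = np.exp(1j * np.pi * 5e7 * t**2)      # ideal LFM waveform (assumed)

phi_err = 0.3 * (1 - np.exp(-t / 1e-3))      # e.g. amplifier-droop phase, rad
lut = -phi_err                               # complementary correction table

predistorted = chirp * np.exp(1j * lut)      # correction applied up front
received = predistorted * np.exp(1j * phi_err)  # downstream error re-applied
residual = np.angle(received * np.conj(chirp))
print("max residual phase error (rad):", np.abs(residual).max())  # ~0
```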

  9. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889
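
    As a toy illustration of detection-by-syndrome (a classical parity analogy only; it stands in for one Z-type stabilizer of a four-qubit detection code and captures neither phase errors nor the quantum encoding):

```python
# A single parity check over four bits flags any single bit-flip without
# reading out the encoded information itself.
import random

def zzzz_syndrome(bits):
    return sum(bits) % 2            # parity of the four physical bits

codeword = [0, 0, 0, 0]             # even-parity representative
noisy = codeword.copy()
noisy[random.randrange(4)] ^= 1     # inject one bit-flip error
print("syndrome:", zzzz_syndrome(noisy), "-> nonzero, so the run is discarded")
```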

  10. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  11. Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2014-01-01

    In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.

  12. A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Reale, O.; Atlas, R.; Jusem, J. C.

    2004-01-01

    Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e. cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs if associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent occurrence of over-forecasting can undermine the reliance on operational weather forecasting. Therefore, it is important to understand and reduce the predictions of extreme weather associated with explosive cyclones which do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not present in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric geopotential height negative anomaly over the northern part of the Indian subcontinent in the initial conditions propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area and as a consequence a multiple, but much more moderate, cyclogenesis is observed.

  13. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that using these estimated error rates, the probability of error correction failures can be reduced by a factor that increases with the code distance.
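
    A minimal sketch of the estimation step with an off-the-shelf Gaussian process regressor (the windowed syndrome counts, kernel, and drift model below are assumptions, not the authors' protocol):

```python
# Regress a slowly drifting error rate from windowed syndrome counts,
# then extrapolate it one step ahead with uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)[:, None]                 # time of each window
true_rate = 0.01 + 0.005 * np.sin(4 * t.ravel())   # assumed drifting rate
counts = rng.binomial(1000, true_rate)             # detections per 1000 checks

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-6))
gp.fit(t, counts / 1000.0)
rate, sigma = gp.predict([[1.05]], return_std=True)  # predict ahead in time
print(f"predicted error rate: {rate[0]:.4f} +/- {sigma[0]:.4f}")
```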

  14. Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy

    NASA Technical Reports Server (NTRS)

    Edwards, Lawrence G.; Haberbusch, Mark

    1993-01-01

    The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperatures and pressures. Significant liquid level measurement errors were found to occur due to changes in the fluids' dielectric constants that develop over the operating temperature and pressure ranges of the cryogenic storage tanks. The level measurement inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated based on the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric constant corrected liquid levels agreed within 0.5 percent of the temperature profile estimated liquid level. The uncorrected dielectric constant capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 fluids due to temperature and pressure effects on the dielectric constants over tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived. The improved accuracy obtained by use of the correction factors is experimentally verified by comparing liquid levels derived from fluid temperature profiles.
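
    A sketch under a simple linear coaxial-probe model (the capacitance relation and the LN2 dielectric fit below are illustrative assumptions, not the paper's calibration):

```python
# Infer the wetted fraction of the probe from capacitance, correcting the
# liquid dielectric constant for the measured temperature.
def level_fraction(C_meas, C_empty, eps_liq, eps_vap=1.0):
    # assumed model: C_meas = C_empty * (eps_liq*h + eps_vap*(1 - h))
    return (C_meas / C_empty - eps_vap) / (eps_liq - eps_vap)

def eps_LN2(T_kelvin):
    # crude linear fit near saturation (illustrative numbers only)
    return 1.454 - 0.004 * (T_kelvin - 77.0)

C_empty, C_meas = 100.0, 125.0    # pF, hypothetical probe readings
h_fixed = level_fraction(C_meas, C_empty, eps_liq=1.454)         # uncorrected
h_corr = level_fraction(C_meas, C_empty, eps_liq=eps_LN2(80.0))  # T-corrected
print(f"uncorrected {h_fixed:.3f}, temperature-corrected {h_corr:.3f}")
```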

  15. Automated drug dispensing system reduces medication errors in an intensive care setting.

    PubMed

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  16. Utility of NCEP Operational and Emerging Meteorological Models for Driving Air Quality Prediction

    NASA Astrophysics Data System (ADS)

    McQueen, J.; Huang, J.; Huang, H. C.; Shafran, P.; Lee, P.; Pan, L.; Sleinkofer, A. M.; Stajner, I.; Upadhayay, S.; Tallapragada, V.

    2017-12-01

    Operational air quality predictions for the United States (U.S.) are provided at NOAA by the National Air Quality Forecasting Capability (NAQFC). NAQFC provides nationwide operational predictions of ozone and particulate matter twice per day (at the 06 and 12 UTC cycles) at 12 km resolution and 1 hour time intervals through 48 hours, distributed at http://airquality.weather.gov. The NOAA National Centers for Environmental Prediction (NCEP) operational North American Mesoscale (NAM) 12 km weather prediction is used to drive the Community Multiscale Air Quality (CMAQ) model. In 2017, the NAM was upgraded in part to reduce a warm 2 m temperature bias in summer (V4). At the same time, CMAQ was updated to V5.0.2. Both versions of the models were run in parallel for several months; therefore, the impact of improvements from the atmospheric chemistry model could be separated from upgrades to the weather prediction model. Improvements in CMAQ predictions were related to the reduction of the NAM 2 m temperature bias: increasing the opacity of clouds reduced downward shortwave radiation, which in turn reduced ozone photolysis. Higher resolution operational NWP models have recently been introduced as part of the NCEP modeling suite. These include the NAM CONUS Nest (3 km horizontal resolution), run four times per day through 60 hours, and the High Resolution Rapid Refresh (HRRR, 3 km), run hourly out to 18 hours. In addition, NCEP with other NOAA labs has begun to develop and test the Next Generation Global Prediction System (NGGPS) based on the FV3 global model. This presentation also overviews recent developments in operational numerical weather prediction and evaluates the ability of these models to predict low-level temperatures, clouds and boundary layer processes important for driving air quality prediction in complex terrain. The assessed meteorological model errors could help determine the magnitude of possible pollutant errors from CMAQ if used for driving meteorology. The NWP models will be evaluated against standard and mesonet fields averaged for various regions during summer 2017. An evaluation of meteorological fields important to air quality modeling (e.g., near-surface winds, temperatures, moisture, boundary layer heights and cloud cover) will be reported.

  17. Research on the Error Characteristics of a 110 kV Optical Voltage Transformer under Three Conditions: In the Laboratory, Off-Line in the Field and During On-Line Operation.

    PubMed

    Xiao, Xia; Hu, Haoliang; Xu, Yan; Lei, Min; Xiong, Qianzhu

    2016-08-16

    Optical voltage transformers (OVTs) have been applied in power systems. When performing accuracy tests of OVTs, large differences exist between the electromagnetic environment and temperature variation in the laboratory and on-site. Therefore, OVTs may display different error characteristics under different conditions. In this paper, OVT prototypes with typical structures were selected and tested for error characteristics with the same testing equipment and testing method. The basic accuracy, the additional error caused by temperature and by the adjacent phase in the laboratory, the accuracy in the field off-line, and the real-time monitoring error during on-line operation were tested. The error characteristics under the three conditions (laboratory, off-line in the field and during on-line operation) were compared and analyzed. The results showed that the effects of the transportation process, the electromagnetic environment and the adjacent phase on the accuracy of OVTs can be ignored at level 0.2, but the error characteristics of OVTs depend on the environmental temperature and are sensitive to the temperature gradient. The temperature characteristics during on-line operation were significantly superior to those observed in the laboratory.

  18. Historical shoreline mapping (I): improving techniques and reducing positioning errors

    USGS Publications Warehouse

    Thieler, E. Robert; Danforth, William W.

    1994-01-01

    A critical need exists among coastal researchers and policy-makers for a precise method to obtain shoreline positions from historical maps and aerial photographs. A number of methods that vary widely in approach and accuracy have been developed to meet this need. None of the existing methods, however, address the entire range of cartographic and photogrammetric techniques required for accurate coastal mapping. Thus, their application to many typical shoreline mapping problems is limited. In addition, no shoreline mapping technique provides an adequate basis for quantifying the many errors inherent in shoreline mapping using maps and air photos. As a result, current assessments of errors in air photo mapping techniques generally (and falsely) assume that errors in shoreline positions are represented by the sum of a series of worst-case assumptions about digitizer operator resolution and ground control accuracy. These assessments also ignore altogether other errors that commonly approach ground distances of 10 m. This paper provides a conceptual and analytical framework for improved methods of extracting geographic data from maps and aerial photographs. We also present a new approach to shoreline mapping using air photos that revises and extends a number of photogrammetric techniques. These techniques include (1) developing spatially and temporally overlapping control networks for large groups of photos; (2) digitizing air photos for use in shoreline mapping; (3) preprocessing digitized photos to remove lens distortion and film deformation effects; (4) simultaneous aerotriangulation of large groups of spatially and temporally overlapping photos; and (5) using a single-ray intersection technique to determine geographic shoreline coordinates and express the horizontal and vertical error associated with a given digitized shoreline. As long as historical maps and air photos are used in studies of shoreline change, there will be a considerable amount of error (on the order of several meters) present in shoreline position and rate-of-change calculations. The techniques presented in this paper, however, provide a means to reduce and quantify these errors so that realistic assessments of the technological noise (as opposed to geological noise) in geographic shoreline positions can be made.

  19. Applying Lean Sigma solutions to mistake-proof the chemotherapy preparation process.

    PubMed

    Aboumatar, Hanan J; Winner, Laura; Davis, Richard; Peterson, Aisha; Hill, Richard; Frank, Susan; Almuete, Virna; Leung, T Vivian; Trovitch, Peter; Farmer, Denise

    2010-02-01

    Errors related to high-alert medications, such as chemotherapeutic agents, have resulted in serious adverse events. A fast-paced application of Lean Sigma methodology was used to safeguard the chemotherapy preparation process against errors and increase compliance with United States Pharmacopeia 797 (USP 797) regulations. On Days 1 and 2 of a Lean Sigma workshop, frontline staff studied the chemotherapy preparation process. During Days 2 and 3, interventions were developed and implementation was started. The workshop participants were satisfied with the speed at which improvements were put to place using the structured workshop format. The multiple opportunities for error identified related to the chemotherapy preparation process, workspace layout, distractions, increased movement around ventilated hood areas, and variation in medication processing and labeling procedures. Mistake-proofing interventions were then introduced via workspace redesign, process redesign, and development of standard operating procedures for pharmacy staff. Interventions were easy to implement and sustainable. Reported medication errors reaching patients and requiring monitoring decreased, whereas the number of reported near misses increased, suggesting improvement in identifying errors before reaching the patients. Application of Lean Sigma solutions enabled the development of a series of relatively inexpensive and easy to implement mistake-proofing interventions that reduce the likelihood of chemotherapy preparation errors and increase compliance with USP 797 regulations. The findings and interventions are generalizable and can inform mistake-proofing interventions in all types of pharmacies.

  20. Determining relative error bounds for the CVBEM

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.

  1. Reduction of Orifice-Induced Pressure Errors

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.

    1987-01-01

    Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.

  2. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  3. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    PubMed

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reducing performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  4. Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations

    NASA Astrophysics Data System (ADS)

    Linders, Viktor; Kupiainen, Marco; Nordström, Jan

    2017-07-01

    We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.

  5. Editing disulphide bonds: error correction using redox currencies.

    PubMed

    Ito, Koreaki

    2010-01-01

    The disulphide bond-introducing enzyme of bacteria, DsbA, sometimes oxidizes non-native cysteine pairs. DsbC should rearrange the resulting incorrect disulphide bonds into those with correct connectivity. DsbA and DsbC receive oxidizing and reducing equivalents, respectively, from respective redox components (quinones and NADPH) of the cell. Two mechanisms of disulphide bond rearrangement have been proposed. In the redox-neutral 'shuffling' mechanism, the nucleophilic cysteine in the DsbC active site forms a mixed disulphide with a substrate and induces disulphide shuffling within the substrate part of the enzyme-substrate complex, followed by resolution into a reduced enzyme and a disulphide-rearranged substrate. In the 'reduction-oxidation' mechanism, DsbC reduces those substrates with wrong disulphides so that DsbA can oxidize them again. In this issue of Molecular Microbiology, Berkmen and his collaborators show that a disulphide reductase, TrxP, from an anaerobic bacterium can substitute for DsbC in Escherichia coli. They propose that the reduction-oxidation mechanism of disulphide rearrangement can indeed operate in vivo. An implication of this work is that correcting errors in disulphide bonds can be coupled to cellular metabolism and is conceptually similar to the proofreading processes observed with numerous synthesis and maturation reactions of biological macromolecules.

  6. Analysis of the U.S. geological survey streamgaging network

    USGS Publications Warehouse

    Scott, A.G.

    1987-01-01

    This paper summarizes the results from the first 3 years of a 5-year cost-effectiveness study of the U.S. Geological Survey streamgaging network. The objective of the study is to define and document the most cost-effective means of furnishing streamflow information. In the first step of this study, data uses were identified for 3,493 continuous-record stations currently being operated in 32 States. In the second step, an evaluation of alternative methods of providing streamflow information was performed: flow-routing models and regression models were developed for estimating daily flows at 251 of the 3,493 stations analyzed. In the third step of the analysis, relationships were developed between the accuracy of the streamflow records and the operating budget. The weighted standard error for all stations, with current operating procedures, was 19.9 percent. By altering field activities, as determined by the analyses, this could be reduced to 17.8 percent. The existing streamgaging networks in four Districts were further analyzed to determine the impacts that satellite telemetry would have on cost effectiveness. Satellite telemetry was not found to be cost effective on the basis of hydrologic data collection alone, given present costs of equipment and operation.

  7. The use of precise ephemerides, ionospheric data, and corrected antenna coordinates in a long-distance GPS time transfer

    NASA Technical Reports Server (NTRS)

    Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine; Weiss, Marc A.

    1990-01-01

    Over intercontinental distances, the accuracy of Global Positioning System (GPS) time transfers ranges from 10 to 20 ns. The principal error sources are the broadcast ionospheric model, the broadcast ephemerides and the local antenna coordinates. For the first time, the three major error sources for GPS time transfer can be reduced simultaneously for a particular time link. Ionospheric measurement systems of the National Institute of Standards and Technology (NIST) type are now operating on a regular basis at NIST in Boulder and at the Paris Observatory. Broadcast ephemerides are currently recorded for time-transfer tracks between these sites, as is necessary for using precise ephemerides. Finally, corrected local GPS antenna coordinates are now introduced in the GPS receivers at both sites. Shown here is the improvement in precision for this long-distance time comparison resulting from the reduction of these three error sources.

  8. Passing the Baton: An Experimental Study of Shift Handover

    NASA Technical Reports Server (NTRS)

    Parke, Bonny; Hobbs, Alan; Kanki, Barbara

    2010-01-01

    Shift handovers occur in many safety-critical environments, including aviation maintenance, medicine, air traffic control, and mission control for space shuttle and space station operations. Shift handovers are associated with increased risk of communication failures and human error. In dynamic industries, errors and accidents occur disproportionately after shift handover. Typical shift handovers involve transferring information from an outgoing shift to an incoming shift via written logs or, in some cases, face-to-face briefings. The current study explores the possibility of improving written communication with the support modalities of audio and video recordings, as well as face-to-face briefings. Fifty participants completed an experimental task which mimicked some of the critical challenges involved in transferring information between shifts in industrial settings. All three support modalities (face-to-face, video, and audio recordings) reduced task errors significantly over written communication alone. The support modality most preferred by participants was face-to-face communication; the least preferred was written communication alone.

  9. Experiments with explicit filtering for LES using a finite-difference method

    NASA Technical Reports Server (NTRS)

    Lund, T. S.; Kaltenbach, H. J.

    1995-01-01

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is also 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and they will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from a LES perspective. Very little is known about this tradeoff, and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
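
    The truncation-error argument can be made concrete with the modified wavenumber of a second-order central difference, the quantity an explicit filter is meant to keep away from the dynamically important scales (a standalone illustration, not the channel-flow code):

```python
# The second-order central difference acts on mode exp(ikx) with the
# modified wavenumber sin(k*h)/h; the gap from the exact k grows rapidly
# toward the mesh Nyquist limit.
import numpy as np

h = 1.0
k = np.linspace(0.01, np.pi, 100)     # resolvable wavenumbers on the mesh
k_mod = np.sin(k * h) / h             # modified wavenumber of the FD operator
rel_err = np.abs(k_mod - k) / k

for frac in (0.25, 0.5, 1.0):         # fraction of the Nyquist wavenumber
    i = int(frac * (len(k) - 1))
    print(f"k/k_nyq = {frac:.2f}: relative error = {rel_err[i]:.1%}")
```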

  10. Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into a power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. In addition, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The predictions are tested comparatively against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate, compared with the SSP approach and ARMA. MLR&LS is accurate and promising for solving problems in WPP.
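
    A rough sketch of the two-step ratio-then-power idea with a lagged least-squares regression (the synthetic series, lag count, and rated capacity are assumptions; the paper's exact MLR&LS formulation may differ):

```python
# Step 1: convert power to a power ratio and fit a lagged linear model;
# step 2: map the predicted ratio back to wind power; score with MAPE.
import numpy as np

rng = np.random.default_rng(1)
capacity = 100.0                      # MW, assumed rated power
power = 40 + 10 * np.sin(np.arange(300) / 10.0) + rng.normal(0, 2, 300)
ratio = power / capacity              # step 1: power ratio series

lags = 3
X = np.column_stack([ratio[i:len(ratio) - lags + i] for i in range(lags)])
y = ratio[lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit

power_hat = (X @ coef) * capacity     # step 2: back to wind power
mape = 100 * np.mean(np.abs((power[lags:] - power_hat) / power[lags:]))
print(f"in-sample MAPE: {mape:.2f}%")
```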

  11. Error Characterization and Mitigation for 16Nm MLC NAND Flash Memory Under Total Ionizing Dose Effect

    NASA Technical Reports Server (NTRS)

    Li, Yue (Inventor); Bruck, Jehoshua (Inventor)

    2018-01-01

    A data device includes a memory having a plurality of memory cells configured to store data values in accordance with an optional predetermined rank modulation scheme, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count.

  12. Errors as a Means of Reducing Impulsive Food Choice.

    PubMed

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-06-05

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.
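
    For context, impulsive intertemporal choice in such protocols is commonly summarized by a hyperbolic discount rate k; a sketch with invented indifference points (the study's actual model and data may differ):

```python
# Fit k in V = A / (1 + k*D) to indifference points; a smaller k after the
# error association means the delayed food reward is discounted less steeply.
import numpy as np

delays = np.array([1, 7, 30, 90])                 # days
indiff_high_err = np.array([9.0, 7.5, 5.0, 3.0])  # food cueing many errors
indiff_low_err = np.array([8.0, 5.5, 3.0, 1.5])   # food cueing few errors

def fit_k(indiff, amount=10.0):
    ks = np.linspace(0.001, 1.0, 2000)            # grid search over k
    sse = [np.sum((amount / (1 + k * delays) - indiff) ** 2) for k in ks]
    return ks[int(np.argmin(sse))]

print("k (high-error food):", fit_k(indiff_high_err))  # smaller k: more patient
print("k (low-error food): ", fit_k(indiff_low_err))
```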

  13. Errors as a Means of Reducing Impulsive Food Choice

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-01-01

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities. PMID:27341281

  14. High-speed clock recovery unit based on a phase aligner

    NASA Astrophysics Data System (ADS)

    Tejera, Efrain; Esper-Chain, Roberto; Tobajas, Felix; De Armas, Valentin; Sarmiento, Roberto

    2003-04-01

    Nowadays clock recovery units are key elements in high speed digital communication systems. For efficient operation, these units should generate a low-jitter clock based on the NRZ received data and be tolerant to long absences of transitions. Architectures based on Hogge phase detectors have been widely used; nevertheless, they are very sensitive to jitter of the received data and have a limited tolerance to the absence of transitions. This paper shows a novel high speed clock recovery unit based on a phase aligner. The system allows very fast clock recovery with low jitter; moreover, it is very resistant to absences of transitions. The design is based on eight phases obtained from a reference clock running at the nominal frequency of the received signal. This high speed reference clock is generated using a crystal and a clock multiplier unit. The phase alignment system chooses, as a starting point, the two phases closest to the data phase. This allows a maximum error of 45 degrees between the clock and data signal phases. Furthermore, the system includes a feedback loop that interpolates the chosen phases to reduce the phase error to zero. Due to the high stability and tight tolerance of the local reference clock, the jitter obtained is highly reduced and the system remains able to operate under long absences of transitions. This performance makes the design suitable for systems such as high speed serial link technologies. The system has been designed in 0.25 μm CMOS at 1.25 GHz and has been verified through HSpice simulations.
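
    A sketch of the bracket-and-interpolate step in isolation (the 45° granularity follows from the eight phases described above; the numbers and the open-loop form are assumptions, since the real design closes a feedback loop):

```python
# Pick the two reference phases bracketing the measured data phase, then
# interpolate between them to cancel the residual phase error.
import numpy as np

phases = np.arange(8) * 45.0        # eight clock phases, degrees
data_phase = 112.0                  # measured phase of the incoming data

idx = int(data_phase // 45.0) % 8   # index of the lower bracketing phase
lo = phases[idx]
hi = lo + 45.0                      # upper bracketing phase (wrap ignored)
alpha = (data_phase - lo) / 45.0    # weight a feedback loop would adapt
recovered = lo + alpha * (hi - lo)
print(f"bracket [{lo}, {hi}] deg -> interpolated clock phase {recovered} deg")
```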

  15. Minimizing the Disruptive Effects of Prospective Memory in Simulated Air Traffic Control

    PubMed Central

    Loft, Shayne; Smith, Rebekah E.; Remington, Roger

    2015-01-01

    Prospective memory refers to remembering to perform an intended action in the future. Failures of prospective memory can occur in air traffic control. In two experiments, we examined the utility of external aids for facilitating air traffic management in a simulated air traffic control task with prospective memory requirements. Participants accepted and handed-off aircraft and detected aircraft conflicts. The prospective memory task involved remembering to deviate from a routine operating procedure when accepting target aircraft. External aids that contained details of the prospective memory task appeared and flashed when target aircraft needed acceptance. In Experiment 1, external aids presented either adjacent or non-adjacent to each of the 20 target aircraft presented over the 40 min test phase reduced prospective memory error by 11% compared to a condition without external aids. In Experiment 2, only a single target aircraft was presented a significant time (39–42 min) after presentation of the prospective memory instruction, and the external aids reduced prospective memory error by 34%. In both experiments, costs to the efficiency of non-prospective memory air traffic management (non-target aircraft acceptance response time, conflict detection response time) were reduced by non-adjacent aids compared to no aids or adjacent aids. In contrast, in both experiments, the efficiency of prospective memory air traffic management (target aircraft acceptance response time) was facilitated by adjacent aids compared to non-adjacent aids. Together, these findings have potential implications for the design of automated alerting systems to maximize multi-task performance in work settings where operators monitor and control demanding perceptual displays. PMID:24059825

  16. Bit Error Ratio Test Equipment for High Speed Vertical Cavity Transistor Laser and MicroCavity VCSEL and Photo Receiver

    DTIC Science & Technology

    2015-08-31

    Report title: Bit Error Ratio Test Equipment for High Speed Vertical Cavity Transistor Laser & MicroCavity VCSEL and Photo Receiver. In the previous DURIP award (W911NF-13-1-0287

  17. Validation of published Stirling engine design methods using engine characteristics from the literature

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1980-01-01

    Four fully disclosed reference engines and five design methods are discussed. So far, the agreement between theory and experiment is about as good for the simpler calculation methods as it is for the more complicated methods, that is, within 20%. For the simpler methods, a single adjustable constant can be used to reduce the error in predicting power output and efficiency over the entire operating map to less than 10%.

  18. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
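
    A simplified sketch using plain orthogonal matching pursuit as a stand-in for the paper's quantized MMP (the dictionary size, pilot length, and sparsity below are invented):

```python
# Greedy sparse recovery of a channel h from a short pilot observation
# y = A @ h + noise: pick the best-matching atom, refit, repeat.
import numpy as np

rng = np.random.default_rng(2)
n_pilots, n_atoms, sparsity = 40, 200, 4
A = rng.normal(size=(n_pilots, n_atoms)) / np.sqrt(n_pilots)
h = np.zeros(n_atoms)
h[rng.choice(n_atoms, sparsity, replace=False)] = rng.normal(size=sparsity)
y = A @ h + 0.01 * rng.normal(size=n_pilots)

support, residual = [], y.copy()
for _ in range(sparsity):
    support.append(int(np.argmax(np.abs(A.T @ residual))))   # best atom
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
    residual = y - A[:, support] @ sol

h_hat = np.zeros(n_atoms)
h_hat[support] = sol
print("normalized MSE:", np.sum((h - h_hat) ** 2) / np.sum(h ** 2))
```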

  19. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  20. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  1. Risk management: correct patient and specimen identification in a surgical pathology laboratory. The experience of Infermi Hospital, Rimini, Italy.

    PubMed

    Fabbretti, G

    2010-06-01

    Because of its complex nature, surgical pathology practice is prone to error. In this report, we describe our methods for reducing error as much as possible during the pre-analytical and analytical phases. This was achieved by revising procedures, and by using computer technology and automation. Most mistakes are the result of human error in the identification and matching of patients and samples. To avoid faulty data interpretation, we employed a new comprehensive computer system that acquires all patient ID information directly from the hospital's database with remote order entry; it also provides label and request forms via the Web, where clinical information is required before sending the sample. Both patient and sample are identified directly and immediately at the site where the surgical procedures are performed. Barcode technology is used to input information at every step, and automation is used for sample blocks and slides to avoid errors that occur when information is recorded or transferred by hand. Quality control checks occur at every step of the process to ensure that none of the steps are left to chance and that no phase is dependent on a single operator. The system also provides statistical analysis of errors so that new strategies can be implemented to avoid repetition. In addition, the staff receives frequent training on avoiding errors and on new developments. The results have been promising, with a very low error rate (0.27%). None of the errors compromised patient health, and all were detected before release of the diagnostic report.

  2. Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine

    NASA Astrophysics Data System (ADS)

    Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo

    2017-07-01

    Load sharing behavior is very important for power-split gearing systems, and the star gearing reducer, a new and special type of transmission system, can be used in many industrial fields. However, there is little literature on the key multiple-split load-sharing issue in the main gearbox of the new geared turbofan engine. A further mechanism analysis is made of the load sharing behavior among the star gears of the star gearing reducer for a geared turbofan engine. Comprehensive meshing-error analyses are conducted for the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer, respectively. The floating meshing error resulting from meshing clearance variation caused by the simultaneous floating of the sun gear and annular gear is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of the different meshing stiffness and supporting stiffness of the components. The regular curves of the load sharing coefficient under the influence of interactions, single actions and single variations of the various component errors are obtained, and the sensitivity of the load sharing coefficient to the different errors is determined. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides scientific and theoretical evidence for optimal parameter design and proper tolerance distribution in the advanced development and manufacturing process, so as to achieve optimal effects in economy and technology.
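
    A sketch of the load-sharing-coefficient arithmetic under a common definition (assumed here: maximum branch load over mean branch load; the forces are hypothetical, chosen near the paper's 3010 N maximum):

```python
# Perfect sharing gives 1.0; the excess of the worst-loaded star-gear
# branch over the average is the load sharing coefficient.
branch_forces = [3010.0, 2940.0, 2895.0]   # N, per star-gear mesh (hypothetical)
mean_force = sum(branch_forces) / len(branch_forces)
lsc = max(branch_forces) / mean_force
print(f"load sharing coefficient: {lsc:.3f}")  # ~1.02 here; paper reports 1.033
```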

  3. [Improvement of team competence in the operating room : Training programs from aviation].

    PubMed

    Schmidt, C E; Hardt, F; Möller, J; Malchow, B; Schmidt, K; Bauer, M

    2010-08-01

    Growing attention has been drawn to patient safety during recent months due to media reports of clinical errors. To date, only clinical incident reporting systems have been implemented in acute care hospitals as instruments of risk management. However, these systems have only a limited impact on the human factors which account for the majority of all errors in medicine. Crew resource management (CRM) starts here. For the commissioning of a new hospital in Minden, training programs were installed in order to maintain patient safety in a new, complex environment. The training was planned in three parts: all relevant processes were defined as standard operating procedures (SOPs), visualized and then simulated in the new building. In addition, staff members (trainers) in leading positions were trained in CRM in order to train the complete staff. The training programs were analyzed by questionnaires. Selection of topics, relevance for practice and mode of presentation were rated as very good by 73% of the participants. The staff members ranked the topics of communication in crisis situations, individual errors and compensating measures as most important, followed by case studies and teamwork. Employees improved in compliance with the SOPs, team competence and communication. In high-technology environments with escalating workloads and interdisciplinary organization, staff members are confronted with increasing demands on knowledge and skills. To reduce errors under such working conditions, relevant processes should be standardized and trained for emergency situations. Human performance can be supported by well-trained interpersonal skills, which are developed in CRM training. In combination, these training programs make a significant contribution to maintaining patient safety.

  4. Recovery of Bennu's orientation for the OSIRIS-REx mission: implications for the spin state accuracy and geolocation errors

    NASA Astrophysics Data System (ADS)

    Mazarico, Erwan; Rowlands, David D.; Sabaka, Terence J.; Getzandanner, Kenneth M.; Rubincam, David P.; Nicholas, Joseph B.; Moreau, Michael C.

    2017-10-01

    The goal of the OSIRIS-REx mission is to return a sample of asteroid material from near-Earth asteroid (101955) Bennu. The role of the navigation and flight dynamics team is critical for the spacecraft to execute a precisely planned sampling maneuver over a specifically selected landing site. In particular, the orientation of Bennu needs to be recovered with good accuracy during orbital operations to contribute as small an error as possible to the landing error budget. Although Bennu is well characterized from Earth-based radar observations, its orientation dynamics are not sufficiently known to exclude the presence of a small wobble. To better understand this contingency and evaluate how well the orientation can be recovered in the presence of a large 1° wobble, we conduct a comprehensive simulation with the NASA GSFC GEODYN orbit determination and geodetic parameter estimation software. We describe the dynamic orientation modeling implemented in GEODYN in support of OSIRIS-REx operations and show how both altimetry and imagery data can be used as either undifferenced (landmark, direct altimetry) or differenced (image crossover, altimetry crossover) measurements. We find that these two different types of data contribute differently to the recovery of instrument pointing or planetary orientation. When upweighted, the absolute measurements help reduce the geolocation errors, despite poorer astrometric (inertial) performance. We find that with no wobble present, all the geolocation requirements are met. While the presence of a large wobble is detrimental, the recovery is still reliable thanks to the combined use of altimetry and imagery data.

  5. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
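
    To make the construction concrete: each binomial code word is a superposition of every (S+1)-th Fock state, weighted by square roots of binomial coefficients. A minimal sketch of the code words under the paper's construction; the parameter choice N = S = 1 below gives the smallest code protecting against a single boson loss, and the Fock cutoff is an implementation detail:

    ```python
    import numpy as np
    from math import comb

    def binomial_codewords(N, S):
        """Fock-space vectors |W_up>, |W_down>: amplitudes sqrt(C(N+1, p)/2^N)
        on Fock levels p*(S+1), with even p in |W_up> and odd p in |W_down>."""
        spacing = S + 1
        dim = (N + 1) * spacing + 1          # smallest Fock cutoff needed
        w_up, w_down = np.zeros(dim), np.zeros(dim)
        for p in range(N + 2):
            amp = np.sqrt(comb(N + 1, p) / 2.0**N)
            (w_up if p % 2 == 0 else w_down)[p * spacing] = amp
        return w_up, w_down

    # L=1 loss, G=0 gain, D=0 dephasing => S = L + G = 1, N = max(L, G, 2D) = 1:
    # |W_up> = (|0> + |4>)/sqrt(2), |W_down> = |2>.
    up, down = binomial_codewords(N=1, S=1)
    n = np.arange(len(up))
    print(np.isclose(up @ up, 1.0), np.isclose(down @ down, 1.0))  # normalized
    print(up @ down == 0.0)                                        # orthogonal
    print((up**2) @ n, (down**2) @ n)    # equal mean boson number (2.0, 2.0)
    ```

    The disjoint Fock-grid supports are what let a generalized number-parity measurement detect a boson loss without distinguishing the two code words, and the equal mean boson numbers are part of the exact error-correction condition.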

  6. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    PubMed

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported, but cutting error in UKA has not been investigated. The purpose of this study was to quantify the cutting error in UKA when an open cutting guide is used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces this error. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces, and the second-cut cutting surfaces using a navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting error in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Assimilation of a knowledge base and physical models to reduce errors in passive-microwave classifications of sea ice

    NASA Technical Reports Server (NTRS)

    Maslanik, J. A.; Key, J.

    1992-01-01

    An expert system framework has been developed to classify sea ice types using satellite passive microwave data, an operational classification algorithm, spatial and temporal information, ice types estimated from a dynamic-thermodynamic model, output from a neural network that detects the onset of melt, and knowledge about season and region. The rule base imposes boundary conditions upon the ice classification, modifies parameters in the ice algorithm, determines a "confidence" measure for the classified data and, under certain conditions, replaces the algorithm output with model output. Results demonstrate the potential power of such a system for minimizing overall error in the classification and for providing non-expert data users with a means of assessing the usefulness of the classification results for their applications.
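
    The flavor of such a rule base can be sketched in a few lines; the rules, class labels, thresholds, and confidence values below are illustrative stand-ins, not those of the actual system:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Pixel:
        algo_class: str    # ice type from the operational algorithm
        model_class: str   # ice type from the dynamic-thermodynamic model
        melt_onset: bool   # neural-network melt-onset flag
        season: str        # e.g. "winter", "summer"

    def classify(p: Pixel) -> tuple[str, float]:
        """Apply the rule base; return (ice class, confidence)."""
        # Rule 1: during melt the passive-microwave signature is unreliable;
        # replace the algorithm output with model output at low confidence.
        if p.melt_onset or p.season == "summer":
            return p.model_class, 0.4
        # Rule 2: algorithm/model disagreement in winter keeps the algorithm
        # output but lowers the confidence attached to it.
        if p.algo_class != p.model_class:
            return p.algo_class, 0.6
        return p.algo_class, 1.0

    print(classify(Pixel("first-year", "multi-year", False, "winter")))
    ```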

  8. [Risk management project: reactive or proactive approach?].

    PubMed

    Vastola, Pasquale; Saracino, Donato M T

    2006-01-01

    Risk management in healthcare refers to the process of developing strategies aimed at preventing and controlling the risk of occurrence of errors and harmful events. The final objective is primarily that of increasing patient safety and secondarily, that of reducing the financial burden of adverse events. The implementation of a risk management system is therefore of vital strategic importance. Nevertheless, a fundamental question that needs to be answered in the operational phase is: should a proactive or reactive approach to risk management be taken? In our view, proactive risk management has many advantages over a reactive approach and is therefore preferable. The reactive approach should be taken exclusively to obtain information regarding risk and errors, in the preliminary, as well as monitoring and follow-up phases of the project.

  9. [Ultrasonic scissors. New vs resterilized instruments].

    PubMed

    Gärtner, D; Münz, K; Hückelheim, E; Hesse, U

    2008-02-01

    The aim of this study was to compare reliability in handling and function of resterilized and single-use disposable ultrasonic scissors. In a prospective randomized study, the surgeon blindly tested new and resterilized ultrasonic scissors. The parameters were force of activation, cutting effect, coagulation effect, error messages, and disturbing generator noise. Fifty-one new and 49 resterilized instruments were evaluated in 94 operations. The differences in force of activation, cutting effect, and coagulation were not significant. Error messages and disturbing noises were rare in both groups. Six new instruments and two resterilized instruments had to be exchanged because of problems during surgery. This study demonstrates comparable reliability in function and handling of resterilized and new ultrasonic scissors. The use of resterilized instruments leads to distinctly reduced costs and could contribute to efficiency in laparoscopic surgery.

  10. Lessons from Crew Resource Management for Cardiac Surgeons.

    PubMed

    Marvil, Patrick; Tribble, Curt

    2017-04-30

    Crew resource management (CRM) describes a system developed in the late 1970s in response to a series of deadly commercial aviation crashes. This system has been universally adopted in commercial and military aviation and is now an integral part of aviation culture. CRM is an error mitigation strategy developed to reduce human error in situations in which teams operate in complex, high-stakes environments. Over time, the principles of this system have been applied and utilized in other environments, particularly in medical areas dealing with high-stakes outcomes requiring optimal teamwork and communication. While formal studies of the effectiveness of CRM training in medical environments have reported mixed results, it seems clear that some of these principles should have value in the practice of cardiovascular surgery.

  11. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching, and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle: once pretraining, once post-training, and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types, and total error rates, together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining: 25%, 19%, one missing data point; post-training: 23%, 6%, 11%; final audit: 7%, 3%, 5%; p<0.0005). The total number of prescriptions and error rates varied widely between trainees (data collection one, cycle two: prescriptions written ranged from 1 to 61, median 18; error rates ranged from 0 to 100%, median 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  12. Error or "act of God"? A study of patients' and operating room team members' perceptions of error definition, reporting, and disclosure.

    PubMed

    Espin, Sherry; Levinson, Wendy; Regehr, Glenn; Baker, G Ross; Lingard, Lorelei

    2006-01-01

    Calls abound for a culture change in health care to improve patient safety. However, effective change cannot proceed without a clear understanding of perceptions and beliefs about error. In this study, we describe and compare operative team members' and patients' perceptions of error, reporting of error, and disclosure of error. Thirty-nine interviews of team members (9 surgeons, 9 nurses, 10 anesthesiologists) and patients (11) were conducted at 2 teaching hospitals using 4 scenarios as prompts. Transcribed responses to open questions were analyzed by 2 researchers for recurrent themes using the grounded-theory method. Yes/no answers were compared across groups using chi-square analyses. Team members and patients agreed on what constitutes an error. Deviation from standards and negative outcome were emphasized as definitive features. Patients and nurse professionals differed significantly in their perception of whether errors should be reported. Nurses were willing to report only events within their disciplinary scope of practice. Although most patients strongly advocated full disclosure of errors (what happened and how), team members preferred to disclose only what happened. When patients did support partial disclosure, their rationales varied from that of team members. Both operative teams and patients define error in terms of breaking the rules and the concept of "no harm no foul." These concepts pose challenges for treating errors as system failures. A strong culture of individualism pervades nurses' perception of error reporting, suggesting that interventions are needed to foster collective responsibility and a constructive approach to error identification.

  13. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
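
    The underdetermined problem can be illustrated numerically. The toy sketch below is not the paper's derivation: it builds a linear model y = H h + v with more health parameters than sensors, forms the maximum a posteriori estimate of a reduced-order tuner q whose dimension equals the number of sensors, and compares two candidate tuner subspaces by Monte Carlo mean squared error (the SVD-based choice is one plausible systematic selection, not the paper's optimal one):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_health, n_sens = 8, 4                      # underdetermined: 8 unknowns, 4 sensors
    H = rng.standard_normal((n_sens, n_health))  # linearized sensor sensitivities
    P = np.diag(rng.uniform(0.5, 2.0, n_health)) # prior covariance of health params
    R = 0.01 * np.eye(n_sens)                    # sensor noise covariance

    def health_mse(V, trials=5000):
        """Monte Carlo MSE of h_hat = V.T @ q_hat, with the reduced-order
        tuner q (dim = n_sens) MAP-estimated from y = H @ h + v."""
        Hq = H @ V.T                   # measurement model seen by the tuner
        Pq = V @ P @ V.T               # prior covariance induced on the tuner
        K = Pq @ Hq.T @ np.linalg.inv(Hq @ Pq @ Hq.T + R)   # MAP gain
        h = rng.multivariate_normal(np.zeros(n_health), P, size=trials)
        v = rng.multivariate_normal(np.zeros(n_sens), R, size=trials)
        h_hat = (h @ H.T + v) @ K.T @ V
        return float(np.mean(np.sum((h - h_hat) ** 2, axis=1)))

    subset_V = np.eye(n_health)[:n_sens]   # conventional: tune a subset of params
    svd_V = np.linalg.svd(H)[2][:n_sens]   # tuner spanning H's row space
    print("subset-of-parameters tuner MSE:", health_mse(subset_V))
    print("SVD-based reduced-order tuner MSE:", health_mse(svd_V))
    ```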

  14. Expected accuracy of proximal and distal temperature estimated by wireless sensors, in relation to their number and position on the skin.

    PubMed

    Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni; Montagnese, Sara

    2017-01-01

    A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists in calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, with the greater impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints.
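
    The estimator itself is a weighted average that must be re-normalized when sensors drop out. A minimal sketch with uniform placeholder weights and invented readings (the paper's fitted weights differ by sensor subset):

    ```python
    import numpy as np

    PROX_SENSORS = ["infraclav_L", "infraclav_R", "thigh_L", "thigh_R", "abdomen"]

    def t_prox(readings, weights):
        """Weighted average over whichever proximal sensors are present."""
        present = [s for s in PROX_SENSORS if s in readings]
        w = np.array([weights[s] for s in present])
        x = np.array([readings[s] for s in present])
        return float(np.dot(w / w.sum(), x))   # re-normalize over available sensors

    weights = {s: 1 / len(PROX_SENSORS) for s in PROX_SENSORS}   # placeholder
    full = {"infraclav_L": 34.9, "infraclav_R": 35.1, "thigh_L": 34.3,
            "thigh_R": 34.4, "abdomen": 35.6}
    reduced = {k: v for k, v in full.items() if k != "abdomen"}  # sensor lost

    ref, est = t_prox(full, weights), t_prox(reduced, weights)
    print(f"reference {ref:.2f} C, reduced subset {est:.2f} C, "
          f"absolute error {abs(ref - est):.2f} C")
    ```

    Consistent with the abstract, dropping the abdominal sensor (the warmest site in this invented example) shifts the estimate most; fitted, subset-specific weights are what keep that error near the reported 0.25°C.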

  15. Expected accuracy of proximal and distal temperature estimated by wireless sensors, in relation to their number and position on the skin

    PubMed Central

    Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R.; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni

    2017-01-01

    A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists in calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, with the greater impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints. PMID:28666029

  16. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  17. Reducing diagnostic errors in medicine: what's the goal?

    PubMed

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  18. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure-, system performance-, and human operator-centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks that are relevant to aviation operations.

  19. The problem of automation: Inappropriate feedback and interaction, not overautomation

    NASA Technical Reports Server (NTRS)

    Norman, Donald A.

    1989-01-01

    As automation increasingly takes its place in industry, especially high-risk industry, it is often blamed for causing harm and increasing the chance of human error when failures occur. It is proposed that the problem is not the presence of automation but rather its inappropriate design. Operations are performed appropriately under normal conditions, but there is inadequate feedback and interaction with the humans who must control the overall conduct of the task. When situations exceed the capabilities of the automatic equipment, the inadequate feedback leads to difficulties for the human controllers. The problem is that the automation is at an intermediate level of intelligence: powerful enough to take over tasks that used to be done by people, but not powerful enough to handle all abnormalities. Moreover, its level of intelligence is insufficient to provide the continual, appropriate feedback that occurs naturally among human operators. To solve this problem, the automation should be made either less intelligent or more so; the current level is quite inappropriate. The overall message is that it is possible to reduce error through appropriate design considerations.

  20. Gestonurse: a robotic surgical nurse for handling surgical instruments in the operating room.

    PubMed

    Jacob, Mithun; Li, Yu-Ting; Akingba, George; Wachs, Juan P

    2012-03-01

    While surgeon-scrub nurse collaboration provides a fast, straightforward and inexpensive method of delivering surgical instruments to the surgeon, it often results in "mistakes" (e.g. missing information, ambiguity of instructions and delays). It has been shown that these errors can have a negative impact on the outcome of the surgery. These errors could potentially be reduced or eliminated by introducing robotics into the operating room. Gesture control is a natural and fundamentally sound alternative that allows interaction without disturbing the normal flow of surgery. This paper describes the development of a robotic scrub nurse Gestonurse to support surgeons by passing surgical instruments during surgery as required. The robot responds to recognized hand signals detected through sophisticated computer vision and pattern recognition techniques. Experimental results show that 95% of the gestures were recognized correctly. The gesture recognition algorithm presented is robust to changes in scale and rotation of the hand gestures. The system was compared to human task performance and was found to be only 0.83 s slower on average.

  1. GPS aiding of ocean current determination. [Global Positioning System

    NASA Technical Reports Server (NTRS)

    Mohan, S. N.

    1981-01-01

    The navigational accuracy of an oceangoing vessel using conventional GPS P-code data is examined. The GPS signal is transmitted over two carrier frequencies in the L-band at 1575.42 and 1227.6 MHz. Achievable navigational uncertainties of differenced positional estimates are presented as a function of the parameters of the problem; in the unaided mode, particular attention is given to the effects of sea state, user equivalent range error, uncompensated antenna motion, varying delay intervals, and reduced data rate. The unmodeled errors resulting from satellite ephemeris uncertainties are shown to be negligible for the GPS-NDS (Navigation Development) satellites. Requirements are met in relatively calm seas, but accuracy degradation by a factor of at least 2 must be anticipated in heavier sea states. The aided mode of operation is also examined, and it is shown that requirements can be met by using an inertial measurement unit (IMU) to aid the GPS receiver. Since the use of an IMU would mean higher costs, direct Doppler from the GPS satellites is presented as a viable alternative.

  2. Software Assists in Responding to Anomalous Conditions

    NASA Technical Reports Server (NTRS)

    James, Mark; Kronbert, F.; Weiner, A.; Morgan, T.; Stroozas, B.; Girouard, F.; Hopkins, A.; Wong, L.; Kneubuhl, J.; Malina, R.

    2004-01-01

    Fault Induced Document Retrieval Officer (FIDO) is a computer program that reduces the need for a large and costly team of engineers and/or technicians to monitor the state of a spacecraft and associated ground systems and respond to anomalies. FIDO includes artificial-intelligence components that imitate the reasoning of human experts with reference to a knowledge base of rules that represent failure modes and to a database of engineering documentation. These components act together to give an unskilled operator instantaneous expert assistance and access to information that can enable resolution of most anomalies, without the need for highly paid experts. FIDO provides a system state summary (a configurable engineering summary) and documentation for diagnosis of a potentially failing component that might have caused a given error message or anomaly. FIDO also enables high-level browsing of documentation by use of an interface indexed to the particular error message. The collection of available documents includes information on operations and associated procedures, engineering problem reports, documentation of components, and engineering drawings. FIDO also affords a capability for combining information on the state of ground systems with detailed, hierarchically organized, hypertext-enabled documentation.

  3. Quantifying acoustic doppler current profiler discharge uncertainty: A Monte Carlo based tool for moving-boat measurements

    USGS Publications Warehouse

    Mueller, David S.

    2017-01-01

    This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when evaluating the uncertainty of moving-boat ADCP measurements.
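
    The Monte Carlo idea is compact: perturb each error source according to an assumed distribution, recompute the total discharge many times, and read the uncertainty off the spread. The magnitudes and distributions below are illustrative placeholders, not QUant's calibrated error models:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 20_000

    q_measured = 100.0                           # m^3/s, measured portion
    q_top, q_bottom, q_edges = 12.0, 8.0, 5.0    # unmeasured-area estimates

    total = (
        q_measured * (1 + rng.normal(0.0, 0.01, N))     # velocity/bottom-track noise
        + q_top    * (1 + rng.normal(0.0, 0.05, N))     # top extrapolation method
        + q_bottom * (1 + rng.normal(0.0, 0.07, N))     # bottom extrapolation method
        + q_edges  * (1 + rng.uniform(-0.15, 0.15, N))  # edge distance/coefficient
    )

    q_hat = total.mean()
    lo, hi = np.percentile(total, [2.5, 97.5])
    print(f"Q = {q_hat:.1f} m^3/s, 95% interval [{lo:.1f}, {hi:.1f}] m^3/s "
          f"({100 * (hi - lo) / (2 * q_hat):.1f}% relative half-width)")
    ```

    Because each term is simulated separately, the same loop also shows which source dominates: freeze all but one term and the interval shrinks to that term's contribution.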

  4. Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods

    NASA Astrophysics Data System (ADS)

    Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David

    2015-11-01

    Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break-up, free-surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be easily accounted for. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentration. In this work, we consider the other side of the trade-off: moving particles based on reducing numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
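
    A one-dimensional toy version of the idea, not the paper's molecular-dynamics-derived method: move quadrature nodes so that a curvature-based error surrogate is equidistributed, then compare the trapezoid-rule error against uniformly spaced nodes:

    ```python
    import numpy as np
    from math import erf, pi, sqrt

    a, c = 50.0, 0.5
    f = lambda x: np.exp(-a * (x - c) ** 2)                    # peaked integrand
    fpp = lambda x: (4 * a**2 * (x - c) ** 2 - 2 * a) * f(x)   # exact f''
    exact = sqrt(pi / a) * erf(sqrt(a) * 0.5)                  # integral on [0, 1]

    def trapezoid(x):
        y = f(x)
        return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

    n = 20
    uniform = np.linspace(0.0, 1.0, n)

    # Equidistribute the local trapezoid error surrogate |f''|^(1/3): place
    # nodes at equal increments of the surrogate's cumulative integral.
    grid = np.linspace(0.0, 1.0, 2001)
    m = np.abs(fpp(grid)) ** (1.0 / 3.0) + 1e-3    # floor keeps density positive
    cdf = np.concatenate(([0.0], np.cumsum((m[1:] + m[:-1]) / 2 * np.diff(grid))))
    adapted = np.interp(np.linspace(0.0, cdf[-1], n), cdf, grid)

    print(f"uniform nodes: error = {abs(trapezoid(uniform) - exact):.2e}")
    print(f"adapted nodes: error = {abs(trapezoid(adapted) - exact):.2e}")
    ```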

  5. Human error in airway facilities.

    DOT National Transportation Integrated Search

    2001-01-01

    This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being : passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. : Human factors engin...

  6. Influence of Forecast Accuracy of Photovoltaic Power Output on Facility Planning and Operation of Microgrid under 30 min Power Balancing Control

    NASA Astrophysics Data System (ADS)

    Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo

    A microgrid (MG) is one measure for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). To construct a MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model based on demonstration studies of a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as the RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation without forecast error in PVS power output. The influence of forecast error on the received grid electricity is not significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facilities and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are comparable, each amounting to a few percent, implying that forecast accuracy should be improved in terms of both the frequency of large forecast errors and the average error.

  7. Personnel reliability impact on petrochemical facilities monitoring system's failure skipping probability

    NASA Astrophysics Data System (ADS)

    Kostyukov, V. N.; Naumenko, A. P.

    2017-08-01

    The paper addresses the urgent issue of evaluating how the actions of operators of complex technological systems affect safe operation, with particular attention to condition monitoring systems for elements and subsystems of petrochemical production facilities. The main task of the research is to identify factors and criteria describing monitoring system properties that allow the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems to be evaluated, and to find objective criteria for classifying monitoring systems that take the human factor into account. On the basis of the real-time condition monitoring concepts of sudden-failure skipping risk and of static and dynamic error, one may evaluate the impact of personnel qualification on monitoring system operation in terms of errors made by personnel or operators while receiving information from monitoring systems and operating a technological system. The operator is considered part of the technological system, with behavior usually described as a combination of the following stages: input signal (information perception), reaction (decision making), and response (decision implementation). Data on operator reliability were selected for the analysis from studies of nuclear power station operators in the USA, Italy, and other countries, as well as from research by Russian scientists. The calculations revealed that, for the monitoring system selected as an example, with set values of static error (less than 0.01) and dynamic error (less than 0.001) and considering all related factors of information perception, decision making, and reaction, the failure-skipping risk is 0.037; when all the facilities and the error probability are under control, it is not more than 0.027. When only pump and compressor units are under control, the failure-skipping risk is not more than 0.022, with a probability of operator error of not more than 0.011. The results show that operator reliability can be assessed in almost any kind of production, but only with respect to technological capabilities, since operators' psychological and general training vary considerably across industries. Using current techniques of engineering psychology and the design of data support, situation assessment, and decision-making systems, together with advances in condition monitoring across industries, the probability of skipping a hazardous condition can be evaluated with allowance for static and dynamic errors and the human factor.

  8. Spinal intra-operative three-dimensional navigation with infra-red tool tracking: correlation between clinical and absolute engineering accuracy

    NASA Astrophysics Data System (ADS)

    Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but it has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on the 2 mm grade than on the Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2 mm grade, and was significantly greater among radiologist than surgeon raters. Mean absolute translational/angular accuracies were 1.75 mm/3.13° and 1.20 mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting the screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on postoperative imaging, if reported, may be more reliable if performed by multiple radiologist raters.

  9. Human Factors Directions for Civil Aviation

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.

    2002-01-01

    Despite considerable progress in understanding human capabilities and limitations, incorporating human factors into aircraft design, operation, and certification, and the emergence of new technologies designed to reduce workload and enhance human performance in the system, most aviation accidents still involve human errors. Such errors occur as a direct or indirect result of untimely, inappropriate, or erroneous actions (or inactions) by apparently well-trained and experienced pilots, controllers, and maintainers. The field of human factors has solved many of the more tractable problems related to simple ergonomics, cockpit layout, symbology, and so on. We have learned much about the relationships between people and machines, but know less about how to form successful partnerships between humans and the information technologies that are beginning to play a central role in aviation. Significant changes envisioned in the structure of the airspace, pilots and controllers' roles and responsibilities, and air/ground technologies will require a similarly significant investment in human factors during the next few decades to ensure the effective integration of pilots, controllers, dispatchers, and maintainers into the new system. Many of the topics that will be addressed are not new because progress in crucial areas, such as eliminating human error, has been slow. A multidisciplinary approach that capitalizes upon human studies and new classes of information, computational models, intelligent analytical tools, and close collaborations with organizations that build, operate, and regulate aviation technology will ensure that the field of human factors meets the challenge.

  10. Advanced Outage and Control Center: Strategies for Nuclear Plant Outage Work Status Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory Weatherby

    The research effort is a part of the Light Water Reactor Sustainability (LWRS) Program. LWRS is a research and development program sponsored by the Department of Energy, performed in close collaboration with industry to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS Program serves to help the US nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The Outage Control Center (OCC) Pilot Project was directed at carrying out the applied research for development and piloting of technology designed to enhance safe outage and maintenance operations, improve human performance and reliability, increase overall operational efficiency, and improve plant status control. Plant outage management is a high-priority concern for the nuclear industry from cost and safety perspectives. Unfortunately, many of the underlying technologies supporting outage control are the same as those used in the 1980s. They depend heavily upon large teams of staff, multiple work and coordination locations, and manual administrative actions that require large amounts of paper. Previous work in human reliability analysis suggests that many repetitive tasks, including paperwork tasks, may have a failure rate of 1.0E-3 or higher (Gertman, 1996). With between 10,000 and 45,000 subtasks being performed during an outage (Gomes, 1996), the opportunity for human error of some consequence is a realistic concern. Although a number of factors exist that can make these errors recoverable, reducing and effectively coordinating the sheer number of tasks to be performed, particularly those that are error prone, has the potential to enhance outage efficiency and safety. Additionally, outage management requires precise coordination of work groups that do not always share similar objectives. Outage managers are concerned with schedule and cost, union workers are concerned with performing work that is commensurate with their trade, and support functions (safety, quality assurance, radiological controls, etc.) are concerned with performing the work within the plant's controls and procedures. Approaches to outage management should be designed to increase the active participation of work groups and managers in making decisions that close the gap between competing objectives and the potential for error and process inefficiency.

  11. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays

    PubMed Central

    Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok

    2016-01-01

    The zero-potential scanning circuit is widely used as a read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type is a row driver using digital buffers. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error which originates from the on-resistance of the digital buffers used in the row driver. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array and therefore severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver which uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, remarkably reduced from the 30.7% of its previous counterpart. PMID:26821029

  12. The link evaluation terminal for the advanced communications technology satellite experiments program

    NASA Technical Reports Server (NTRS)

    May, Brian D.

    1992-01-01

    The experimental NASA satellite, the Advanced Communications Technology Satellite (ACTS), introduces new technology for high-throughput 30/20 GHz satellite services. Contained in a single communications payload are both a regenerative TDMA system and multiple 800 MHz 'bent pipe' channels routed to spot beams by a switch matrix. While only one mode of operation is typical during any experiment, both modes can operate simultaneously with reduced capability due to sharing of the transponder. NASA-Lewis instituted a ground terminal development program in anticipation of the satellite launch to verify the performance of the switch matrix mode of operations. Specific functions are built into the ground terminal to evaluate rain fade compensation with uplink power control and to monitor satellite transponder performance with bit error rate measurements. These functions were the genesis of the ground terminal's name, Link Evaluation Terminal, often referred to as LET. Connectors are included in LET that allow independent experimenters to run unique modulation or network experiments through ACTS using only the RF transmit and receive portions of LET. Test data indicate that LET will be able to verify important parts of ACTS technology and provide independent experimenters with a useful ground terminal. Laboratory measurements of major subsystems integrated into LET are presented. Bit error rate is measured with LET in an internal loopback mode.

  13. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during an operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. A statistical significance of the modeling error signal to provide an error significance is calculated, and a persistence of the error significance is determined. A structural anomaly is indicated, if the persistence exceeds a persistence threshold value.
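
    The detection logic lends itself to a compact sketch; the significance threshold and persistence limit below are illustrative values, not the patent's:

    ```python
    import numpy as np

    Z_THRESHOLD = 3.0        # modeling-error significance: |residual| > 3 sigma
    PERSISTENCE_LIMIT = 5    # consecutive significant samples before flagging

    def detect(measurements, expected, sigma):
        """Return the sample index at which a structural anomaly is declared,
        or None: residual significance must persist past the threshold."""
        persistence = 0
        for k, (m, e) in enumerate(zip(measurements, expected)):
            z = abs(m - e) / sigma               # significance of modeling error
            persistence = persistence + 1 if z > Z_THRESHOLD else 0
            if persistence >= PERSISTENCE_LIMIT:
                return k
        return None

    rng = np.random.default_rng(1)
    expected = np.zeros(200)                     # model: no structural response
    measured = rng.normal(0.0, 1.0, 200)         # sensor noise, sigma = 1
    measured[120:] += 6.0                        # injected structural change
    print("anomaly declared at sample", detect(measured, expected, sigma=1.0))
    ```

    The persistence test is what keeps isolated noise spikes from tripping the alarm while a sustained shift is still caught within a few samples.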

  14. Entanglement renormalization, quantum error correction, and bulk causality

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.; Kastoryano, Michael J.

    2017-04-01

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  15. Near field communications technology and the potential to reduce medication errors through multidisciplinary application

    PubMed Central

    Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J.; Tabirca, Sabin; O’Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Background Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. Methods An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. Results A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale, with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). Conclusions An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment. PMID:28293602

  16. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    PubMed

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users (rated on a seven-point scale, with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  17. Performance of Thermal Mass Flow Meters in a Variable Gravitational Environment

    NASA Technical Reports Server (NTRS)

    Brooker, John E.; Ruff, Gary A.

    2004-01-01

    The performance of five thermal mass flow meters, MKS Instruments 179A and 258C, Unit Instruments UFM-8100, Sierra Instruments 830L, and Hastings Instruments HFM-200, were tested on the KC-135 Reduced Gravity Aircraft in orthogonal, coparallel, and counterparallel orientations relative to gravity. Data was taken throughout the parabolic trajectory where the g-level varied from 0.01 to 1.8 times normal gravity. Each meter was calibrated in normal gravity in the orthogonal position prior to flight followed by ground testing at seven different flow conditions to establish a baseline operation. During the tests, the actual flow rate was measured independently using choked-flow orifices. Gravitational acceleration and attitude had a unique effect on the performance of each meter. All meters operated within acceptable limits at all gravity levels in the calibrated orthogonal position. However, when operated in other orientations, the deviations from the reference flow became substantial for several of the flow meters. Data analysis indicated that the greatest source of error was the effect of orientation, followed by the gravity level. This work emphasized that when operating thermal flow meters in a variable gravity environment, it is critical to orient the meter in the same direction relative to gravity in which it was calibrated. Unfortunately, there was no test in normal gravity that could predict the performance of a meter in reduced gravity. When operating in reduced gravity, all meters indicated within 5 percent of the full scale reading at all flow conditions and orientations.

  18. Vacuum System Upgrade for Extended Q-Range Small-Angle Neutron Scattering Diffractometer (EQ-SANS) at SNS

    DOE PAGES

    Stone, Christopher M.; Williams, Derrick C.; Price, Jeremy P.

    2016-09-23

    The Extended Q-Range Small-Angle Neutron Scattering Diffractometer (EQ-SANS) instrument at the Spallation Neutron Source (SNS), Oak Ridge, Tennessee, incorporates a 69 m³ detector vessel with a vacuum system which required an upgrade with respect to performance, ease of operation, and maintenance. The upgrade focused on improving pumping performance as well as optimizing system design to minimize opportunity for operational error. This upgrade provided the following practical contributions: it reduced the time required to evacuate from atmospheric pressure to 2 mTorr from 500-1,000 minutes to 60-70 minutes, and it provided turn-key automated control with a multi-faceted interlock for personnel and machine safety.

  19. Vacuum System Upgrade for Extended Q-Range Small-Angle Neutron Scattering Diffractometer (EQ-SANS) at SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, Christopher M.; Williams, Derrick C.; Price, Jeremy P.

    The Extended Q-Range Small-Angle Neutron Scattering Diffractometer (EQ-SANS) instrument at the Spallation Neutron Source (SNS), Oak Ridge, Tennessee, incorporates a 69 m³ detector vessel with a vacuum system which required an upgrade with respect to performance, ease of operation, and maintenance. The upgrade focused on improving pumping performance as well as optimizing system design to minimize opportunity for operational error. This upgrade provided the following practical contributions: it reduced the time required to evacuate from atmospheric pressure to 2 mTorr from 500-1,000 minutes to 60-70 minutes, and it provided turn-key automated control with a multi-faceted interlock for personnel and machine safety.

  20. Artificial Intelligence Based Optimization for the Se(IV) Removal from Aqueous Solution by Reduced Graphene Oxide-Supported Nanoscale Zero-Valent Iron Composites

    PubMed Central

    Cao, Rensheng; Ruan, Wenqian; Wu, Xianliang; Wei, Xionghui

    2018-01-01

    Highly promising artificial intelligence tools, including neural network (ANN), genetic algorithm (GA) and particle swarm optimization (PSO), were applied in the present study to develop an approach for the evaluation of Se(IV) removal from aqueous solutions by reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites. Both GA and PSO were used to optimize the parameters of ANN. The effect of operational parameters (i.e., initial pH, temperature, contact time and initial Se(IV) concentration) on the removal efficiency was examined using response surface methodology (RSM), which was also utilized to obtain a dataset for the ANN training. The ANN-GA model results (with a prediction error of 2.88%) showed a better agreement with the experimental data than the ANN-PSO model results (with a prediction error of 4.63%) and the RSM model results (with a prediction error of 5.56%), thus the ANN-GA model was an ideal choice for modeling and optimizing the Se(IV) removal by the nZVI/rGO composites due to its low prediction error. The analysis of the experimental data illustrates that the removal process of Se(IV) obeyed the Langmuir isotherm and the pseudo-second-order kinetic model. Furthermore, the Se 3d and 3p peaks found in XPS spectra for the nZVI/rGO composites after removing treatment illustrates that the removal of Se(IV) was mainly through the adsorption and reduction mechanisms. PMID:29543753
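
    The ANN-GA workflow can be sketched end to end on synthetic data: fit a small network to (conditions, removal efficiency) pairs, then let a genetic algorithm search the input space for the conditions the network scores highest. Everything below (the response surface, network size, and GA settings) is an illustrative stand-in for the paper's fitted models:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic design data: columns are [pH, temperature C, time min, conc mg/L]
    lo_b = np.array([2.0, 15.0, 10.0, 10.0])
    hi_b = np.array([10.0, 45.0, 120.0, 100.0])
    X = rng.uniform(lo_b, hi_b, size=(300, 4))

    def true_eff(x):   # placeholder response surface peaked at mid-range conditions
        opt = np.array([5.0, 30.0, 90.0, 30.0])
        scale = np.array([3.0, 12.0, 50.0, 40.0])
        return 95.0 * np.exp(-np.sum(((x - opt) / scale) ** 2, axis=-1))

    y = (true_eff(X) + rng.normal(0.0, 1.0, len(X))) / 100.0   # efficiency ~ [0, 1]
    Xs = (X - lo_b) / (hi_b - lo_b)                            # inputs -> [0, 1]

    # One-hidden-layer network trained by plain gradient descent on 0.5*MSE.
    W1, b1 = rng.normal(0, 0.5, (4, 16)), np.zeros(16)
    W2, b2 = rng.normal(0, 0.5, 16), 0.0

    def predict(Z):
        h = np.tanh(Z @ W1 + b1)
        return h @ W2 + b2, h

    lr = 0.1
    for _ in range(10_000):
        p, h = predict(Xs)
        g = (p - y) / len(y)                   # gradient w.r.t. predictions
        gh = np.outer(g, W2) * (1.0 - h**2)    # backprop through tanh
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
        W1 -= lr * (Xs.T @ gh); b1 -= lr * gh.sum(0)

    # Simple genetic algorithm over the scaled input box [0, 1]^4.
    pop = rng.uniform(0.0, 1.0, (40, 4))
    for _ in range(60):
        fitness = predict(pop)[0]
        parents = pop[np.argsort(fitness)[-20:]]          # keep the best half
        cut = rng.integers(1, 4, size=20)[:, None]
        kids = np.where(np.arange(4) < cut, parents, parents[::-1])  # crossover
        kids = kids + rng.normal(0.0, 0.05, kids.shape)   # mutation
        pop = np.clip(np.vstack([parents, kids]), 0.0, 1.0)

    best = pop[np.argmax(predict(pop)[0])]
    print("suggested conditions [pH, T, t, C0]:",
          np.round(lo_b + best * (hi_b - lo_b), 1))
    print(f"predicted removal efficiency: {100 * predict(best)[0]:.1f}%")
    ```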

  1. Artificial Intelligence Based Optimization for the Se(IV) Removal from Aqueous Solution by Reduced Graphene Oxide-Supported Nanoscale Zero-Valent Iron Composites.

    PubMed

    Cao, Rensheng; Fan, Mingyi; Hu, Jiwei; Ruan, Wenqian; Wu, Xianliang; Wei, Xionghui

    2018-03-15

    Highly promising artificial intelligence tools, including neural network (ANN), genetic algorithm (GA) and particle swarm optimization (PSO), were applied in the present study to develop an approach for the evaluation of Se(IV) removal from aqueous solutions by reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites. Both GA and PSO were used to optimize the parameters of ANN. The effect of operational parameters (i.e., initial pH, temperature, contact time and initial Se(IV) concentration) on the removal efficiency was examined using response surface methodology (RSM), which was also utilized to obtain a dataset for the ANN training. The ANN-GA model results (with a prediction error of 2.88%) showed a better agreement with the experimental data than the ANN-PSO model results (with a prediction error of 4.63%) and the RSM model results (with a prediction error of 5.56%), thus the ANN-GA model was an ideal choice for modeling and optimizing the Se(IV) removal by the nZVI/rGO composites due to its low prediction error. The analysis of the experimental data illustrates that the removal process of Se(IV) obeyed the Langmuir isotherm and the pseudo-second-order kinetic model. Furthermore, the Se 3d and 3p peaks found in XPS spectra for the nZVI/rGO composites after removing treatment illustrates that the removal of Se(IV) was mainly through the adsorption and reduction mechanisms.

  2. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.

  3. Stray magnetic-field response of linear birefringent optical current sensors

    NASA Astrophysics Data System (ADS)

    MacDougall, Trevor W.; Hutchinson, Ted F.

    1995-07-01

    It is well known that the line integral, describing Faraday rotation in an optical medium, reduces to zero at low frequencies for a closed path that does not encircle a current source. If the closed optical path possesses linear birefringence in addition to Faraday rotation, the cumulative effects on the state of polarization result in a response to externally located current-carrying conductors. This effect can induce a measurable error of the order of 0.3% during certain steady-state operating conditions.
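
    The zero-response property invoked above is a direct consequence of Ampère's law. In the ideal, birefringence-free case, the Faraday rotation accumulated along a closed sensing path C is, with V the Verdet constant (a standard textbook relation, not a formula quoted from this record),

        \theta_F = V \oint_C \mathbf{H} \cdot d\mathbf{l} = V \, I_{\mathrm{enc}}

    so a closed path that does not enclose a conductor has I_enc = 0 and hence no net rotation. Residual linear birefringence breaks this exact cancellation, which is the origin of the stray-field error quantified above.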

  4. Culture and error in space: implications from analog environments.

    PubMed

    Helmreich, R L

    2000-09-01

    An ongoing study investigating national, organizational, and professional cultures in aviation and medicine is described. Survey data from 26 nations on 5 continents show highly significant national differences regarding appropriate relationships between leaders and followers, in group vs. individual orientation, and in values regarding adherence to rules and procedures. These findings replicate earlier research on dimensions of national culture. Data collected also isolate significant operational issues in multi-national flight crews. While there are no better or worse cultures, these cultural differences have operational implications for the way crews function in an international space environment. The positive professional cultures of pilots and physicians exhibit a high enjoyment of the job and professional pride. However, a negative component was also identified characterized by a sense of personal invulnerability regarding the effects of stress and fatigue on performance. This misperception of personal invulnerability has operational implications such as failures in teamwork and increased probability of error. A second component of the research examines team error in operational environments. From observational data collected during normal flight operations, new models of threat and error and their management were developed that can be generalized to operations in space and other socio-technological domains. Five categories of crew error are defined and their relationship to training programs in team performance, known generically as Crew Resource Management, is described. The relevance of these data for future spaceflight is discussed.

  5. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization on a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of the maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
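
    The marginalization idea can be illustrated with a toy linear-Gaussian inversion: instead of freezing one observation-error covariance, plausible error variances are Monte Carlo sampled, each is weighted by its marginal data likelihood (the maximum-likelihood-style diagnosis mentioned above), and the resulting posteriors are averaged. This is a minimal numpy sketch with invented dimensions and priors, marginalizing only a scalar observation-error variance for brevity; it is not the authors' operational system.

        import numpy as np

        rng = np.random.default_rng(1)

        H = rng.uniform(0, 1, (12, 5))        # toy observation operator (transport)
        x_true = rng.normal(1.0, 0.3, 5)      # "true" fluxes
        y = H @ x_true + rng.normal(0, 0.2, 12)

        x_b = np.ones(5)                      # prior fluxes
        B = 0.25 * np.eye(5)                  # prior-error covariance

        def posterior_mean(r):
            """Standard Gaussian Bayesian update for obs-error variance r."""
            R = r * np.eye(len(y))
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
            return x_b + K @ (y - H @ x_b)

        def log_marginal_likelihood(r):
            """log p(y | r) for the linear-Gaussian model; used as a sampling weight."""
            S = H @ B @ H.T + r * np.eye(len(y))
            d = y - H @ x_b
            sign, logdet = np.linalg.slogdet(S)
            return -0.5 * (logdet + d @ np.linalg.solve(S, d))

        # Marginalize over a set of plausible observation-error variances.
        r_samples = rng.uniform(0.01, 0.5, 500)
        logw = np.array([log_marginal_likelihood(r) for r in r_samples])
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x_marginal = sum(wi * posterior_mean(ri) for wi, ri in zip(w, r_samples))
        print("marginalized posterior fluxes:", np.round(x_marginal, 3))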

  6. (abstract) Mission Operations and Control Assurance: Flight Operations Quality Improvements

    NASA Technical Reports Server (NTRS)

    Welz, Linda L.; Bruno, Kristin J.; Kazz, Sheri L.; Witkowski, Mona M.

    1993-01-01

    Mission Operations and Command Assurance (MO&CA), a recent addition to flight operations teams at JPL, provides a system level function to instill quality in mission operations. MO&CA's primary goal at JPL is to help improve the operational reliability for projects during flight. MO&CA tasks include early detection and correction of process design and procedural deficiencies within projects. Early detection and correction are essential during development of operational procedures and training of operational teams. MO&CA's effort focuses directly on reducing the probability of radiating incorrect commands to a spacecraft. Over the last seven years at JPL, MO&CA has become a valuable asset to JPL flight projects. JPL flight projects have benefited significantly from MO&CA's efforts to contain risk and prevent rather than rework errors. MO&CA's ability to provide direct transfer of knowledge allows new projects to benefit directly from previous and ongoing experience. Since MO&CA, like Total Quality Management (TQM), focuses on continuous improvement of processes and elimination of rework, we recommend that this effort be continued on NASA flight projects.

  7. Comparing taxi clearance input layouts for advancements in flight deck automation for surface operations

    NASA Astrophysics Data System (ADS)

    Cheng, Lara W. S.

    Airport moving maps (AMMs) have been shown to decrease navigation errors, increase taxiing speed, and reduce workload when they depict airport layout, current aircraft position, and the cleared taxi route. However, current technologies are limited in their ability to depict the cleared taxi route due to the unavailability of datacomm or other means of electronically transmitting clearances from ATC to the flight deck. This study examined methods by which pilots can input ATC-issued taxi clearances to support taxi route depictions on the AMM. Sixteen general aviation (GA) pilots used a touchscreen monitor to input taxi clearances using two input layouts, softkeys and QWERTY, each with and without feedforward (graying out invalid inputs). QWERTY yielded more taxi route input errors than the softkeys layout. The presence of feedforward did not produce fewer taxi route input errors than in the non-feedforward condition. The QWERTY layout did reduce taxi clearance input times relative to the softkeys layout, but when feedforward was present this effect was observed only for the longer, 6-segment taxi clearances. It was observed that with the softkeys layout, feedforward reduced input times compared to non-feedforward but only for the 4-segment clearances. Feedforward did not support faster taxi clearance input times for the QWERTY layout. Based on the results and analyses of the present study, it is concluded that for taxi clearance inputs, (1) QWERTY remain the standard for alphanumeric inputs, and (2) feedforward be investigated further, with a focus on participant preference and performance of black-gray contrast of keys.

  8. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    NASA Technical Reports Server (NTRS)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever- present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  9. Two-Volt Josephson Arbitrary Waveform Synthesizer Using Wilkinson Dividers.

    PubMed

    Flowers-Jacobs, Nathan E; Fox, Anna E; Dresselhaus, Paul D; Schwall, Robert E; Benz, Samuel P

    2016-09-01

    The root-mean-square (rms) output voltage of the NIST Josephson arbitrary waveform synthesizer (JAWS) has been doubled from 1 V to a record 2 V by combining two new 1 V chips on a cryocooler. This higher voltage will improve calibrations of ac thermal voltage converters and precision voltage measurements that require state-of-the-art quantum accuracy, stability, and signal-to-noise ratio. We achieved this increase in output voltage by using four on-chip Wilkinson dividers and eight inner-outer dc blocks, which enable biasing of eight Josephson junction (JJ) arrays with high-speed inputs from only four high-speed pulse generator channels. This approach halves the number of pulse generator channels required in future JAWS systems. We also implemented on-chip superconducting interconnects between JJ arrays, which reduces systematic errors and enables a new modular chip package. Finally, we demonstrate a new technique for measuring and visualizing the operating current range that reduces the measurement time by almost two orders of magnitude and reveals the relationship between distortion in the output spectrum and output pulse sequence errors.

  10. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look-up table interpolation can reduce the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
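
    The trade-off described above (a dense look-up table versus a trained network emulating the RTM) can be sketched with a cheap scalar function standing in for VLIDORT. The sketch below assumes SciPy and scikit-learn are available and uses invented angle grids; it illustrates the comparison, not the operational code.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator
        from sklearn.neural_network import MLPRegressor

        def rtm(sza, vza):
            """Cheap stand-in for an expensive radiative transfer call."""
            return np.exp(-1.0 / np.cos(np.radians(sza))) * (1 + 0.1 * np.cos(np.radians(vza)))

        # Coarse look-up table over solar/viewing zenith angles.
        sza_nodes = np.linspace(0, 70, 8)
        vza_nodes = np.linspace(0, 60, 8)
        table = rtm(sza_nodes[:, None], vza_nodes[None, :])
        lut = RegularGridInterpolator((sza_nodes, vza_nodes), table)

        # Neural network trained on scattered samples of the same function.
        rng = np.random.default_rng(2)
        Xtr = rng.uniform([0, 0], [70, 60], (2000, 2))
        ytr = rtm(Xtr[:, 0], Xtr[:, 1])
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                           random_state=0).fit(Xtr, ytr)

        Xte = rng.uniform([0, 0], [70, 60], (500, 2))
        yte = rtm(Xte[:, 0], Xte[:, 1])
        print("LUT max error:", np.abs(lut(Xte) - yte).max())
        print("NN  max error:", np.abs(net.predict(Xte) - yte).max())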

  11. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    PubMed

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

    Computerized prescription of drugs is expected to reduce the number of many preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, on both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining on the CDOE-using department were due to handwriting on the typed order, failure to feed patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes performed in the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.

  12. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  13. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
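
    The error-reduction strategy described above (add nodes where the boundary error is largest, then re-solve) is a generic adaptive-refinement loop. The following is a minimal one-dimensional illustration in which piecewise-linear interpolation stands in for the CVBEM solve and a known function stands in for the boundary condition values; it conveys the algorithmic pattern only, not the CVBEM itself.

        import numpy as np

        f = lambda s: np.sin(3 * s) + 0.3 * s          # known boundary values
        nodes = np.linspace(0.0, np.pi, 4).tolist()    # initial coarse nodes
        check = np.linspace(0.0, np.pi, 400)           # points where error is measured

        for _ in range(12):
            approx = np.interp(check, nodes, f(np.array(nodes)))
            err = np.abs(approx - f(check))
            worst = check[err.argmax()]                   # region of largest error
            nodes = sorted(set(nodes) | {float(worst)})   # insert a node there
        print("nodes:", len(nodes), "max error:", err.max())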

  14. Fault-tolerance thresholds for the surface code with fabrication errors

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2017-10-01

    The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a nontrivial topology such that the quantum information is encoded in the global degrees of freedom (i.e., the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors—permanent faulty components such as missing physical qubits or failed entangling gates—introducing permanent defects into the topology of the lattice and hence significantly reducing the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the test bed. A known approach to mitigate defective lattices involves the use of primitive swap gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcome of the defective gauge stabilizer generators without any additional computational overhead or use of swap gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect-matching decoder. Our numerical analysis is most applicable to two-dimensional chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8% qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1%.

  15. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting.

    PubMed

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-11-01

    Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required.

  16. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting

    PubMed Central

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-01-01

    Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806

  17. First-year Analysis of the Operating Room Black Box Study.

    PubMed

    Jung, James J; Jüni, Peter; Lebovic, Gerald; Grantcharov, Teodor

    2018-06-18

    To characterize intraoperative errors, events, and distractions, and measure technical skills of surgeons in minimally invasive surgery practice. Adverse events in the operating room (OR) are common contributors of morbidity and mortality in surgical patients. Adverse events often occur due to deviations in performance and environmental factors. Although comprehensive intraoperative data analysis and transparent disclosure have been advocated to better understand how to improve surgical safety, they have rarely been done. We conducted a prospective cohort study in 132 consecutive patients undergoing elective laparoscopic general surgery at an academic hospital during the first year after the definitive implementation of a multiport data capture system called the OR Black Box to identify intraoperative errors, events, and distractions. Expert analysts characterized intraoperative distractions, errors, and events, and measured trainee involvement as main operator. Technical skills were compared, crude and risk-adjusted, among the attending surgeon and trainees. Auditory distractions occurred a median of 138 times per case [interquartile range (IQR) 96-190]. At least 1 cognitive distraction appeared in 84 cases (64%). Medians of 20 errors (IQR 14-36) and 8 events (IQR 4-12) were identified per case. Both errors and events occurred often in dissection and reconstruction phases of operation. Technical skills of residents were lower than those of the attending surgeon (P = 0.015). During elective laparoscopic operations, frequent intraoperative errors and events, variation in surgeons' technical skills, and a high amount of environmental distractions were identified using the OR Black Box.

  18. Human factors in surgery: from Three Mile Island to the operating room.

    PubMed

    D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco

    2009-01-01

    Human factors is a discipline that encompasses the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of lack of skill or ability, and as the result of thoughtless actions. Moreover, the operating theatre has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons will have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.

  19. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
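
    The degree-escalation procedure described above (start at degree 1 and raise the degree until the least-squares error meets the user's tolerance) is easy to mirror with numpy's polynomial fitting. A sketch, not the original BASIC program: numpy.polyfit is used in place of the orthogonal-polynomial and Stirling-number route, and the data and tolerance are invented.

        import numpy as np

        x = np.arange(0.0, 5.0, 0.25)                  # uniformly spaced data
        y = 0.5 * x**3 - 2 * x + 1 + 0.01 * np.sin(9 * x)
        tol = 0.05                                     # tolerable least-squares error

        for degree in range(1, 101):                   # up to a degree-100 polynomial
            coeffs = np.polyfit(x, y, degree)
            err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
            print(f"degree {degree}: rms error {err:.5f}")
            if err <= tol:
                break
        print("polynomial coefficients (highest power first):", np.round(coeffs, 4))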

  20. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  1. The suitability of remotely sensed soil moisture for improving operational flood forecasting

    NASA Astrophysics Data System (ADS)

    Wanders, N.; Karssenberg, D.; de Roo, A.; de Jong, S. M.; Bierkens, M. F. P.

    2013-11-01

    We evaluate the added value of assimilated remotely sensed soil moisture for the European Flood Awareness System (EFAS) and its potential to improve the prediction of the timing and height of the flood peak and low flows. EFAS is an operational flood forecasting system for Europe and uses a distributed hydrological model for flood predictions with lead times up to 10 days. For this study, satellite-derived soil moisture from ASCAT, AMSR-E and SMOS is assimilated into the EFAS system for the Upper Danube basin and results are compared to assimilation of discharge observations only. To assimilate soil moisture and discharge data into EFAS, an Ensemble Kalman Filter (EnKF) is used. Information on the spatial (cross-) correlation of the errors in the satellite products is included to ensure optimal performance of the EnKF. For the validation, additional discharge observations not used in the EnKF are used as an independent validation dataset. Our results show that the accuracy of flood forecasts is increased when more discharge observations are assimilated; the Mean Absolute Error (MAE) of the ensemble mean is reduced by 65%. The additional inclusion of satellite data results in a further increase of the performance: forecasts of base flows are better and the uncertainty in the overall discharge is reduced, shown by a 10% reduction in the MAE. In addition, floods are predicted with a higher accuracy and the Continuous Ranked Probability Score (CRPS) shows a performance increase of 5-10% on average, compared to assimilation of discharge only. When soil moisture data is used, the timing errors in the flood predictions are decreased especially for shorter lead times and imminent floods can be forecasted with more skill. The number of false flood alerts is reduced when more data is assimilated into the system and the best performance is achieved with the assimilation of both discharge and satellite observations. The additional gain is highest when discharge observations from both upstream and downstream areas are used in combination with the soil moisture data. These results show the potential of remotely sensed soil moisture observations to improve near-real time flood forecasting in large catchments.

  2. The suitability of remotely sensed soil moisture for improving operational flood forecasting

    NASA Astrophysics Data System (ADS)

    Wanders, N.; Karssenberg, D.; de Roo, A.; de Jong, S. M.; Bierkens, M. F. P.

    2014-06-01

    We evaluate the added value of assimilated remotely sensed soil moisture for the European Flood Awareness System (EFAS) and its potential to improve the prediction of the timing and height of the flood peak and low flows. EFAS is an operational flood forecasting system for Europe and uses a distributed hydrological model (LISFLOOD) for flood predictions with lead times of up to 10 days. For this study, satellite-derived soil moisture from ASCAT (Advanced SCATterometer), AMSR-E (Advanced Microwave Scanning Radiometer - Earth Observing System) and SMOS (Soil Moisture and Ocean Salinity) is assimilated into the LISFLOOD model for the Upper Danube Basin and results are compared to assimilation of discharge observations only. To assimilate soil moisture and discharge data into the hydrological model, an ensemble Kalman filter (EnKF) is used. Information on the spatial (cross-) correlation of the errors in the satellite products is included to ensure increased performance of the EnKF. For the validation, additional discharge observations not used in the EnKF are used as an independent validation data set. Our results show that the accuracy of flood forecasts is increased when more discharge observations are assimilated; the mean absolute error (MAE) of the ensemble mean is reduced by 35%. The additional inclusion of satellite data results in a further increase of the performance: forecasts of baseflows are better and the uncertainty in the overall discharge is reduced, shown by a 10% reduction in the MAE. In addition, floods are predicted with a higher accuracy and the continuous ranked probability score (CRPS) shows a performance increase of 5-10% on average, compared to assimilation of discharge only. When soil moisture data is used, the timing errors in the flood predictions are decreased especially for shorter lead times and imminent floods can be forecasted with more skill. The number of false flood alerts is reduced when more observational data is assimilated into the system. The added value of the satellite data is largest when these observations are assimilated in combination with distributed discharge observations. These results show the potential of remotely sensed soil moisture observations to improve near-real time flood forecasting in large catchments.
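
    The EnKF analysis step used in both records above to merge discharge and satellite soil-moisture observations into the model states can be written compactly. The following is the textbook perturbed-observation form with invented state and observation dimensions, not LISFLOOD/EFAS code.

        import numpy as np

        rng = np.random.default_rng(3)

        n_state, n_obs, n_ens = 10, 3, 32
        ens = rng.normal(1.0, 0.2, (n_state, n_ens))   # forecast ensemble (e.g. soil moisture states)
        Hop = np.zeros((n_obs, n_state))
        Hop[[0, 1, 2], [2, 5, 8]] = 1.0                # observe 3 of the 10 states
        R = 0.05 * np.eye(n_obs)                       # obs-error covariance (incl. satellite errors)
        y = np.array([1.1, 0.9, 1.05])                 # observations

        # Ensemble covariance and Kalman gain.
        A = ens - ens.mean(axis=1, keepdims=True)
        P = A @ A.T / (n_ens - 1)
        K = P @ Hop.T @ np.linalg.inv(Hop @ P @ Hop.T + R)

        # Update each member against perturbed observations.
        for i in range(n_ens):
            y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
            ens[:, i] += K @ (y_pert - Hop @ ens[:, i])
        print("analysis mean:", np.round(ens.mean(axis=1), 3))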

  3. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
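
    The core floating-point quantization operation analyzed above can be reproduced by truncating the mantissa of each signal sample to a given number of bits and comparing against full precision. The sketch below demonstrates only that core operation on a bare test signal, not the full control-loop simulation the program performs; the signal and bit widths are invented.

        import numpy as np

        def quantize(x, mantissa_bits):
            """Round the mantissa of each float to the given number of bits."""
            m, e = np.frexp(x)                       # x = m * 2**e with 0.5 <= |m| < 1
            m = np.round(m * 2**mantissa_bits) / 2**mantissa_bits
            return np.ldexp(m, e)

        t = np.linspace(0, 1, 1000)
        signal = np.sin(2 * np.pi * 5 * t)
        for bits in (8, 12, 16, 24):
            err = signal - quantize(signal, bits)
            print(f"{bits:2d}-bit mantissa: rms error {np.sqrt(np.mean(err**2)):.2e}, "
                  f"max error {np.abs(err).max():.2e}")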

  4. Tactile Cueing as a Gravitational Substitute for Spatial Navigation During Parabolic Flight

    NASA Technical Reports Server (NTRS)

    Montgomery, K. L.; Beaton, K. H.; Barba, J. M.; Cackler, J. M.; Son, J. H.; Horsfield, S. P.; Wood, S. J.

    2010-01-01

    INTRODUCTION: Spatial navigation requires an accurate awareness of orientation in your environment. The purpose of this experiment was to examine how spatial awareness was impaired with changing gravitational cues during parabolic flight, and the extent to which vibrotactile feedback of orientation could be used to help improve performance. METHODS: Six subjects were restrained in a chair tilted relative to the plane floor, and placed at random positions during the start of the microgravity phase. Subjects reported their orientation using verbal reports, and used a hand-held controller to point to a desired target location presented using a virtual reality video mask. This task was repeated with and without constant tactile cueing of "down" direction using a belt of 8 tactors placed around the mid-torso. Control measures were obtained during ground testing using both upright and tilted conditions. RESULTS: Perceptual estimates of orientation and pointing accuracy were impaired during microgravity or during rotation about an upright axis in 1g. The amount of error was proportional to the amount of chair displacement. Perceptual errors were reduced during movement about a tilted axis on earth. CONCLUSIONS: Reduced perceptual errors during tilts in 1g indicate the importance of otolith and somatosensory cues for maintaining spatial awareness. Tactile cueing may improve navigation in operational environments or clinical populations, providing a non-visual non-auditory feedback of orientation or desired direction heading.

  5. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced reference image and video quality solutions to detect and classify such faults.

  6. Composite Configuration Interventional Therapy Robot for the Microwave Ablation of Liver Tumors

    NASA Astrophysics Data System (ADS)

    Cao, Ying-Yu; Xue, Long; Qi, Bo-Jin; Jiang, Li-Pei; Deng, Shuang-Cheng; Liang, Ping; Liu, Jia

    2017-11-01

    The existing interventional therapy robots for the microwave ablation of liver tumors have poor clinical applicability: large volume, low positioning speed and complex automatic navigation control. To solve these problems, a composite configuration interventional therapy robot with passive and active joints is developed. The composite configuration reduces the size of the robot while preserving a wide range of movement, and allows rapid positioning with operational safety. The cumulative positioning error is eliminated and the control complexity is reduced by decoupling the active parts. The navigation algorithms for the robot are proposed based on the solution of the inverse kinematics and on geometric analysis. A simulated clinical test method is designed for the robot, and the functions of the robot and the navigation algorithms are verified by the test method. The mean error of navigation is 1.488 mm and the maximum error is 2.056 mm, and the positioning time for the ablation needle is within 10 s. The experimental results show that the designed robot can meet the clinical requirements for the microwave ablation of liver tumors. The composite configuration is proposed in the development of the interventional therapy robot for the microwave ablation of liver tumors, which provides a new idea for the structural design of medical robots.

  7. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles

    PubMed Central

    Wang, Xuan; Liu, Jinghong; Zhou, Qianfei

    2016-01-01

    In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions. PMID:28029145

  8. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles.

    PubMed

    Wang, Xuan; Liu, Jinghong; Zhou, Qianfei

    2016-12-25

    In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
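
    The RLS filter used in both records above to refine location estimates across multiple images follows the standard recursion: a gain vector is computed from the inverse-correlation matrix, the weights are corrected by the prediction error, and the matrix is updated with a forgetting factor. A generic scalar-output sketch with an assumed forgetting factor and synthetic data, not the flight code:

        import numpy as np

        rng = np.random.default_rng(4)

        lam = 0.98                       # forgetting factor (assumed)
        n = 3                            # regressor length
        w = np.zeros(n)                  # filter weights
        P = 1e3 * np.eye(n)              # inverse-correlation matrix

        w_true = np.array([0.5, -1.2, 2.0])
        for _ in range(500):
            a = rng.normal(0, 1, n)                  # input vector (e.g. per-image measurements)
            d = a @ w_true + rng.normal(0, 0.05)     # noisy desired output
            k = P @ a / (lam + a @ P @ a)            # gain vector
            w = w + k * (d - a @ w)                  # weight update
            P = (P - np.outer(k, a) @ P) / lam       # covariance update
        print("estimated weights:", np.round(w, 3))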

  9. Errors Affect Hypothetical Intertemporal Food Choice in Women

    PubMed Central

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2014-01-01

    Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534

  10. The Cut-Score Operating Function: A New Tool to Aid in Standard Setting

    ERIC Educational Resources Information Center

    Grabovsky, Irina; Wainer, Howard

    2017-01-01

    In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss…

  11. Identification of Error Types in Preservice Teachers' Attempts to Create Fraction Story Problems for Specified Operations

    ERIC Educational Resources Information Center

    McAllister, Cheryl J.; Beaver, Cheryl

    2012-01-01

    The purpose of this research was to determine if recognizable error types exist in the work of preservice teachers required to create story problems for specific fraction operations. Students were given a particular single-operation fraction expression and asked to do the calculation and then create a story problem that would require the use of…

  12. Mathemagical Computing: Order of Operations and New Software.

    ERIC Educational Resources Information Center

    Ecker, Michael W.

    1989-01-01

    Describes mathematical problems which occur when using the computer as a calculator. Considers errors in BASIC calculation and the order of mathematical operations. Identifies errors in spreadsheet and calculator programs. Comments on sorting programs and provides a source for Mathemagical Black Holes. (MVL)
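
    The class of order-of-operations surprises the article catalogs in BASIC-era software is easy to demonstrate in any language; a few illustrative Python lines (chosen here for convenience, not drawn from the article):

        # Unary minus binds more loosely than exponentiation:
        print(-3 ** 2)        # -9, not 9: parsed as -(3 ** 2)
        print((-3) ** 2)      # 9

        # Multiplication before addition, regardless of typing order:
        print(2 + 3 * 4)      # 14, not 20

        # Binary floating point cannot represent 0.1 exactly:
        print(0.1 + 0.2 == 0.3)          # False
        print(abs((0.1 + 0.2) - 0.3))    # tiny rounding residue, ~5.5e-17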

  13. Relationship of employee attitudes and supervisor-controller ratio to en route operational error rates : final report.

    DOT National Transportation Integrated Search

    2002-05-01

    An operational error (OE) results when an air traffic control specialist (ATCS) fails to maintain appropriate separation between aircraft, obstacles, etc. Recent research on OEs has focused on situational and individual characteristics (Center for Na...

  14. Master-slave control with trajectory planning and Bouc-Wen model for tracking control of piezo-driven stage

    NASA Astrophysics Data System (ADS)

    Lu, Xiaojun; Liu, Changli; Chen, Lei

    2018-04-01

    In this paper, a redundant Piezo-driven stage with a 3RRR compliant mechanism is introduced, and we propose the master-slave control with trajectory planning (MSCTP) strategy together with a Bouc-Wen model to improve its micro-motion tracking performance. The advantage of the proposed controller is that its implementation requires only a simple control strategy, rather than complex modeling, to avoid the master PEA's tracking error. The dynamic model of the slave PEA system with Bouc-Wen hysteresis is established and identified via a particle swarm optimization (PSO) approach. The Piezo-driven stage with operating periods T = 1 s and 2 s is implemented to track a prescribed circle. The simulation results show that MSCTP with the Bouc-Wen model reduces the trajectory tracking errors to the range of the accuracy of our available measurement.
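
    The Bouc-Wen hysteresis term identified via PSO above can be simulated directly by integrating its differential equation. A minimal forward-Euler sketch with illustrative parameter values (the paper's identified values are not reproduced here):

        import numpy as np

        # Bouc-Wen hysteresis: dz/dt = A*dx - beta*|dx|*|z|**(n-1)*z - gamma*dx*|z|**n
        A, beta, gamma, n = 1.0, 0.5, 0.3, 1.0     # illustrative parameters, not identified values
        dt = 1e-3
        t = np.arange(0, 2.0, dt)                  # two 1 s operating periods
        x = 5e-6 * np.sin(2 * np.pi * t)           # commanded displacement (T = 1 s)

        z = np.zeros_like(t)                       # hysteretic state
        for k in range(1, len(t)):
            dx = (x[k] - x[k - 1]) / dt
            dz = (A * dx - beta * abs(dx) * abs(z[k - 1]) ** (n - 1) * z[k - 1]
                  - gamma * dx * abs(z[k - 1]) ** n)
            z[k] = z[k - 1] + dz * dt
        print("peak hysteretic state:", z.max())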

  15. Ultrasonic density measurement cell design and simulation of non-ideal effects.

    PubMed

    Higuti, Ricardo Tokio; Buiochi, Flávio; Adamowski, Júlio Cezar; de Espinosa, Francisco Montero

    2006-07-01

    This paper presents a theoretical analysis of a density measurement cell using a one-dimensional model composed of acoustic and electroacoustic transmission lines in order to simulate non-ideal effects. The model is implemented using matrix operations and is used to design the cell considering its geometry, the materials used in the sensor assembly, the range of liquid sample properties and the signal analysis techniques. The sensor performance in non-ideal conditions is studied, considering the thicknesses of the adhesive and metallization layers, and the effect of liquid-sample residue that can deposit on the sample chamber surfaces. These layers are taken into account in the model, and their effects are compensated to reduce the error in the density measurement. The results show the contribution of the residue layer thickness to the density error and its behavior when two signal analysis methods are used.

  16. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next-generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.

  17. Navigating towards improved surgical safety using aviation-based strategies.

    PubMed

    Kao, Lillian S; Thomas, Eric J

    2008-04-01

    Safety practices in the aviation industry are being increasingly adapted to healthcare in an effort to reduce medical errors and patient harm. However, caution should be applied in embracing these practices because of limited experience in surgical disciplines, lack of rigorous research linking these practices to outcome, and fundamental differences between the two industries. Surgeons should have an in-depth understanding of the principles and data supporting aviation-based safety strategies before routinely adopting them. This paper serves as a review of strategies adapted to improve surgical safety, including the following: implementation of crew resource management in training operative teams; incorporation of simulation in training of technical and nontechnical skills; and analysis of contributory factors to errors using surveys, behavioral marker systems, human factors analysis, and incident reporting. Avenues and challenges for future research are also discussed.

  18. A superconducting gravity gradiometer for measurements from a moving vehicle.

    PubMed

    Moody, M V

    2011-09-01

    A gravity gradiometer designed for operation on an aircraft or ship has been tested in the laboratory. A noise level of 0.53 E (1 E ≡ 10^-9 s^-2) rms over a 0.001 to 1 Hz bandwidth has been measured, and the primary error mechanisms have been analyzed and quantified. The design is a continuation in the development of superconducting accelerometer technology at the University of Maryland over more than three decades. A cryogenic instrument presents not only the benefit of reduced thermal noise, but also the extraordinary stability of superconducting circuits and material properties at very low temperatures. This stability allows precise matching of scale factors and accurate rejection of dynamic errors. The design of the instrument incorporates a number of additional features that further enhance performance in a dynamically noisy environment.

  19. Polarization-basis tracking scheme for quantum key distribution using revealed sifted key bits.

    PubMed

    Ding, Yu-Yang; Chen, Wei; Chen, Hua; Wang, Chao; Li, Ya-Ping; Wang, Shuang; Yin, Zhen-Qiang; Guo, Guang-Can; Han, Zheng-Fu

    2017-03-15

    The calibration of the polarization basis between the transmitter and receiver is an important task in quantum key distribution. A continuously working polarization-basis tracking scheme (PBTS) will effectively improve the efficiency of the system and reduce the potential security risk introduced by switching between the transmission and calibration modes. Here, we propose a single-photon-level continuously working PBTS using only sifted key bits revealed during the error correction procedure, without introducing additional reference light or interrupting the transmission of quantum signals. We applied the scheme to a polarization-encoding BB84 QKD system over a 50 km fiber channel, and obtained an average quantum bit error rate (QBER) of 2.32% and a standard deviation of 0.87% during 24 h of continuous operation. The stable and relatively low QBER validates the effectiveness of the scheme.
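
    The feedback signal in such a scheme is simply the QBER computed from the bit positions revealed during error correction. A toy sketch of using that estimate to steer a polarization-controller angle by hill climbing; the channel model, step sizes and drift value are all invented for illustration and do not reflect the paper's implementation:

        import numpy as np

        rng = np.random.default_rng(5)

        def qber(revealed_alice, revealed_bob):
            """Fraction of disagreeing revealed sifted-key bits."""
            return np.mean(revealed_alice != revealed_bob)

        def channel_qber(misalignment_deg, intrinsic=0.01):
            """Toy model: errors from basis misalignment plus a floor."""
            return intrinsic + np.sin(np.radians(misalignment_deg)) ** 2

        def measure(misalignment_deg, n_bits=2000):
            a = rng.integers(0, 2, n_bits)
            flips = rng.random(n_bits) < channel_qber(misalignment_deg)
            return a, np.where(flips, 1 - a, a)

        angle, drift = 0.0, 7.0          # controller angle vs. unknown channel drift
        step = 1.0
        for _ in range(40):
            base = qber(*measure(drift - angle))
            trial = qber(*measure(drift - (angle + step)))
            angle += step if trial < base else -step    # hill-climb on revealed-bit QBER
        print(f"tracked angle {angle:.1f} deg, residual QBER "
              f"{qber(*measure(drift - angle)):.3f}")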

  20. System for delivery of broadcast digital video as an overlay to baseband switched services on a fiber-to-the-home access network

    NASA Astrophysics Data System (ADS)

    Chand, Naresh; Magill, Peter D.; Swaminathan, Venkat S.; Yadvish, R. D.

    1999-04-01

    For low-cost fiber-to-the-home (FTTH) passive optical networks (PON), we have studied the delivery of broadcast digital video as an overlay to baseband switched digital services on the same fiber using a single transmitter and a single receiver. We have multiplexed the baseband data at 155.52 Mbps with digital video QPSK channels in the 270-1450 MHz range with minimal degradation. We used an additional 860 MHz carrier modulated with 8 Mbps QPSK as a test signal. An optical to electrical (O/E) receiver using an APD satisfies the power budget needs of ITU-T document G983.x for both class B and C operations (i.e., receiver sensitivity less than -33 dBm for a 10^-10 bit error rate) without any FEC for both data and video. The PIN diode O/E receiver nearly satisfies the need for class B operation (-30 dBm receiver sensitivity) of G983 with FEC in QPSK FDM video. For a 155.52 Mbps baseband data transmission and for a given bit error rate, there is approximately 6 dBo optical power penalty due to video overlay. Of this, 1 dBo penalty is due to biasing the laser with an extinction ratio reduced from 10 dBo to approximately 6 dBo, and approximately 5 dBo penalty is due to receiver bandwidth increasing from approximately 100 MHz to approximately 1 GHz. The receiver penalty remains after optimizing the filter for baseband data and is caused by the reduced value of the feedback resistor of the first-stage transimpedance amplifier. The optical power penalty for video transmission is about 2 dBo due to reduced optical modulation index.

  1. Error Generation in CATS-Based Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd

    2003-01-01

    This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.

  2. WE-AB-207A-04: Random Undersampled Cone Beam CT: Theoretical Analysis and a Novel Reconstruction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, C; Chen, L; Jia, X

    2016-06-15

    Purpose: Reducing x-ray exposure and speeding up data acquisition have motivated studies on projection data undersampling. An important question is, for a given undersampling ratio, which undersampling approach is optimal. In this study, we propose a new undersampling scheme: random-ray undersampling. We mathematically analyze the properties of its projection matrix and demonstrate its advantages. We also propose a new reconstruction method that simultaneously performs CT image reconstruction and projection-domain data restoration. Methods: By representing the projection operator under the basis of singular vectors of the full projection operator, matrix representations for an undersampling case can be generated and numerical singular value decomposition can be performed. We compared matrix properties among three undersampling approaches: regular-view undersampling, regular-ray undersampling, and the proposed random-ray undersampling. To accomplish CT reconstruction for random undersampling, we developed a novel method that iteratively performs CT reconstruction and missing projection data restoration via regularization approaches. Results: For a given undersampling ratio, random-ray undersampling preserved the mathematical properties of the full projection operator better than the other two approaches. This translates into the advantage of reconstructing CT images with lower errors. Different types of image artifacts were observed depending on the undersampling strategy, which were ascribed to the unique singular vectors of the sampling operators in the image domain. We tested the proposed reconstruction algorithm on a FORBILD phantom with only 30% of the projection data randomly acquired. The reconstructed image error was reduced from 9.4% with a total-variation (TV) method to 7.6% with the proposed method. Conclusion: The proposed random-ray undersampling is mathematically advantageous over other typical undersampling approaches. It may permit better image reconstruction at the same undersampling ratio. The novel algorithm suited to this random-ray undersampling was able to reconstruct high-quality images.
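
    The three sampling patterns being compared are easy to picture as boolean masks over a sinogram of shape (views, detector bins). The sketch below is our construction, not the authors' code; it keeps roughly the same fraction of rays under each scheme, and only the random-ray mask breaks the regular structure that produces coherent artifacts.

```python
import numpy as np

def undersampling_masks(n_views=360, n_bins=512, ratio=0.3, seed=0):
    """Boolean keep-masks over a sinogram for the three schemes:
    regular-view keeps every k-th projection view entirely,
    regular-ray keeps every k-th detector ray in every view,
    random-ray keeps a uniformly random subset of all rays."""
    rng = np.random.default_rng(seed)
    k = int(round(1.0 / ratio))
    view = np.zeros((n_views, n_bins), bool)
    view[::k, :] = True
    ray = np.zeros((n_views, n_bins), bool)
    ray[:, ::k] = True
    rand = rng.random((n_views, n_bins)) < ratio
    return view, ray, rand

for mask in undersampling_masks():
    print(round(mask.mean(), 3))  # each keeps roughly 30% of the rays
```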

  3. Reducing the Familiarity of Conjunction Lures with Pictures

    ERIC Educational Resources Information Center

    Lloyd, Marianne E.

    2013-01-01

    Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across 4 experiments, compound words studied with a single-picture had lower conjunction error rates during a recognition test than those words…

  4. Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report

    PubMed Central

    Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo

    2013-01-01

    Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of physical activity estimates derived from self-report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are the behavioral parameters of type, intensity, frequency, and duration of the physical activities performed, the activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451

  5. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experimental framework is constructed with a regional NWP model and an operational variational data assimilation system to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and to allow more observations to be assimilated without the strict background checks that would otherwise eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit of using ensembles for cloudy radiance assimilation in an EnVar context.
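
    A Huber observation cost of the kind investigated here is quadratic for small normalized innovations and linear in the tails, so fat-tailed cloudy-radiance departures are down-weighted rather than discarded by a hard background check. A minimal sketch, with an illustrative transition threshold:

```python
import numpy as np

def huber_cost(innovation, sigma, delta=1.5):
    """Huber observation cost: quadratic inside +/- delta (in units of
    observation error sigma), linear beyond, tempering large innovations."""
    z = np.abs(innovation) / sigma
    return np.where(z <= delta, 0.5 * z**2, delta * z - 0.5 * delta**2)

d = np.array([0.5, 1.0, 3.0, 8.0])  # innovations in obs-error units
print(huber_cost(d, sigma=1.0))     # tail costs grow linearly, not quadratically
```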

  6. Mitigating Photon Jitter in Optical PPM Communication

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2008-01-01

    A theoretical analysis of photon-arrival jitter in an optical pulse-position-modulation (PPM) communication channel has been performed, and now constitutes the basis of a methodology for designing receivers that compensate for photon-arrival jitter so that the resulting errors are minimized or nearly minimized. Photon-arrival jitter is an uncertainty in the estimated time of arrival of a photon relative to the boundaries of a PPM time slot. It is attributable to two main causes: (1) receiver synchronization error [error in the receiver's partitioning of time into PPM slots] and (2) random delay between the arrival of a photon at a detector and the generation, by the detector circuitry, of a pulse in response to the photon. For channels with sufficiently long time slots, photon-arrival jitter is negligible. However, as the durations of PPM time slots are reduced in efforts to increase the throughput of optical PPM communication channels, photon-arrival jitter becomes a significant source of error, leading to significant performance degradation if not taken into account in design. For the purpose of the analysis, the receiver was assumed to operate in a photon-starved regime, in which photon counts follow a Poisson distribution. The analysis included derivation of exact equations for symbol likelihoods in the presence of photon-arrival jitter. These equations describe what is well known in the art as a matched filter for a channel containing Gaussian noise. They would yield an optimum receiver if they could be implemented in practice. Because the exact equations may be too complex to implement in practice, approximations that would yield suboptimal receivers were also derived.
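
    The abstract does not reproduce the exact likelihood equations, but their Poisson structure can be illustrated: each symbol hypothesis assigns a mean photon count to every slot, and jitter smears part of the pulse energy into neighboring slots. The leakage model below is a deliberately crude assumption of ours, used only to show the shape of the computation.

```python
import numpy as np

def ppm_log_likelihoods(counts, signal=5.0, background=0.2, spill=0.1):
    """Log-likelihood of each PPM symbol given per-slot photon counts.
    Jitter is modeled by leaking a fraction `spill` of the pulse energy
    into each adjacent slot; terms independent of the hypothesis are
    dropped. Decoding picks the hypothesis with the largest value."""
    counts = np.asarray(counts, float)
    M = len(counts)
    loglik = np.empty(M)
    for j in range(M):                       # hypothesis: pulse in slot j
        lam = np.full(M, background)
        lam[j] += signal * (1.0 - 2.0 * spill)
        lam[(j - 1) % M] += signal * spill   # leakage into neighbors
        lam[(j + 1) % M] += signal * spill
        loglik[j] = np.sum(counts * np.log(lam) - lam)
    return loglik

counts = [0, 1, 4, 2, 0, 0, 0, 1]            # toy 8-ary PPM observation
print(int(np.argmax(ppm_log_likelihoods(counts))))  # decodes slot 2
```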

  7. Task planning with uncertainty for robotic systems. Thesis

    NASA Technical Reports Server (NTRS)

    Cao, Tiehua

    1993-01-01

    In a practical robotic system, it is important to represent and plan sequences of operations and to be able to choose an efficient sequence from them for a specific task. During the generation and execution of task plans, different kinds of uncertainty may occur, and erroneous states need to be handled to ensure the efficiency and reliability of the system. An approach to task representation, planning, and error recovery for robotic systems is demonstrated. Our approach to task planning is based on an AND/OR net representation, which is then mapped to a Petri net representation of all feasible geometric states and associated feasibility criteria for net transitions. Task decomposition of robotic assembly plans based on this representation is performed on the Petri net, and the inheritance of the properties of liveness, safeness, and reversibility at all levels of decomposition is explored. This approach provides a framework for robust execution of tasks through the properties of traceability and viability. Uncertainty in robotic systems is modeled by local fuzzy variables, fuzzy marking variables, and global fuzzy variables, which are incorporated in fuzzy Petri nets. Analysis of properties and reasoning about uncertainty are investigated using fuzzy reasoning structures built into the net. Two applications of fuzzy Petri nets, robot task sequence planning and sensor-based error recovery, are explored. In the first application, the search space for feasible and complete task sequences with correct precedence relationships is reduced via the use of global fuzzy variables in reasoning about subgoals. In the second application, sensory verification operations are modeled by mutually exclusive transitions to reason about local and global fuzzy variables on-line and to automatically select a retry or an alternative error recovery sequence when errors occur. Task sequencing and task execution with error recovery capability for one and multiple soft components in robotic systems are investigated.

  8. Verification of real-time WSA-ENLIL+Cone simulations of CME arrival-time at the CCMC/SWRC from 2010-2016

    NASA Astrophysics Data System (ADS)

    Wold, A. M.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; MacNeice, P. J.; Jian, L. K.

    2017-12-01

    The Wang-Sheeley-Arge (WSA)-ENLIL+Cone model is used extensively in space weather operations worldwide to model CME propagation, so it is important to assess its performance. We present validation results for the WSA-ENLIL+Cone model installed at the Community Coordinated Modeling Center (CCMC) and executed in real time by the CCMC/Space Weather Research Center (SWRC). CCMC/SWRC uses the WSA-ENLIL+Cone model to predict CME arrivals at NASA missions throughout the inner heliosphere. In this work we compare model-predicted CME arrival times to in-situ ICME leading-edge measurements near Earth, STEREO-A, and STEREO-B for simulations completed between March 2010 and December 2016 (over 1,800 CMEs). We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For all predicted CME arrivals, the hit rate is 0.5 and the false alarm rate is 0.1. For the 273 events where the CME was predicted to arrive at Earth, STEREO-A, or STEREO-B and an arrival was observed (a hit), the mean absolute arrival-time prediction error was 10.4 ± 0.9 hours, with a tendency toward early prediction (mean error of -4.0 hours). We show the dependence of the arrival-time error on CME input parameters. We also explore the impact of the multi-spacecraft observations used to initialize the model CME inputs by comparing model verification results before and after the STEREO-B communication loss (September 2014) and during STEREO-A side-lobe operations (August 2014-December 2015). The CME arrival-time error increases by 1.7 hours during single- or limited two-viewpoint periods compared to the three-spacecraft viewpoint period. This trend would apply to a future space weather mission at L5 or L4, where another coronagraph viewpoint would reduce CME arrival-time errors compared to a single L1 viewpoint.
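
    The verification statistics quoted above follow the conventional 2x2 contingency table. The sketch below shows those definitions with placeholder counts and a placeholder error list, not the study's actual event set.

```python
def skill_scores(hits, misses, false_alarms, correct_rejections):
    """Hit rate and false alarm rate from a 2x2 contingency table."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate, false_alarm_rate

errors_hr = [10.2, -4.0, 15.5, -8.1]     # signed arrival-time errors, hours
mae = sum(abs(e) for e in errors_hr) / len(errors_hr)
bias = sum(errors_hr) / len(errors_hr)   # negative mean -> early predictions
print(skill_scores(50, 50, 10, 90))      # (0.5, 0.1), as in the abstract
print(round(mae, 2), round(bias, 2))
```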

  9. The Development and Deployment of a Maintenance Operations Safety Survey.

    PubMed

    Langer, Marie; Braithwaite, Graham R

    2016-11-01

    Based on the line operations safety audit (LOSA), two studies were conducted to develop and deploy an equivalent tool for aircraft maintenance: the maintenance operations safety survey (MOSS). Safety in aircraft maintenance is currently measured reactively, based on the number of audit findings, reportable events, incidents, or accidents. Proactive safety tools designed for monitoring routine operations, such as flight data monitoring and LOSA, have been developed predominantly for flight operations. In Study 1, development of MOSS, 12 test peer-to-peer observations were collected to investigate the practicalities of this approach. In Study 2, deployment of MOSS, seven expert observers collected 56 peer-to-peer observations of line maintenance checks at four stations. Narrative data were coded and analyzed according to the threat and error management (TEM) framework. In Study 1, a line check was identified as a suitable unit of observation. Communication and third-party data management were the key factors in gaining maintainer trust. Study 2 identified that on average, maintainers experienced 7.8 threats (operational complexities) and committed 2.5 errors per observation. The majority of threats and errors were inconsequential. Links between specific threats and errors leading to 36 undesired states were established. This research demonstrates that observations of routine maintenance operations are feasible. TEM-based results highlight successful management strategies that maintainers employ on a day-to-day basis. MOSS is a novel approach for safety data collection and analysis. It helps practitioners understand the nature of maintenance errors, promote an informed culture, and support safety management systems in the maintenance domain. © 2016, Human Factors and Ergonomics Society.

  10. The Development and Deployment of a Maintenance Operations Safety Survey

    PubMed Central

    Langer, Marie; Braithwaite, Graham R.

    2016-01-01

    Objective: Based on the line operations safety audit (LOSA), two studies were conducted to develop and deploy an equivalent tool for aircraft maintenance: the maintenance operations safety survey (MOSS). Background: Safety in aircraft maintenance is currently measured reactively, based on the number of audit findings, reportable events, incidents, or accidents. Proactive safety tools designed for monitoring routine operations, such as flight data monitoring and LOSA, have been developed predominantly for flight operations. Method: In Study 1, development of MOSS, 12 test peer-to-peer observations were collected to investigate the practicalities of this approach. In Study 2, deployment of MOSS, seven expert observers collected 56 peer-to-peer observations of line maintenance checks at four stations. Narrative data were coded and analyzed according to the threat and error management (TEM) framework. Results: In Study 1, a line check was identified as a suitable unit of observation. Communication and third-party data management were the key factors in gaining maintainer trust. Study 2 identified that on average, maintainers experienced 7.8 threats (operational complexities) and committed 2.5 errors per observation. The majority of threats and errors were inconsequential. Links between specific threats and errors leading to 36 undesired states were established. Conclusion: This research demonstrates that observations of routine maintenance operations are feasible. TEM-based results highlight successful management strategies that maintainers employ on a day-to-day basis. Application: MOSS is a novel approach for safety data collection and analysis. It helps practitioners understand the nature of maintenance errors, promote an informed culture, and support safety management systems in the maintenance domain. PMID:27411354

  11. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    PubMed Central

    2018-01-01

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics. PMID:29534477
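
    The LIPM underlying the DLIPM ties the ZMP to the center-of-mass (CoM) state through p = x - (zc/g) * x'', where zc is the constant CoM height. The sketch below integrates that relation through a step-shaped ZMP perturbation like those in the push-recovery trials; the constants and step amplitude are illustrative, and no stabilizing feedback is included.

```python
G, ZC = 9.81, 0.6  # gravity (m/s^2) and an assumed constant CoM height (m)

def zmp_from_com(x, xddot):
    """LIPM forward relation: p = x - (zc/g) * x''."""
    return x - (ZC / G) * xddot

def com_accel(x, p):
    """Inverse relation: CoM acceleration that realizes ZMP p."""
    return (G / ZC) * (x - p)

dt, x, xd = 0.005, 0.0, 0.0
for k in range(400):
    p_ref = 0.05 if k > 100 else 0.0  # 5 cm ZMP step perturbation
    xdd = com_accel(x, p_ref)
    xd += xdd * dt
    x += xd * dt
# The CoM diverges unless a balance controller intervenes; the second
# value recovers the commanded ZMP as a consistency check.
print(round(x, 3), round(zmp_from_com(x, xdd), 3))
```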

  12. Toward a new culture in verified quantum operations

    NASA Astrophysics Data System (ADS)

    Flammia, Steve

    Measuring error rates of quantum operations has become an indispensable component of any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only get exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey the challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and toward a culture that values full disclosure of methodologies and higher standards for data analysis.

  13. 78 FR 11237 - Public Hearing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ... management of human error in its operations and system safety programs, and the status of PTC implementation... UP's safety management policies and programs associated with human error, operational accident and... Chairman of the Board of Inquiry 2. Introduction of the Board of Inquiry and Technical Panel 3...

  14. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. The central result is a methodology for determining design and operation parameters that minimize error when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values whose projection from the curve onto the surface gives the smallest error are the optimum values. These values are constrained to the curve and so will not necessarily correspond to an absolute minimum of the error surface.
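
    The procedure can be illustrated with a toy error surface. Both the functional form of the error and the instrument curve below are invented for the example; only the structure (grid the surface, trace the instrument curve through it, take the minimum along the curve) reflects the described methodology.

```python
import numpy as np

snr = np.linspace(1, 100, 200)       # signal-to-noise ratio axis
w = np.linspace(0.1, 5, 150)         # deconvolution smoothing parameter
S, W = np.meshgrid(snr, w, indexing="ij")
E = 1.0 / (S * W) + 0.05 * W**2      # toy error surface over (SNR, w)

# Instrumental characteristics constrain SNR as a function of the design
# parameter, tracing a curve in the (SNR, w) plane (assumed trade-off).
curve_snr = 100 - 18 * w
idx = np.clip(np.searchsorted(snr, curve_snr), 0, len(snr) - 1)
errs = E[idx, np.arange(len(w))]     # surface values along the curve
print(round(w[np.argmin(errs)], 2), round(float(errs.min()), 4))
```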

  15. Decreasing the occurrence of intraoperative technical errors through periodic simple show, tell and learn method.

    PubMed

    Steinberg, Ely L; Amar, Eyal; Albagli, Assaf; Rath, Ehud; Salai, Moshe

    2014-08-01

    Technical errors (TE) that occur during surgery for treating fractures are considered preventable by good preoperative planning and surgeon education. This prospective study evaluated a new instructional method for improving surgical outcomes in which surgeons assess their own recent performances. Postoperative radiographs from two groups of patients were assessed during consecutive 4-month periods: 350 operations were included in the Early Group and 411 operations in the Late Group. All the TE that occurred during the first period were reviewed and discussed among the residents and the consultant surgeons who had performed those operations. The same procedure was followed 4 months later. The TE were classified as minor, moderate, and major. The two groups included the same 41 surgeons. The most common TE were insufficient reduction, varus and valgus malalignment, and prominent hardware. The total number of errors dropped significantly, from 52 (14.7%) during the first period to 26 (6.3%) during the second period (p = 0.0003). The TE severity score dropped from 81 to 38, respectively (p = 0.0001). The most affected regions were the humerus (p < 0.001), midshaft femur (p = 0.007), proximal femur (p = 0.004), and radius (p = 0.008). Most of the gains were made in the moderate category (p = 0.0001). The consultants performed statistically better than the residents in the first period (12% vs. 20%, p = 0.036), but almost the same as the residents in the second period (5.3% vs. 9%, p = 0.164). A TE index, calculated by dividing the accumulated severity sum by the number of operations, dropped in the two groups from 0.2 and 0.3 to 0.09 and 0.09, respectively. Intraoperative TE can be significantly reduced by periodic performance evaluations in a seminar setting during which groups of surgeons review the TE that they and their colleagues made during recent orthopaedic surgical procedures. Level II. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A Homing Missile Control System to Reduce the Effects of Radome Diffraction

    NASA Technical Reports Server (NTRS)

    Smith, Gerald L.

    1960-01-01

    The problem of radome diffraction in radar-controlled homing missiles at high speeds and high altitudes is considered from the point of view of developing a control system configuration which will alleviate the deleterious effects of the diffraction. It is shown that radome diffraction is in essence a kinematic feedback of body angular velocities which causes the radar to sense large apparent line-of-sight angular velocities. The normal control system cannot distinguish between the erroneous and actual line-of-sight rates, and entirely wrong maneuvers are produced which result in large miss distances. The problem is resolved by adding to the control system a special-purpose computer which utilizes measured body angular velocity to extract from the radar output true line-of-sight information for use in steering the missile. The computer operates on the principle of sampling and storing the radar output at instants when the body angular velocity is low and using this stored information for maneuvering commands. In addition, when the angular velocity is not low the computer determines a radome diffraction compensation which is subtracted from the radar output to reduce the error in the sampled information. Analog simulation results for the proposed control system operating in a coplanar (vertical plane) attack indicate a potential decrease in miss distance to an order of magnitude below that for a conventional system. Effects of glint noise, random target maneuvers, initial heading errors, and missile maneuverability are considered in the investigation.

  17. Methods and apparatus for reducing peak wind turbine loads

    DOEpatents

    Moroz, Emilian Mieczyslaw

    2007-02-13

    A method for reducing peak loads of wind turbines in a changing wind environment includes measuring or estimating an instantaneous wind speed and direction at the wind turbine and determining a yaw error of the wind turbine relative to the measured instantaneous wind direction. The method further includes comparing the yaw error to a yaw error trigger that has different values at different wind speeds and shutting down the wind turbine when the yaw error exceeds the yaw error trigger corresponding to the measured or estimated instantaneous wind speed.
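
    The decision logic reduces to a table lookup followed by a comparison. The breakpoints below are illustrative values, not the patent's; only the idea that the allowable yaw error shrinks as wind speed rises comes from the abstract.

```python
import bisect

SPEEDS = [0.0, 8.0, 12.0, 16.0, 20.0]      # wind-speed bin edges, m/s
TRIGGERS = [45.0, 30.0, 20.0, 12.0, 8.0]   # max allowed yaw error, degrees

def yaw_trigger(wind_speed):
    """Yaw-error trigger for the measured or estimated instantaneous speed."""
    return TRIGGERS[bisect.bisect_right(SPEEDS, wind_speed) - 1]

def should_shutdown(yaw_error_deg, wind_speed):
    return abs(yaw_error_deg) > yaw_trigger(wind_speed)

print(should_shutdown(25.0, 10.0))  # False: 30 deg allowed below 12 m/s
print(should_shutdown(25.0, 18.0))  # True: only 12 deg allowed at 18 m/s
```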

  18. Simulation: learning from mistakes while building communication and teamwork.

    PubMed

    Kuehster, Christina R; Hall, Carla D

    2010-01-01

    Medical errors are one of the leading causes of death annually in the United States. Many of these errors are related to poor communication and/or lack of teamwork. Using simulation as a teaching modality serves a dual role in helping to reduce these errors: thorough integration of clinical practice with teamwork and communication in a safe environment increases the likelihood of reducing error rates in medicine. By allowing practitioners to make potential errors in a safe environment such as simulation, these valuable lessons are better retained and will rarely be repeated.

  19. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
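
    For d = 4 (the SEC-DED case) over GF(2), the filter has a concrete reading: a candidate column survives only if it is nonzero, not a duplicate, and not the XOR of any two already-selected columns, which keeps every set of three columns linearly independent. A small greedy sketch of that idea (GF(2) only, whereas the patent covers GF(q); column count and width are arbitrary):

```python
from itertools import combinations

def populate_columns(m, n, candidates=None):
    """Greedily pick n m-bit columns (as integers) such that any three
    remain linearly independent over GF(2): nonzero, pairwise distinct,
    and never equal to the XOR of two already-chosen columns."""
    chosen = []
    for v in candidates or range(1, 2**m):
        if len(chosen) == n:
            break
        if v in chosen:
            continue
        if any(v == a ^ b for a, b in combinations(chosen, 2)):
            continue  # filtered out: would create a dependent triple
        chosen.append(v)
    return chosen

cols = populate_columns(m=6, n=8)
print([format(c, "06b") for c in cols])
```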

  20. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  1. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  2. Operator assistant to support deep space network link monitor and control

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Desai, Rajiv; Martinez, Elmain

    1992-01-01

    Preparing the Deep Space Network (DSN) stations to support spacecraft missions (referred to as pre-cal, for pre-calibration) is currently an operator- and time-intensive activity. Operators are responsible for sending and monitoring several hundred operator directives, messages, and warnings. Operator directives are used to configure and calibrate the various subsystems (antenna, receiver, etc.) necessary to establish a spacecraft link. Messages and warnings are issued by the subsystems upon completion of an operation, changes of status, or an anomalous condition. Some parts of pre-cal are logically parallel, and significant time savings could be realized if the existing Link Monitor and Control system (LMC) could support the operator in exploiting this inherent parallelism. Currently, operators may work on the individual subsystems in parallel; however, the burden of monitoring these parallel operations resides solely with the operator. Messages, warnings, and directives are all presented as they are received, without being correlated to the event that triggered them. Pre-cal is essentially an overhead activity: during pre-cal, no mission is supported, and no other activity can be performed using the equipment in the link. Therefore, it is highly desirable to reduce pre-cal time as much as possible. One approach to doing this, as well as to increasing efficiency and reducing errors, is the LMC Operator Assistant (OA). The LMC OA prototype demonstrates an architecture which can be used in concert with the existing LMC to exploit parallelism in pre-cal operations while providing the operators with a true monitoring capability, situational awareness, and positive control. This paper presents an overview of the LMC OA architecture and the results from initial prototyping and test activities.

  3. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    PubMed

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence and the impact of error in these patients. A number of error taxonomies were used to understand the causes of human error, and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males, comprising 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped us understand the common sources of error in the general surgical wards and will inform ongoing error-reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  4. Monitoring Method of Cutting Force by Using Additional Spindle Sensors

    NASA Astrophysics Data System (ADS)

    Sarhan, Ahmed Aly Diaa; Matsubara, Atsushi; Sugihara, Motoyuki; Saraie, Hidenori; Ibaraki, Soichi; Kakino, Yoshiaki

    This paper describes a method of monitoring cutting forces in the end milling process using displacement sensors. Four eddy-current displacement sensors are installed on the spindle housing of a machining center so that they can detect the radial motion of the rotating spindle. Thermocouples are also attached to the spindle structure in order to examine the thermal effect on the displacement sensing. The change in spindle stiffness due to spindle temperature and speed is investigated as well. Finally, the performance of cutting-force estimation using the spindle displacement sensors is experimentally investigated by machining tests on carbon steel in end milling operations under different cutting conditions. It is found that the monitoring errors are attributable to the thermal displacement of the spindle, the time lag of the sensing system, and the modeling error of the spindle stiffness. It is also shown that the root mean square errors between estimated and measured amplitudes of cutting forces are reduced to less than 20 N with proper selection of the linear stiffness.
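
    The estimation step itself is a stiffness multiplication once the thermal and speed dependences are calibrated. The sketch below assumes a linearized stiffness model F = k(T, n) * x with made-up coefficients; the paper's actual calibration is not reproduced in the abstract.

```python
def estimate_force(displacement_um, temp_C, rpm):
    """Cutting-force estimate from spindle radial displacement, with the
    radial stiffness linearized in spindle temperature and speed.
    All coefficients are illustrative assumptions."""
    k0 = 50.0  # N/um, nominal radial stiffness at 20 C and standstill
    k = k0 * (1.0 - 0.002 * (temp_C - 20.0)) * (1.0 - 1e-5 * rpm)
    return k * displacement_um

print(round(estimate_force(2.5, temp_C=35.0, rpm=8000), 1))  # force in N
```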

  5. The upside of noise: engineered dissipation as a resource in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2017-09-01

    Historically, noise in superconducting circuits has been considered an obstacle to be removed. A large fraction of the research effort in designing superconducting circuits has focused on noise reduction, with great success, as coherence times have increased by four orders of magnitude in the past two decades. However, noise and dissipation can never be fully eliminated, and further, a rapidly growing body of theoretical and experimental work has shown that carefully tuned noise, in the form of engineered dissipation, can be a profoundly useful tool in designing and operating quantum circuits. In this article, I review important applications of engineered dissipation, including state generation, state stabilization, and autonomous quantum error correction, where engineered dissipation can mitigate the effect of intrinsic noise, reducing logical error rates in quantum information processing. Further, I provide a pedagogical review of the basic noise processes in superconducting qubits (photon loss and phase noise), and argue that any dissipative mechanism which can correct photon loss errors is very likely to automatically suppress dephasing. I also discuss applications for quantum simulation, and possible future research directions.

  6. A prototype automatic phase compensation module

    NASA Technical Reports Server (NTRS)

    Terry, John D.

    1992-01-01

    The growing demand for high-gain, accurate satellite communication systems will necessitate the use of large reflector systems. One area of concern for reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested as a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason, the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector. These materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased-array compensation. Adaptive phased-array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module, designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.

  7. Flexible sequential designs for multi-arm clinical trials.

    PubMed

    Magirr, D; Stallard, N; Jaki, T

    2014-08-30

    Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive' designs can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Bandwidth efficient CCSDS coding standard proposals

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan

    1992-01-01

    The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2 exp 8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
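
    The interleaver itself is purely structural and can be sketched directly: write `depth` RS codewords as rows and transmit column by column, so that a burst out of the inner decoder lands on different codewords. Toy dimensions are used below; a real CCSDS frame would use depth 4 with n = 255 symbols.

```python
def interleave(codewords):
    """Transmit column by column across the codeword rows."""
    return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

def deinterleave(stream, depth, n):
    """Inverse mapping back to `depth` codewords of length n."""
    cws = [[None] * n for _ in range(depth)]
    for idx, sym in enumerate(stream):
        cws[idx % depth][idx // depth] = sym
    return cws

# Toy check with 4 "codewords" of 6 symbols each:
cws = [[r * 10 + c for c in range(6)] for r in range(4)]
stream = interleave(cws)
assert deinterleave(stream, depth=4, n=6) == cws
print(stream[:4])  # a burst of 4 channel symbols hits each codeword once
```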

  9. Design of a Pneumatic Tool for Manual Drilling Operations in Confined Spaces

    NASA Astrophysics Data System (ADS)

    Janicki, Benjamin

    This master's thesis describes the design process and testing results for a pneumatically actuated, manually operated tool for confined-space drilling operations. The purpose of this device is to back-drill pilot holes inside a commercial airplane wing. It is lightweight, and a "locator pin" enables the operator to align the drill over a pilot hole. A suction pad stabilizes the system, and an air motor and flexible drive shaft power the drill. Two testing procedures were performed to determine the practicality of this prototype. The first was the "offset drill test", which quantified the exit-hole position error caused by an initial position error relative to the original pilot hole. The results displayed a linear relationship, and it was determined that position errors of less than .060" would prevent the need for rework, with errors of up to .030" considered acceptable. For the second test, a series of holes were drilled with the pneumatic tool and analyzed for position error, diameter range, and cycle time. The position errors and hole diameter range were within the allowed tolerances. The average cycle time was 45 seconds, 73 percent of which was for drilling the hole and 27 percent of which was for positioning the device. Recommended improvements are discussed in the conclusion, and include a more durable flexible drive shaft, a damper for drill feed control, and a more stable locator pin.

  10. Commentary: Reducing diagnostic errors: another role for checklists?

    PubMed

    Winters, Bradford D; Aswani, Monica S; Pronovost, Peter J

    2011-03-01

    Diagnostic errors are a widespread problem, although the true magnitude is unknown because they cannot currently be measured validly. These errors have received relatively little attention despite alarming estimates of associated harm and death. One promising intervention to reduce preventable harm is the checklist. This intervention has proven successful in aviation, in which situations are linear and deterministic (one alarm goes off and a checklist guides the flight crew to evaluate the cause). In health care, problems are multifactorial and complex. A checklist has been used to reduce central-line-associated bloodstream infections in intensive care units. Nevertheless, this checklist was incorporated in a culture-based safety program that engaged and changed behaviors and used robust measurement of infections to evaluate progress. In this issue, Ely and colleagues describe how three checklists could reduce the cognitive biases and mental shortcuts that underlie diagnostic errors, but point out that these tools still need to be tested. To be effective, they must reduce diagnostic errors (efficacy) and be routinely used in practice (effectiveness). Such tools must intuitively support how the human brain works, and under time pressures, clinicians rarely think in conditional probabilities when making decisions. To move forward, it is necessary to accurately measure diagnostic errors (which could come from mapping out the diagnostic process as the medication process has done and measuring errors at each step) and pilot test interventions such as these checklists to determine whether they work.

  11. Research on the Error Characteristics of a 110 kV Optical Voltage Transformer under Three Conditions: In the Laboratory, Off-Line in the Field and During On-Line Operation

    PubMed Central

    Xiao, Xia; Hu, Haoliang; Xu, Yan; Lei, Min; Xiong, Qianzhu

    2016-01-01

    Optical voltage transformers (OVTs) have been applied in power systems. When performing accuracy tests of OVTs, the electromagnetic environment and the temperature variation differ greatly between the laboratory and the field, so OVTs may display different error characteristics under different conditions. In this paper, OVT prototypes with typical structures were tested for their error characteristics with the same testing equipment and testing method. The basic accuracy, the additional error caused by temperature and by the adjacent phase in the laboratory, the accuracy in the field off-line, and the real-time monitoring error during on-line operation were tested. The error characteristics under the three conditions—laboratory, in the field off-line and during on-site operation—were compared and analyzed. The results showed that the effects of the transportation process, the electromagnetic environment, and the adjacent phase on the accuracy of OVTs can be ignored for accuracy class 0.2, but the error characteristics of OVTs depend on the environmental temperature and are sensitive to temperature gradients. The temperature characteristics during on-line operation were significantly superior to those observed in the laboratory. PMID:27537895

  12. Using Automated Writing Evaluation to Reduce Grammar Errors in Writing

    ERIC Educational Resources Information Center

    Liao, Hui-Chuan

    2016-01-01

    Despite the recent development of automated writing evaluation (AWE) technology and the growing interest in applying this technology to language classrooms, few studies have looked at the effects of using AWE on reducing grammatical errors in L2 writing. This study identified the primary English grammatical error types made by 66 Taiwanese…

  13. Using Six Sigma to reduce medication errors in a home-delivery pharmacy service.

    PubMed

    Castle, Lon; Franzblau-Isaac, Ellen; Paulsen, Jim

    2005-06-01

    Medco Health Solutions, Inc. conducted a project to reduce medication errors in its home-delivery service, which is composed of eight prescription-processing pharmacies, three dispensing pharmacies, and six call-center pharmacies. Medco uses the Six Sigma methodology to reduce process variation, establish procedures to monitor the effectiveness of medication safety programs, and determine when these efforts do not achieve performance goals. A team reviewed the processes in home-delivery pharmacy and suggested strategies to improve the data-collection and medication-dispensing practices. A variety of improvement activities were implemented, including a procedure for developing, reviewing, and enhancing sound-alike/look-alike (SALA) alerts and system enhancements to improve processing consistency across the pharmacies. "External nonconformances" were reduced for several categories of medication errors, including wrong-drug selection (33%), wrong directions (49%), and SALA errors (69%). Control charts demonstrated evidence of sustained process improvement and actual reduction in specific medication error elements. Establishing a continuous quality improvement process to ensure that medication errors are minimized is critical to any health care organization providing medication services.

  14. A fast radiative transfer method for the simulation of visible satellite imagery

    NASA Astrophysics Data System (ADS)

    Scheck, Leonhard; Frèrebeau, Pascal; Buras-Schnell, Robert; Mayer, Bernhard

    2016-05-01

    A computationally efficient radiative transfer method for the simulation of visible satellite images is presented. The top-of-atmosphere reflectance is approximated by a function depending on vertically integrated optical depths and effective particle sizes for water and ice clouds, the surface albedo, the sun and satellite zenith angles, and the scattering angle. A look-up table (LUT) for this reflectance function is generated by means of the discrete ordinate method (DISORT). For a constant scattering angle the reflectance is a relatively smooth and symmetric function of the two zenith angles, which can be well approximated by the lowest-order terms of a 2D Fourier series. By storing only the lowest Fourier coefficients and adopting a non-equidistant grid for the scattering angle, the LUT is reduced to a size of 21 MB per satellite channel. The computation of the top-of-atmosphere reflectance then requires only the calculation of the cloud parameters from the model state and the evaluation and interpolation of the reflectance function using the compressed LUT, and is thus orders of magnitude faster than DISORT. The accuracy of the method is tested by generating synthetic satellite images for the 0.6 μm and 0.8 μm channels of the SEVIRI instrument for operational COSMO-DE model forecasts from the German Weather Service (DWD) and comparing them to DISORT results. For a test period in June, the root mean squared absolute reflectance error is about 10^-2 and the mean relative reflectance error is less than 2% for both channels. For scattering angles larger than 170°, the rapid variation of reflectance with particle size related to the backscatter glory reduces the accuracy, and the errors increase by a factor of 3-4. The speed and accuracy of the new method are sufficient for operational data assimilation and high-resolution model verification applications.
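
    The on-line evaluation step amounts to summing a truncated 2D Fourier series in the two zenith angles for the stored scattering-angle grid point. The sketch below shows that evaluation for a single grid point; the coefficient layout and values are our assumptions, not the paper's file format, and taking the real part stands in for proper conjugate-symmetric storage.

```python
import numpy as np

def reflectance(theta_sun, theta_sat, coeffs):
    """Evaluate a truncated 2D Fourier series in the sun and satellite
    zenith angles (in radians, mapped onto the series' periodic domain);
    coeffs[k, l] are the stored low-order complex coefficients."""
    u = np.exp(1j * np.arange(coeffs.shape[0]) * theta_sun)
    v = np.exp(1j * np.arange(coeffs.shape[1]) * theta_sat)
    return float(np.real(u @ coeffs @ v))

coeffs = np.zeros((3, 3), complex)   # keep only the lowest-order terms
coeffs[0, 0] = 0.4                   # mean reflectance
coeffs[1, 0] = coeffs[0, 1] = 0.05 - 0.02j
print(round(reflectance(0.3, 0.6, coeffs), 4))
```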

  15. Amorphous In-Ga-Zn-O Thin Film Transistor Current-Scaling Pixel Electrode Circuit for Active-Matrix Organic Light-Emitting Displays

    NASA Astrophysics Data System (ADS)

    Chen, Charlene; Abe, Katsumi; Fung, Tze-Ching; Kumomi, Hideya; Kanicki, Jerzy

    2009-03-01

    In this paper, we analyze the application of amorphous In-Ga-Zn-O thin film transistors (a-InGaZnO TFTs) to a current-scaling pixel electrode circuit that could be used for 3-in. quarter video graphics array (QVGA) full-color active-matrix organic light-emitting displays (AM-OLEDs). Simulation results, based on a-InGaZnO TFT and OLED experimental data, show that both device sizes and operational voltages can be reduced when compared to the same circuit using hydrogenated amorphous silicon (a-Si:H) TFTs. Moreover, the a-InGaZnO TFT pixel circuit can compensate for drive-TFT threshold voltage variation (ΔVT) within an acceptable operating error range.

  16. Estimation of an accuracy index of a diagnostic biomarker when the reference biomarker is continuous and measured with error.

    PubMed

    Wu, Mixia; Zhang, Dianchen; Liu, Aiyi

    2016-01-01

    New biomarkers continue to be developed for the purpose of diagnosis, and their diagnostic performance is typically compared with an existing reference biomarker used for the same purpose. Considerable research has focused on receiver operating characteristic (ROC) curve analysis when the reference biomarker is dichotomous. In the situation where the reference biomarker is measured on a continuous scale and dichotomization is not practically appealing, an index was proposed in the literature to measure the accuracy of a continuous biomarker; it is essentially a linear function of the popular Kendall's tau. We consider the issue of estimating such an accuracy index when the continuous reference biomarker is measured with error. We first investigate the impact of measurement error on the accuracy index, and then propose methods to correct for the bias due to measurement error. Simulation results show the effectiveness of the proposed estimator in reducing bias. The methods are exemplified with hemoglobin A1c measurements obtained from both a central lab and a local lab to evaluate the accuracy of the mean data obtained from metered blood glucose monitoring against the centrally measured hemoglobin A1c, from a behavioral intervention study for families of youth with type 1 diabetes.
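
    The attenuation effect is easy to reproduce numerically: adding noise to the reference lowers Kendall's tau, and hence any accuracy index defined as a linear function of it. The simulation below is our own illustration; the (tau + 1)/2 rescaling is one arbitrary choice of linear function, not the paper's index or correction method.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
truth = rng.normal(size=500)                        # latent reference biomarker
new = truth + rng.normal(scale=0.5, size=500)       # diagnostic biomarker
observed = truth + rng.normal(scale=0.8, size=500)  # reference with meas. error

def accuracy_index(x, y):
    """A linear function of Kendall's tau, rescaled to [0, 1]."""
    return (kendalltau(x, y)[0] + 1.0) / 2.0

print(round(accuracy_index(new, truth), 3))     # index against the true reference
print(round(accuracy_index(new, observed), 3))  # attenuated by measurement error
```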

  17. Complications after carotid endarterectomy are related to surgical errors in less than one-fifth of cases. Swedvasc--The Swedish Vascular Registry and The Quality Committee for Carotid Artery Surgery.

    PubMed

    Troëng, T; Bergqvist, D; Norrving, B; Ahari, A

    1999-07-01

    To study possible relations between indications, contraindications and surgical technique and stroke and/or death within 30 days of carotid endarterectomy (CEA). Analysis of hospital records for patients identified in a national vascular registry. During 1995-1996, 1518 patients were reported to the Swedish Vascular Registry - Swedvasc. Among these, the sixty-five with a stroke and/or death within 30 days were selected for study. Complete surgical records were reviewed by three approved reviewers using predetermined criteria for indications and possible errors. An error of surgical technique or postoperative management was found in eleven patients (17%). In six cases (9%) the indication was inappropriate or there was an obvious contraindication. The indication was questionable in fourteen (21.5%). Half of the patients (52.5%) had surgery for an appropriate indication, and no contraindication or error in surgical technique or management was identified. More than half the complications of CEA represent the "method cost", i.e. the indication, risk and surgical technique were correct. However, the stroke and/or death rate might be reduced if all operations conformed to agreed criteria. Copyright 1999 W.B. Saunders Company Ltd.

  18. SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.

    PubMed

    Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao

    2014-08-08

    Next-generation sequencers (NGSs) have become one of the main tools of current biology. To obtain useful insights from NGS data, it is essential to control low-quality portions of the data affected by technical errors such as air bubbles in the sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software which can handle ultra-high-throughput data with a user-friendly graphical user interface (GUI) and interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signals of technical errors during the sequencing run. The sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function, based on sequence read quality (Phred) scores, was applied to public whole human genome sequencing data, and we show that the overall mapping quality was improved. The detailed data evaluation and cleaning enabled by SUGAR would reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. Therefore, the software will be especially useful for controlling the quality of variant calls for low-population cells, e.g., cancers, in samples affected by technical errors during sequencing procedures.
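
    SUGAR's cleaning is subtile-based (it removes reads from error-affected flowcell regions), but the flavor of Phred-score-driven filtering can be shown with a simpler read-level stand-in. The sketch below is our simplification, not SUGAR's algorithm; it drops FASTQ reads whose mean Phred score falls below a cutoff.

```python
def mean_phred(quality_line, offset=33):
    """Mean Phred score of one FASTQ quality string (Sanger offset 33)."""
    return sum(ord(c) - offset for c in quality_line) / len(quality_line)

def clean_fastq(path_in, path_out, min_mean_q=20.0):
    """Copy only the reads whose mean Phred score reaches min_mean_q;
    a read-level simplification of quality-driven data cleaning."""
    with open(path_in) as fin, open(path_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]
            if not record[0]:
                break  # end of file
            if mean_phred(record[3].rstrip("\n")) >= min_mean_q:
                fout.writelines(record)

# Usage: clean_fastq("run.fastq", "run.cleaned.fastq", min_mean_q=25.0)
```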

  19. Human Factors Process Task Analysis Liquid Oxygen Pump Acceptance Test Procedure for the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.

    2002-01-01

    A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.
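
    The report does not state how the Risk Priority Numbers (RPNs) were computed. As a hedged illustration only, the Python sketch below uses the conventional FMEA formula RPN = severity x occurrence x detection on an assumed 1-10 scale, with invented example entries, to show how items would be ranked so that the highest RPNs receive the greatest consideration.

        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            requirement: str
            severity: int    # 1 (negligible) .. 10 (catastrophic); assumed scale
            occurrence: int  # 1 (rare) .. 10 (frequent)
            detection: int   # 1 (certain to detect) .. 10 (nearly undetectable)

            @property
            def rpn(self) -> int:
                return self.severity * self.occurrence * self.detection

        # Invented examples; the real entries live in appendix A of the report.
        modes = [
            FailureMode("Voice communication readback missed", 7, 4, 5),
            FailureMode("Test procedure step performed out of order", 8, 3, 6),
        ]
        for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"RPN {fm.rpn:4d}  {fm.requirement}")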

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiCostanzo, D; Ayan, A; Woollard, J

    Purpose: To automate the daily verification of each patient's treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator, while reducing the number of false positives (including jaw and gantry positioning errors) displayed in the Treatment History tab of Varian's Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks but may be significant contributors to delivery errors, so detecting them daily would be critical. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script run on a regular basis. To determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis in each TL was written to a CSV file and statistically analyzed to determine, for each axis accessible in the TLs, the tolerance beyond which a treatment is flagged for manual review. The software/scripts were tested by varying the tolerance values to verify correct behavior. After the tolerances were determined, multiple weeks of manual chart checks were performed alongside the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002 cm for the y-jaws, 0.01 cm for the x-jaws, and 0.5 MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
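
    The per-axis tolerances reported above translate directly into a simple flagging rule. The Python sketch below applies them to the per-axis maximum deviations parsed from one trajectory log; the tolerance values come from the abstract, while the dictionary keys, the example deviations, and the parsing step itself are assumptions.

        # Tolerances from the study: maximum allowed deviation per axis.
        TOLERANCES = {
            "collimator_deg": 0.025,
            "gantry_deg": 1.0,
            "jaw_y_cm": 0.002,
            "jaw_x_cm": 0.01,
            "mu": 0.5,
        }

        def flag_for_review(max_errors):
            """Return the axes whose maximum deviation exceeds the tolerance."""
            return [axis for axis, err in max_errors.items()
                    if abs(err) > TOLERANCES[axis]]

        # Example per-axis maxima from one trajectory log (values illustrative).
        log_maxima = {"collimator_deg": 0.010, "gantry_deg": 1.30,
                      "jaw_y_cm": 0.001, "jaw_x_cm": 0.004, "mu": 0.20}
        print(flag_for_review(log_maxima))  # -> ['gantry_deg']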
