Sample records for failure detection system

  1. Development of an adaptive failure detection and identification system for detecting aircraft control element failures

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1990-01-01

    A methodology for designing a failure detection and identification (FDI) system to detect and isolate control element failures in aircraft control systems is reviewed. An FDI system design for a modified B-737 aircraft resulting from this methodology is also reviewed, and the results of evaluating this system via simulation are presented. The FDI system performed well in a no-turbulence environment, but it experienced an unacceptable number of false alarms in atmospheric turbulence. An adaptive FDI system, which adjusts thresholds and other system parameters based on the estimated turbulence level, was developed and evaluated. The adaptive system performed well over all turbulence levels simulated, reliably detecting all but the smallest magnitude partially-missing-surface failures.

  2. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore being performed to improve the safety of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. Accurate sensor measurements in the FASSIP system are essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed so that indications of sensor failures can be detected early. The method uses Principal Component Analysis (PCA) to reduce the dimensionality of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T2 statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
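
    The PCA-based detection scheme described above can be illustrated with a brief sketch. This is not the paper's implementation: it uses synthetic correlated sensor data, retains three principal components, and sets the SPE and Hotelling's T2 limits from training-data percentiles rather than the formal chi-square/F approximations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "normal operation" data: 8 correlated sensors, 500 samples.
    latent = rng.normal(size=(500, 3))
    mixing = rng.normal(size=(3, 8))
    X_train = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

    # Fit PCA on normal data (mean-center, SVD, keep k components).
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = 3
    P = Vt[:k].T                         # principal directions (loadings)
    var = (s[:k] ** 2) / (len(Xc) - 1)   # variances of retained components

    def spe_and_t2(x):
        """Squared prediction error and Hotelling's T^2 for one sample."""
        xc = x - mean
        scores = xc @ P
        resid = xc - scores @ P.T
        return float(resid @ resid), float(np.sum(scores ** 2 / var))

    # Detection limits from the empirical training distribution
    # (a common alternative to the analytical chi-square/F approximations).
    train_stats = np.array([spe_and_t2(x) for x in X_train])
    spe_lim, t2_lim = np.percentile(train_stats, 99, axis=0)

    # Simulate a faulty sample: sensor 5 drifts by a large bias.
    x_fault = X_train[10].copy()
    x_fault[5] += 2.0

    for name, x in [("normal", X_train[20]), ("faulty", x_fault)]:
        spe, t2 = spe_and_t2(x)
        flag = spe > spe_lim or t2 > t2_lim
        print(f"{name}: SPE={spe:.3f} (lim {spe_lim:.3f}), "
              f"T2={t2:.3f} (lim {t2_lim:.3f}), fault={flag}")
    ```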

  3. Automatic patient respiration failure detection system with wireless transmission

    NASA Technical Reports Server (NTRS)

    Dimeff, J.; Pope, J. M.

    1968-01-01

    Automatic respiration failure detection system detects respiration failure in patients with a surgically implanted tracheostomy tube, and actuates an audible and/or visual alarm. The system incorporates a miniature radio transmitter so that the patient is unencumbered by wires yet can be monitored from a remote location.

  4. Design and evaluation of a failure detection and isolation algorithm for restructurable control systems

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.; Hsu, John Y.

    1986-01-01

    The use of a decentralized approach to failure detection and isolation for use in restructurable control systems is examined. This work has produced: (1) A method for evaluating fundamental limits to FDI performance; (2) Application using flight recorded data; (3) A working control element FDI system with maximal sensitivity to critical control element failures; (4) Extensive testing on realistic simulations; and (5) A detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.

  5. Failure Detecting Method of Fault Current Limiter System with Rectifier

    NASA Astrophysics Data System (ADS)

    Tokuda, Noriaki; Matsubara, Yoshio; Asano, Masakuni; Ohkuma, Takeshi; Sato, Yoshibumi; Takahashi, Yoshihisa

    A fault current limiter (FCL) is widely needed to suppress fault currents, particularly in trunk power systems connecting high-voltage transmission lines, such as the 500 kV class system that constitutes the nucleus of the electric power network. We proposed a new type of FCL (a rectifier-type FCL) consisting of solid-state diodes, a DC reactor, and a bypass AC reactor, and demonstrated its excellent performance by developing small 6.6 kV and 66 kV models. To maintain the reliability of the power system, it is important to detect failures of the power devices used in the rectifier under normal operating conditions. In this paper, we propose a new failure detecting method for power devices that is well suited to the rectifier-type FCL. The failure detecting system is simple and compact. We applied the proposed system to the 66 kV prototype single-phase model and successfully demonstrated detection of power device failures.

  6. Detection, Diagnosis and Prognosis: Contribution to the energy challenge: Proceedings of the Meeting of the Mechanical Failures Prevention Group

    NASA Technical Reports Server (NTRS)

    Shives, T. R. (Editor); Willard, W. A. (Editor)

    1981-01-01

    The contribution of failure detection, diagnosis and prognosis to the energy challenge is discussed. Areas of special emphasis included energy management, techniques for failure detection in energy-related systems, improved prognostic techniques for energy-related systems, and opportunities for detection, diagnosis and prognosis in the energy field.

  7. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, the probability of failure detection, the probability of failure isolation, the probability of damage effects, and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
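
    As an illustration of the Markov reliability evaluation described above (not the 27-state RSDIMU model itself), the sketch below propagates a three-state Markov chain with hypothetical failure-rate and coverage parameters to estimate system reliability over a mission time.

    ```python
    import numpy as np

    # Hypothetical parameters (per hour), NOT the values of the 27-state RSDIMU model.
    lam = 1e-4      # sensor failure rate
    cov = 0.98      # probability a failure is detected and isolated
    dt = 1.0        # time step, hours
    hours = 10_000

    # States: 0 = fully operational, 1 = one failure isolated (fail-operational),
    #         2 = system failed (absorbing).
    n_sensors = 4
    p01 = n_sensors * lam * cov * dt          # failure detected and isolated
    p02 = n_sensors * lam * (1 - cov) * dt    # undetected failure -> system loss
    p12 = (n_sensors - 1) * lam * dt          # second failure -> system loss

    T = np.array([
        [1 - p01 - p02, p01,     p02],
        [0.0,           1 - p12, p12],
        [0.0,           0.0,     1.0],
    ])

    state = np.array([1.0, 0.0, 0.0])         # start fully operational
    for _ in range(int(hours / dt)):
        state = state @ T                     # propagate the state distribution

    print(f"Reliability after {hours} h: {1 - state[2]:.6f}")
    print(f"P(fail-operational state): {state[1]:.6f}")
    ```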

  8. Artificial-neural-network-based failure detection and isolation

    NASA Astrophysics Data System (ADS)

    Sadok, Mokhtar; Gharsalli, Imed; Alouani, Ali T.

    1998-03-01

    This paper presents the design of a systematic failure detection and isolation system that uses the concept of failure sensitive variables (FSV) and artificial neural networks (ANN). The proposed approach was applied to tube leak detection in a utility boiler system. Results of the experimental testing are presented in the paper.

  9. Real-time diagnostics of the reusable rocket engine using on-line system identification

    NASA Technical Reports Server (NTRS)

    Guo, T.-H.; Merrill, W.; Duyar, A.

    1990-01-01

    A model-based failure diagnosis system has been proposed for real-time diagnosis of SSME failures. Actuation, sensor, and system degradation failure modes are all considered by the proposed system. In the case of SSME actuation failures, it was shown that real-time identification can effectively be used for failure diagnosis purposes. It is a direct approach since it reduces the detection, isolation, and the estimation of the extent of the failures to the comparison of parameter values before and after the failure. As with any model-based failure detection system, the proposed approach requires a fault model that embodies the essential characteristics of the failure process. The proposed diagnosis approach has the added advantage that it can be used as part of an intelligent control system for failure accommodation purposes.

  10. Control system failure monitoring using generalized parity relations. M.S. Thesis Interim Technical Report

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan Mauritz

    1991-01-01

    Many applications require that a control system be tolerant of the failure of its components. This is especially true for large space-based systems that must work unattended and with long periods between maintenance. Fault tolerance can be obtained by detecting the failure of a control system component, determining which component has failed, and reconfiguring the system so that the failed component is isolated from the controller. Component failure detection experiments that were conducted on an experimental space structure, the NASA Langley Mini-Mast, are presented. Two methodologies for failure detection and isolation (FDI) exist that do not require the specification of failure modes and are applicable to both actuators and sensors. These methods are known as the Failure Detection Filter and the method of Generalized Parity Relations. The latter method was applied to three different sensor types on the Mini-Mast. Failures were simulated in input-output data that were recorded during operation of the Mini-Mast. Both single and double sensor parity relations were tested, and the effect of several design parameters on the performance of these relations is discussed. The detection of actuator failures is also treated. It is shown that in all the cases it is possible to identify the parity relations directly from input-output data. Frequency domain analysis is used to explain the behavior of the parity relations.
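
    The record notes that generalized parity relations can be identified directly from input-output data. A minimal sketch of that idea is given below, using a simulated second-order plant with an injected sensor bias (hypothetical dynamics and noise levels, not the Mini-Mast data): a parity vector is taken as the singular vector associated with the smallest singular value of a stacked regressor matrix built from healthy data, and the resulting residual grows when the sensor fails.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simple stable SISO plant: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + u[k-1]
    def simulate(u, sensor_bias_from=None, bias=0.0):
        y = np.zeros_like(u)
        for k in range(2, len(u)):
            y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + u[k-1]
        y_meas = y + 0.01 * rng.normal(size=len(u))
        if sensor_bias_from is not None:
            y_meas[sensor_bias_from:] += bias     # simulated sensor failure
        return y_meas

    u = rng.normal(size=2000)
    y_healthy = simulate(u)

    # Stack [y[k], y[k-1], y[k-2], u[k-1], u[k-2]]; a parity vector is a (near)
    # annihilator of this regressor matrix, identified from healthy data only.
    def regressors(y, u):
        k = np.arange(2, len(u))
        return np.column_stack([y[k], y[k-1], y[k-2], u[k-1], u[k-2]])

    Z = regressors(y_healthy, u)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    parity = Vt[-1]                   # direction of the smallest singular value

    def residual(y, u):
        return regressors(y, u) @ parity

    print("healthy residual RMS:", np.sqrt(np.mean(residual(y_healthy, u) ** 2)))

    y_faulty = simulate(u, sensor_bias_from=1000, bias=0.5)
    r = residual(y_faulty, u)
    print("faulty residual RMS :", np.sqrt(np.mean(r[1000:] ** 2)))
    ```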

  11. Syndromic surveillance for health information system failures: a feasibility study.

    PubMed

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-05-01

    To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65-0.85. Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures.
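
    As a toy illustration of the statistical process control idea described above (not the authors' time-series models or hospital data), the sketch below monitors a simulated hourly record-creation count with a one-sided CUSUM chart and flags a 24 h period in which 10% of records are silently lost.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical hourly counts of laboratory records created (30-day baseline).
    baseline = rng.poisson(lam=200, size=24 * 30).astype(float)
    mu, sigma = baseline.mean(), baseline.std(ddof=1)

    # Simulated 24 h failure: 10% of records silently lost at the record level.
    normal_day = rng.poisson(lam=200, size=24).astype(float)
    failure_day = rng.poisson(lam=200 * 0.90, size=24).astype(float)

    def cusum_lower(x, mu, sigma, k=0.5, h=5.0):
        """One-sided (lower) CUSUM chart: flags a sustained drop in the index."""
        s, alarms = 0.0, []
        for xi in x:
            z = (xi - mu) / sigma
            s = min(0.0, s + z + k)        # accumulate evidence of a decrease
            alarms.append(s < -h)
        return alarms

    for label, day in [("normal day", normal_day), ("failure day", failure_day)]:
        alarms = cusum_lower(day, mu, sigma)
        first = alarms.index(True) if any(alarms) else None
        print(f"{label}: alarm at hour {first}")
    ```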

  12. A geometric approach to failure detection and identification in linear systems

    NASA Technical Reports Server (NTRS)

    Massoumnia, M. A.

    1986-01-01

    Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, assuming either that (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency-domain interpretation of the results is used to relate the concept of failure-sensitive observers to the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.

  13. A dual-mode generalized likelihood ratio approach to self-reorganizing digital flight control system design

    NASA Technical Reports Server (NTRS)

    Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.

    1975-01-01

    The research is reported on the problems of failure detection and reliable system design for digital aircraft control systems. Failure modes, cross detection probability, wrong time detection, application of performance tools, and the GLR computer package are discussed.

  14. Study of an automatic trajectory following control system

    NASA Technical Reports Server (NTRS)

    Vanlandingham, H. F.; Moose, R. L.; Zwicke, P. E.; Lucas, W. H.; Brinkley, J. D.

    1983-01-01

    It is shown that the estimator part of the Modified Partitioned Adaptive Controller (MPAC), developed for the nonlinear aircraft dynamics of a small jet transport, can adapt to sensor failures. In addition, an investigation is made into the potential usefulness of the configuration detection technique used in the MPAC, and a failure detection filter is developed, together with a decision technique that determines how a noisy plant output is associated with a line or plane characteristic of a failure. It is shown by computer simulation that the estimator part and the configuration detection part of the MPAC can readily adapt to actuator and sensor failures, and that the failure detection filter technique cannot detect actuator or sensor failures accurately for this type of system because of plant modeling errors. In addition, it is shown that the decision technique, developed for the failure detection filter, can accurately determine that the plant output is related to the characteristic line or plane in the presence of sensor noise.

  15. Syndromic surveillance for health information system failures: a feasibility study

    PubMed Central

    Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico

    2013-01-01

    Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193

  16. Health management system for rocket engines

    NASA Technical Reports Server (NTRS)

    Nemeth, Edward

    1990-01-01

    The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.

  17. A Fault Tolerant System for an Integrated Avionics Sensor Configuration

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Lancraft, R. E.

    1984-01-01

    An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point-mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual-redundant sensor complement, are presented for bias, hardover, null, ramp, increased noise, and scale factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing an excellent detection speed for bias failures in the MLS, indicated airspeed, attitude, and radar altimeter sensors.

  18. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    A high-angle-of-attack flush airdata sensing system was installed and flight tested on the F-18 High Alpha Research Vehicle at NASA-Dryden. This system uses a matrix of pressure orifices arranged in concentric circles on the nose of the vehicle to determine angles of attack, angles of sideslip, dynamic pressure, and static pressure as well as other airdata parameters. Results presented use an arrangement of 11 symmetrically distributed ports on the aircraft nose. Experience with data from this sensing system indicates that the primary concern for real-time implementation is the detection and management of overall system and individual pressure sensor failures. The multiple-port sensing system is more tolerant of small disturbances in the measured pressure data than conventional probe-based intrusive airdata systems. However, under adverse circumstances, large undetected failures in individual pressure ports can result in algorithm divergence and catastrophic failure of the entire system. It is shown how system and individual port failures may be detected using chi-square analysis. Once identified, the effects of failures are eliminated using weighted least squares.

  19. A dual-mode generalized likelihood ratio approach to self-reorganizing digital flight control system design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time that the failure occurred, thus allowing it to be sensitive to new data and consequently increasing the chances for fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
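
    A much-simplified sketch of the GLR idea is given below: the innovations of a (hypothetical) Kalman filter are white and zero-mean under no failure, and the GLR statistic searches over candidate failure onset times for a step change in their mean. The signal model, bias size, window, and threshold are illustrative assumptions, not the F-8 implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Innovations of a hypothetical Kalman filter: zero-mean white noise with no
    # failure, plus a step bias of unknown onset time after a failure.
    sigma = 1.0
    n = 300
    innov = sigma * rng.normal(size=n)
    onset, bias = 200, 1.2
    innov[onset:] += bias                     # simulated abrupt failure

    def glr_mean_jump(innov, sigma, window=50):
        """GLR statistic for a step change in the mean of white innovations.

        For each candidate onset time theta within the window, the statistic is
        (sum of innovations since theta)^2 / (sigma^2 * samples since theta);
        the GLR takes the maximum over theta.
        """
        stats = np.zeros(len(innov))
        for k in range(1, len(innov)):
            best = 0.0
            for theta in range(max(0, k - window), k + 1):
                seg = innov[theta:k + 1]
                best = max(best, seg.sum() ** 2 / (sigma ** 2 * len(seg)))
            stats[k] = best
        return stats

    threshold = 25.0                          # set from a desired false-alarm rate
    stats = glr_mean_jump(innov, sigma)
    alarm = int(np.argmax(stats > threshold)) if np.any(stats > threshold) else None
    print("failure injected at k =", onset, "| GLR alarm at k =", alarm)
    ```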

  20. Sensor failure and multivariable control for airbreathing propulsion systems. Ph.D. Thesis - Dec. 1979 Final Report

    NASA Technical Reports Server (NTRS)

    Behbehani, K.

    1980-01-01

    A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time varying and time invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed and the effects of model degradation are studied.

  1. SCADA alarms processing for wind turbine component failure detection

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Reder, M.; Melero, J. J.

    2016-09-01

    Wind turbine failure and downtime can often compromise the profitability of a wind farm due to their high impact on operation and maintenance (O&M) costs. Early detection of failures can facilitate the changeover from corrective maintenance towards a predictive approach. This paper presents a cost-effective methodology that combines various alarm analysis techniques, using data from the Supervisory Control and Data Acquisition (SCADA) system, in order to detect component failures. The approach categorises the alarms according to a reviewed taxonomy, turning overwhelming data into valuable information for assessing component status. Different alarm analysis techniques are then applied for two purposes: the evaluation of the SCADA alarm system's capability to detect failures, and the investigation of whether faults in some components are followed by failure occurrences in others. Various case studies are presented and discussed. The study highlights the relationship between faulty behaviour in different components and between failures and adverse environmental conditions.

  2. Advanced detection, isolation and accommodation of sensor failures: Real-time evaluation

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Bruton, William M.

    1987-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation (ADIA) Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines by using analytical redundancy to detect sensor failures. The results of a real-time hybrid computer evaluation of the ADIA algorithm are presented. Minimum detectable levels of sensor failures for an F100 engine control system are determined. Also included are details about the microprocessor implementation of the algorithm as well as a description of the algorithm itself.

  3. On-Board Particulate Filter Failure Prevention and Failure Diagnostics Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alex; Ragaller, Paul; Herman, Andrew

    The increasing use of diesel and gasoline particulate filters requires advanced on-board diagnostics (OBD) to prevent and detect filter failures and malfunctions. Early detection of upstream (engine-out) malfunctions is paramount to preventing irreversible damage to downstream aftertreatment system components. Such early detection can mitigate the failure of the particulate filter resulting in the escape of emissions exceeding permissible limits and extend the component life. However, despite best efforts at early detection and filter failure prevention, the OBD system must also be able to detect filter failures when they occur. In this study, radio frequency (RF) sensors were used to directly monitor the particulate filter state of health for both gasoline particulate filter (GPF) and diesel particulate filter (DPF) applications. The testing included controlled engine dynamometer evaluations, which characterized soot slip from various filter failure modes, as well as on-road fleet vehicle tests. The results show a high sensitivity to detect conditions resulting in soot leakage from the particulate filter, as well as potential for direct detection of structural failures including internal cracks and melted regions within the filter media itself. Furthermore, the measurements demonstrate, for the first time, the capability to employ a direct and continuous monitor of particulate filter diagnostics to both prevent and detect potential failure conditions in the field.

  4. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft supplies the measurements that are applied to the regression algorithms. The sensing techniques are applied to the F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
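
    A small sketch of the chi-squared consistency check with weighted-least-squares fault removal is shown below. It uses a generic redundant linear measurement model with 11 "ports" and 4 unknowns standing in for the nonlinear HI-FADS pressure model; the failure size, noise level, and threshold are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Generic redundant linear measurement model y = H x + noise (11 ports,
    # 4 unknowns), standing in for the nonlinear HI-FADS pressure model.
    n_ports, n_states = 11, 4
    H = rng.normal(size=(n_ports, n_states))
    x_true = np.array([1.0, -0.5, 0.3, 2.0])
    sigma = 0.02
    y = H @ x_true + sigma * rng.normal(size=n_ports)
    y[6] += 0.5                                # simulated hard failure of port 6

    def wls_chi2(H, y, weights):
        """Weighted least-squares fit and chi-squared residual statistic."""
        W = np.diag(weights / sigma ** 2)
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
        r = y - H @ x_hat
        return x_hat, r, float(r @ W @ r)

    weights = np.ones(n_ports)
    x_hat, r, chi2 = wls_chi2(H, y, weights)
    dof = n_ports - n_states
    print(f"chi2 = {chi2:.1f} with {dof} dof")     # far above dof -> failure

    if chi2 > 3 * dof:                             # crude threshold for the sketch
        bad = int(np.argmax(np.abs(r) * weights))  # most inconsistent port
        weights[bad] = 0.0                         # eliminate it from the solution
        x_hat, r, chi2 = wls_chi2(H, y, weights)
        print(f"port {bad} removed, chi2 = {chi2:.1f}, "
              f"state error = {np.linalg.norm(x_hat - x_true):.4f}")
    ```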

  5. Epidemic failure detection and consensus for extreme parallelism

    DOE PAGES

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...

    2017-02-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
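
    The gossip-based failure detection idea can be illustrated with a toy, single-process simulation (below). This is not one of the paper's three algorithms or its Extreme-scale Simulator implementation: every process keeps a heartbeat table, pushes it to one random peer per cycle, and suspects any rank whose entry has not been refreshed within a timeout; the loop reports the cycle at which all survivors agree on the failed set.

    ```python
    import random

    random.seed(5)

    N_PROCS = 32
    FAILED = {7, 19}        # ranks that have crashed before the first cycle
    T_FAIL = 12             # cycles without a heartbeat update before suspicion

    # Every process keeps, for each rank, the largest heartbeat counter it has
    # seen and the local cycle at which that entry was last refreshed.
    heartbeat = [[0] * N_PROCS for _ in range(N_PROCS)]
    updated_at = [[0] * N_PROCS for _ in range(N_PROCS)]

    def gossip_cycle(cycle):
        for p in range(N_PROCS):
            if p in FAILED:
                continue                       # crashed processes stop gossiping
            heartbeat[p][p] += 1
            updated_at[p][p] = cycle
            q = random.choice([r for r in range(N_PROCS) if r != p])
            if q in FAILED:
                continue                       # message to a dead process is lost
            for r in range(N_PROCS):           # q merges p's table entry-wise
                if heartbeat[p][r] > heartbeat[q][r]:
                    heartbeat[q][r] = heartbeat[p][r]
                    updated_at[q][r] = cycle

    def suspected_by(p, cycle):
        return {r for r in range(N_PROCS)
                if r != p and cycle - updated_at[p][r] > T_FAIL}

    for cycle in range(1, 201):
        gossip_cycle(cycle)
        views = [suspected_by(p, cycle) for p in range(N_PROCS) if p not in FAILED]
        if all(v == FAILED for v in views):    # all survivors agree on the failed set
            print(f"consensus on failed ranks {sorted(FAILED)} after {cycle} cycles")
            break
    ```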

  6. Fault Detection and Isolation for Hydraulic Control

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Pressure sensors and isolation valves act to shut down a defective servochannel. The redundant hydraulic system indirectly senses a failure in any of its electrical control channels and mechanically isolates the hydraulic channel controlled by the faulty electrical channel so that it cannot participate in operating the system. With the failure-detection and isolation technique, the system can sustain two failed channels and still function at full performance levels. The scheme is useful on aircraft or other systems with hydraulic servovalves where failure cannot be tolerated.

  7. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  8. Turbofan engine demonstration of sensor failure detection

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood

    1991-01-01

    In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full (non-afterburning) power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.

  9. Failure detection and identification for a reconfigurable flight control system

    NASA Technical Reports Server (NTRS)

    Dallery, Francois

    1987-01-01

    Failure detection and identification logic for a fault-tolerant longitudinal control system was investigated. Aircraft dynamics were based upon the cruise condition for a hypothetical transonic business jet transport configuration. The fault-tolerant control system consists of conventional control and estimation plus a new outer loop containing failure detection, identification, and reconfiguration (FDIR) logic. It is assumed that the additional logic has access to all measurements, as well as to the outputs of the control and estimation logic. The pilot may also command the FDIR logic to perform special tests.

  10. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for detecting abrupt changes (such as failures) in stochastic dynamical systems are surveyed. The survey concentrates on the class of linear systems, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  11. Evaluation of a fault tolerant system for an integrated avionics sensor configuration with TSRV flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1985-01-01

    The performance analysis results of a fault inferring nonlinear detection system (FINDS) using sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment are presented. First, a statistical analysis of the flight-recorded sensor data was made in order to determine the characteristics of the sensor inaccuracies. Next, modifications were made to the detection and decision functions in the FINDS algorithm in order to improve false alarm and failure detection performance under the real modelling errors present in the flight data. Finally, the failure detection and false alarm performance of the FINDS algorithm were analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five-minute flight data. In general, the detection speed, failure level estimation, and false alarm performance showed a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed was faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers.

  12. EEMD-based wind turbine bearing failure detection using the generator stator current homopolar component

    NASA Astrophysics Data System (ADS)

    Amirat, Yassine; Choqueuse, Vincent; Benbouzid, Mohamed

    2013-12-01

    Failure detection has always been a demanding task in the electrical machines community; it has become more challenging in wind energy conversion systems because the sustainability and viability of wind farms are highly dependent on the reduction of operational and maintenance costs. Indeed, the most efficient way of reducing these costs is to continuously monitor the condition of these systems. This allows for early detection of generator health degeneration, facilitating a proactive response, minimizing downtime, and maximizing productivity. This paper then provides an assessment of a failure detection technique based on the homopolar component of the generator stator current and highlights the use of ensemble empirical mode decomposition as a tool for failure detection in wind turbine generators for stationary and non-stationary cases.

  13. Integrated failure detection and management for the Space Station Freedom external active thermal control system

    NASA Technical Reports Server (NTRS)

    Mesloh, Nick; Hill, Tim; Kosyk, Kathy

    1993-01-01

    This paper presents the integrated approach toward failure detection, isolation, and recovery/reconfiguration to be used for the Space Station Freedom External Active Thermal Control System (EATCS). The on-board and on-ground diagnostic capabilities of the EATCS are discussed. Time- and safety-critical features, as well as noncritical failures, and the detection coverage provided for each by existing capabilities are reviewed. The allocation of responsibility between on-board software and ground-based systems, to be shown during ground testing at the Johnson Space Center, is described. Failure isolation capabilities allocated to the ground include some functionality originally found on orbit but moved to the ground to reduce on-board resource requirements. Complex failures requiring the analysis of multiple external variables, such as environmental conditions, heat loads, or station attitude, are also allocated to ground personnel.

  14. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for the detection of abrupt changes (such as failures) in stochastic dynamical systems were surveyed. The class of linear systems were emphasized, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  15. Failure of the MicroScan WalkAway System To Detect Heteroresistance to Carbapenems in a Patient with Enterobacter aerogenes Bacteremia

    PubMed Central

    Gordon, N. C.; Wareham, D. W.

    2009-01-01

    We report the failure of the automated MicroScan WalkAway system to detect carbapenem heteroresistance in Enterobacter aerogenes. Carbapenem resistance has become an increasing concern in recent years, and robust surveillance is required to prevent dissemination of resistant strains. Reliance on automated systems may delay the detection of emerging resistance. PMID:19641071

  16. An intelligent control system for failure detection and controller reconfiguration

    NASA Technical Reports Server (NTRS)

    Biswas, Saroj K.

    1994-01-01

    We present an architecture of an intelligent restructurable control system to automatically detect failure of system components, assess its impact on system performance and safety, and reconfigure the controller for performance recovery. Fault detection is based on neural network associative memories and pattern classifiers, and is implemented using a multilayer feedforward network. Details of the fault detection network along with simulation results on health monitoring of a dc motor have been presented. Conceptual developments for fault assessment using an expert system and controller reconfiguration using a neural network are outlined.
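
    A minimal sketch of the neural-network fault detection idea (not the paper's associative-memory architecture or its dc motor data) is given below: a small multilayer feedforward network, written in plain NumPy, is trained to classify synthetic "healthy" versus "faulty" monitoring feature vectors.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic monitoring features (e.g. current and speed statistics):
    # class 0 = healthy, class 1 = faulty. Purely illustrative data.
    n = 400
    healthy = rng.normal([1.0, 0.0], 0.15, size=(n, 2))
    faulty = rng.normal([1.4, 0.6], 0.25, size=(n, 2))
    X = np.vstack([healthy, faulty])
    y = np.hstack([np.zeros(n), np.ones(n)])

    # One hidden layer, sigmoid activations, plain full-batch gradient descent.
    W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(2000):
        h = sig(X @ W1 + b1)                  # forward pass
        p = sig(h @ W2 + b2).ravel()
        dz2 = (p - y)[:, None] / len(y)       # backprop of cross-entropy loss
        dW2 = h.T @ dz2; db2 = dz2.sum(0)
        dh = dz2 @ W2.T * h * (1 - h)
        dW1 = X.T @ dh; db1 = dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    pred = sig(sig(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
    print("training accuracy:", (pred == y).mean())
    ```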

  17. Detection of system failures in multi-axes tasks. [pilot monitored instrument approach

    NASA Technical Reports Server (NTRS)

    Ephrath, A. R.

    1975-01-01

    The effects of the pilot's participation mode in the control task on his workload level and failure detection performance were examined considering a low visibility landing approach. It is found that the participation mode had a strong effect on the pilot's workload, the induced workload being lowest when the pilot acted as a monitoring element during a coupled approach and highest when the pilot was an active element in the control loop. The effects of workload and participation mode on failure detection were separated. The participation mode was shown to have a dominant effect on the failure detection performance, with a failure in a monitored (coupled) axis being detected significantly faster than a comparable failure in a manually controlled axis.

  18. Failure detection system risk reduction assessment

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.

  19. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.

  20. Performance analysis of a fault inferring nonlinear detection system algorithm with integrated avionics flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1985-01-01

    This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.

  1. Real-time failure control (SAFD)

    NASA Technical Reports Server (NTRS)

    Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.

    1990-01-01

    The Real Time Failure Control program involves the development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based and entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major sections of research are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
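
    A simplified sketch of the SAFD-style signal-based logic is shown below: a measurement channel is compared against a band derived from the mean and standard deviation of a steady-state reference window, and an anomaly is declared only after a persistence count of consecutive out-of-band samples. The sample rate, band width, persistence count, and injected drift are illustrative assumptions, not SSME values.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical engine measurement sampled at 25 Hz: steady operation
    # followed by a slow anomalous drift starting at t = 30 s.
    fs, t_total = 25, 40
    t = np.arange(0, t_total, 1 / fs)
    signal = 3000 + 15 * rng.normal(size=t.size)       # nominal: mean 3000, sigma 15
    signal[t > 30] += 4.0 * (t[t > 30] - 30) ** 2      # injected anomaly

    # Simplified SAFD-style logic: band from a steady-state reference window,
    # plus a persistence count of consecutive out-of-band samples.
    ref = signal[(t > 5) & (t < 15)]
    mu, sigma = ref.mean(), ref.std(ddof=1)
    k, persistence = 4.0, 10

    out_of_band = np.abs(signal - mu) > k * sigma
    count, alarm_index = 0, None
    for i, flag in enumerate(out_of_band):
        count = count + 1 if flag else 0
        if count >= persistence:
            alarm_index = i
            break

    if alarm_index is not None:
        print(f"anomaly flagged at t = {t[alarm_index]:.2f} s "
              f"(band = {mu:.0f} +/- {k * sigma:.0f})")
    ```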

  2. Epidemic failure detection and consensus for extreme parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.

  3. 46 CFR 161.002-8 - Automatic fire detecting systems, general requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... detecting system shall consist of a power supply; a control unit on which are located visible and audible... control unit. Power failure alarm devices may be separately housed from the control unit and may be combined with other power failure alarm systems when specifically approved. (b) [Reserved] [21 FR 9032, Nov...

  4. 46 CFR 161.002-8 - Automatic fire detecting systems, general requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... detecting system shall consist of a power supply; a control unit on which are located visible and audible... control unit. Power failure alarm devices may be separately housed from the control unit and may be combined with other power failure alarm systems when specifically approved. (b) [Reserved] [21 FR 9032, Nov...

  5. Expert systems for automated maintenance of a Mars oxygen production system

    NASA Technical Reports Server (NTRS)

    Ash, Robert L.; Huang, Jen-Kuang; Ho, Ming-Tsang

    1989-01-01

    A prototype expert system was developed for maintaining autonomous operation of a Mars oxygen production system. Normal operating conditions and failure modes are tested and identified according to certain desired criteria. Several schemes for failure detection and isolation, using forward chaining, backward chaining, and knowledge-based and rule-based reasoning, are devised to perform several housekeeping functions. These functions include self-health checkout, an emergency shutdown program, fault detection, and conventional control activities. An effort was made to derive the dynamic model of the system using the bond-graph technique in order to develop a model-based failure detection and isolation scheme based on estimation methods. Finally, computer simulations and experimental results demonstrated the feasibility of the expert system, and a preliminary reliability analysis for the oxygen production system is also provided.

  6. Bearing system

    DOEpatents

    Kapich, Davorin D.

    1987-01-01

    A bearing system includes backup bearings for supporting a rotating shaft upon failure of primary bearings. In the preferred embodiment, the backup bearings are rolling element bearings having their rolling elements disposed out of contact with their associated respective inner races during normal functioning of the primary bearings. Displacement detection sensors are provided for detecting displacement of the shaft upon failure of the primary bearings. Upon detection of the failure of the primary bearings, the rolling elements and inner races of the backup bearings are brought into mutual contact by axial displacement of the shaft.

  7. Autonomous Component Health Management with Failed Component Detection, Identification, and Avoidance

    NASA Technical Reports Server (NTRS)

    Davis, Robert N.; Polites, Michael E.; Trevino, Luis C.

    2004-01-01

    This paper details a novel scheme for autonomous component health management (ACHM) with failed actuator detection and failed sensor detection, identification, and avoidance. This new scheme has features that far exceed the performance of systems with triple-redundant sensing and voting, yet requires fewer sensors and could be applied to any system with redundant sensing. Relevant background to the ACHM scheme is provided, and the simulation results for the application of that scheme to a single-axis spacecraft attitude control system with a third-order plant and dual-redundant measurement of system states are presented. ACHM fulfills key functions needed by an integrated vehicle health monitoring (IVHM) system. It is autonomous and adaptive; works in real time; provides optimal state estimation; identifies failed components; avoids failed components; reconfigures for multiple failures; reconfigures for intermittent failures; works for hard-over, soft, and zero-output failures; and works for both open- and closed-loop systems. The ACHM scheme combines a prefilter that generates preliminary state estimates, detects and identifies failed sensors and actuators, and avoids the use of failed sensors in state estimation with a fixed-gain Kalman filter that generates optimal state estimates and provides model-based state estimates that comprise an integral part of the failure detection logic. The results show that ACHM successfully isolates multiple persistent and intermittent hard-over, soft, and zero-output failures. It is now ready to be tested on a computer model of an actual system.

  8. Fault detection and identification in missile system guidance and control: a filtering approach

    NASA Astrophysics Data System (ADS)

    Padgett, Mary Lou; Evers, Johnny; Karplus, Walter J.

    1996-03-01

    Real-world applications of computational intelligence can enhance the fault detection and identification capabilities of a missile guidance and control system. A simulation of a bank-to-turn missile demonstrates that actuator failure may cause the missile to roll and miss the target. Failure of one fin actuator can be detected using a filter and depicting the filter output as fuzzy numbers. The properties and limitations of artificial neural networks fed by these fuzzy numbers are explored. A suite of networks is constructed to (1) detect a fault and (2) determine which fin (if any) failed. Both the zero-order moment term and the fin rate term show changes during actuator failure. Simulations address the following questions: (1) How bad does the actuator failure have to be for detection to occur? (2) How bad does the actuator failure have to be for fault detection and isolation to occur? (3) Are both the zero-order moment and fin rate terms needed? A suite of target trajectories is simulated, and the properties and limitations of the approach are reported. In some cases, detection of the failed actuator occurs within 0.1 second, and isolation of the failure occurs 0.1 second after that. Suggestions for further research are offered.

  9. An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks

    PubMed Central

    Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei

    2014-01-01

    The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, the adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, complexity of network and high churn lead to high message loss rate. To reduce the impact on detection accuracy, baseline detection strategy based on retransmission mechanism has been employed widely in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient service of failure detection in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and on this basis, an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P network. PMID:25198005
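
    The general adaptive idea behind QoS-driven failure detectors (estimating the next expected heartbeat arrival from recent inter-arrival history and adding a safety margin) can be sketched as below. This is the Chen-style baseline concept, not the B-AFD algorithm or its retransmission-aware QoS model; the heartbeat period, jitter, loss rate, and margin are illustrative assumptions.

    ```python
    import random
    from collections import deque

    random.seed(8)

    class AdaptiveHeartbeatDetector:
        """Estimates the next expected heartbeat arrival from a sliding window
        of past arrivals plus a safety margin (Chen-style adaptive detection)."""

        def __init__(self, window=100, margin=0.2):
            self.arrivals = deque(maxlen=window)
            self.margin = margin

        def record(self, arrival_time):
            self.arrivals.append(arrival_time)

        def next_deadline(self):
            if len(self.arrivals) < 2:
                return float("inf")
            arr = list(self.arrivals)
            mean_gap = sum(b - a for a, b in zip(arr, arr[1:])) / (len(arr) - 1)
            return arr[-1] + mean_gap + self.margin

        def suspects(self, now):
            return now > self.next_deadline()

    # Simulate heartbeats every ~1 s with jitter and occasional message loss.
    det = AdaptiveHeartbeatDetector()
    t, crash_time = 0.0, 60.0
    while t < 90.0:
        t += random.gauss(1.0, 0.05)
        if t >= crash_time:
            break                                  # the monitored peer has crashed
        if random.random() < 0.05:
            continue                               # heartbeat lost in the network
        det.record(t)

    for now in [crash_time + 0.5, crash_time + 1.0, crash_time + 2.0]:
        print(f"t = {now:5.1f} s, suspected = {det.suspects(now)}")
    ```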

  10. Investigation of the cross-ship comparison monitoring method of failure detection in the HIMAT RPRV. [digital control techniques using airborne microprocessors

    NASA Technical Reports Server (NTRS)

    Wolf, J. A.

    1978-01-01

    The highly maneuverable aircraft technology (HIMAT) remotely piloted research vehicle (RPRV) uses cross-ship comparison monitoring of the actuator ram positions to detect a failure in the aileron, canard, and elevator control surface servosystems. Some possible sources of nuisance trips for this failure detection technique are analyzed. A FORTRAN model of the simplex servosystems and the failure detection technique were utilized to provide a convenient means of changing parameters and introducing system noise. The sensitivity of the technique to differences between servosystems and operating conditions was determined. The cross-ship comparison monitoring method presently appears to be marginal in its capability to detect an actual failure and to withstand nuisance trips.

  11. Flight test results of the strapdown ring laser gyro tetrad inertial navigation system

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.

    1983-01-01

    A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser-gyro inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off-line parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue successfully to navigate by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the 4 yr of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.

  12. Inductive System Monitors Tasks

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Inductive Monitoring System (IMS) software developed at Ames Research Center uses artificial intelligence and data mining techniques to build system-monitoring knowledge bases from archived or simulated sensor data. This information is then used to detect unusual or anomalous behavior that may indicate an impending system failure. Currently helping analyze data from systems that help fly and maintain the space shuttle and the International Space Station (ISS), IMS groups nominal sensor data into classes of expected values; these data classes are then used to build a monitoring knowledge base. In real time, IMS performs monitoring functions: determining and displaying the degree of deviation from nominal performance. IMS trend analyses can detect conditions that may indicate a failure or required system maintenance. The development of IMS was motivated by the difficulty of producing detailed diagnostic models of some system components due to complexity or unavailability of design information. Successful applications have ranged from real-time monitoring of aircraft engine and control systems to anomaly detection in space shuttle and ISS data. IMS was used on shuttle missions STS-121, STS-115, and STS-116 to search the Wing Leading Edge Impact Detection System (WLEIDS) data for signs of possible damaging impacts during launch. It independently verified findings of the WLEIDS Mission Evaluation Room (MER) analysts and indicated additional points of interest that were subsequently investigated by the MER team. In support of the Exploration Systems Mission Directorate, IMS is being deployed as an anomaly detection tool on ISS mission control consoles in the Johnson Space Center Mission Operations Directorate. IMS has been trained to detect faults in the ISS Control Moment Gyroscope (CMG) systems. In laboratory tests, it has already detected several minor anomalies in real-time CMG data. When tested on archived data, IMS was able to detect precursors of the CMG1 failure nearly 15 hours in advance of the actual failure event. In the Aeronautics Research Mission Directorate, IMS successfully performed real-time engine health analysis. IMS was able to detect simulated failures and actual engine anomalies in an F/A-18 aircraft during the course of 25 test flights. IMS is also being used in colla
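
    As a rough illustration of the inductive-monitoring idea (not the NASA IMS code itself), the sketch below clusters nominal sensor vectors and, at run time, reports the distance from the current vector to the nearest nominal cluster as a deviation score; the names and the use of k-means are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

class ToyInductiveMonitor:
    """Toy illustration of inductive monitoring (not the NASA IMS software).

    Nominal training vectors are grouped into clusters; at run time the
    deviation score is the distance from the current sensor vector to the
    nearest cluster centre, so larger scores mean more anomalous behaviour.
    """

    def __init__(self, n_clusters=10):
        self.model = KMeans(n_clusters=n_clusters, n_init=10)

    def train(self, nominal_data):
        """nominal_data: array of shape (n_samples, n_sensors) from normal operation."""
        self.model.fit(nominal_data)

    def deviation(self, sample):
        """Return the distance from `sample` to the nearest nominal cluster centre."""
        distances = self.model.transform(np.asarray(sample).reshape(1, -1))
        return float(distances.min())
```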

  13. 40 CFR 63.164 - Standards: Compressors.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an... indicates failure of the seal system, the barrier fluid system, or both. (f) If the sensor indicates failure...

  14. 40 CFR 63.164 - Standards: Compressors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an... indicates failure of the seal system, the barrier fluid system, or both. (f) If the sensor indicates failure...

  15. A solenoid failure detection system for cold gas attitude control jet valves

    NASA Technical Reports Server (NTRS)

    Johnston, P. A.

    1970-01-01

    The development of a solenoid valve failure detection system is described. The technique requires the addition of a radioactive gas to the propellant of a cold gas jet attitude control system. Solenoid failure is detected with an avalanche radiation detector located in the jet nozzle which senses the radiation emitted by the leaking radioactive gas. Measurements of carbon monoxide leakage rates through a Mariner type solenoid valve are presented as a function of gas activity and detector configuration. A cylindrical avalanche detector with a factor of 40 improvement in leak sensitivity is proposed for flight systems because it allows the quantity of radioactive gas that must be added to the propellant to be reduced to a practical level.

  16. Triplexer Monitor Design for Failure Detection in FTTH System

    NASA Astrophysics Data System (ADS)

    Fu, Minglei; Le, Zichun; Hu, Jinhua; Fei, Xia

    2012-09-01

    The triplexer was one of the key components in FTTH systems, which employed an analog overlay channel for video broadcasting in addition to bidirectional digital transmission. To enhance the survivability of the triplexer as well as the robustness of the FTTH system, a multi-port device named the triplexer monitor was designed and realized, by which failures at triplexer ports can be detected and localized. The triplexer monitor was composed of integrated circuits, and its four input ports were connected with beam splitters whose power division ratio was 95:5. By detecting the sampled optical signal from the beam splitters, the triplexer monitor tracked the status of the four ports of the triplexer (i.e., the 1310 nm, 1490 nm, 1550 nm, and com ports). In this paper, the operation scenario of the triplexer monitor with external optical devices was addressed, and the integrated circuit structure of the triplexer monitor was also given. Furthermore, a failure localization algorithm was proposed, based on a state transition diagram. In order to measure the failure detection and localization times for different failed ports, an experimental test-bed was built. Experimental results showed that the detection time for a failure at the 1310 nm port was less than 8.20 ms; for a failure at the 1490 nm or 1550 nm port it was less than 8.20 ms, and for a failure at the com port it was less than 7.20 ms.

  17. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1992-01-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot-fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady-state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.

  18. A Review of Transmission Diagnostics Research at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Zakajsek, James J.

    1994-01-01

    This paper presents a summary of the transmission diagnostics research work conducted at NASA Lewis Research Center over the last four years. In 1990, the Transmission Health and Usage Monitoring Research Team at NASA Lewis conducted a survey to determine the critical needs of the diagnostics community. Survey results indicated that experimental verification of gear and bearing fault detection methods, improved fault detection in planetary systems, and damage magnitude assessment and prognostics research were all critical to a highly reliable health and usage monitoring system. In response to this, a variety of transmission fault detection methods were applied to experimentally obtained fatigue data. Failure modes of the fatigue data include a variety of gear pitting failures, tooth wear, tooth fracture, and bearing spalling failures. Overall results indicate that, of the gear fault detection techniques, no one method can successfully detect all possible failure modes. The more successful methods need to be integrated into a single more reliable detection technique. A recently developed method, NA4, in addition to being one of the more successful gear fault detection methods, was also found to exhibit damage magnitude estimation capabilities.

  19. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found to be satisfactory, but problems may arise in correctly identifying the mode of a failure. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
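
    For readers unfamiliar with GLR, the sketch below shows one standard textbook formulation of the test statistic built from Kalman filter innovations; the notation and function names are generic and are not taken from the thesis.

```python
import numpy as np

def glr_statistic(innovations, signatures, V):
    """Generalized likelihood ratio statistic for a hypothesized failure time
    (generic textbook form; illustrative only).

    innovations : list of Kalman filter innovations gamma(j), j = theta..k
    signatures  : list of failure-signature matrices G(j; theta) mapping the
                  unknown failure magnitude nu into the innovations
    V           : innovation covariance (taken as constant here for brevity)

    Under a failure of size nu at time theta, gamma(j) ~ G(j) nu + noise, so
        d = sum_j G(j)' V^-1 gamma(j),   C = sum_j G(j)' V^-1 G(j),
        nu_hat = C^-1 d,  and the GLR statistic is  l = d' C^-1 d.
    The failure time is estimated by maximising l over theta, and a failure is
    declared when the maximum exceeds a threshold.
    """
    d = sum(G.T @ np.linalg.solve(V, g) for G, g in zip(signatures, innovations))
    C = sum(G.T @ np.linalg.solve(V, G) for G in signatures)
    nu_hat = np.linalg.solve(C, d)          # maximum-likelihood failure magnitude
    return float(d @ nu_hat), nu_hat        # (GLR statistic, estimated magnitude)
```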

  20. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  1. Spacecraft dynamics characterization and control system failure detection. Volume 3: Control system failure monitoring

    NASA Technical Reports Server (NTRS)

    Vanschalkwyk, Christiaan M.

    1992-01-01

    We discuss the application of Generalized Parity Relations to two experimental flexible space structures, the NASA Langley Mini-Mast and the Marshall Space Flight Center ACES mast. We concentrate on the generation of residuals and make no attempt to implement the decision function. It should be clear from the examples that are presented whether it would be possible to detect the failure of a specific component. We derive the equations for Generalized Parity Relations. Two special cases are treated: Single Sensor Parity Relations (SSPR) and Double Sensor Parity Relations (DSPR). Generalized Parity Relations for actuators are also derived. The NASA Langley Mini-Mast and the application of SSPR and DSPR to a set of displacement sensors located at the tip of the Mini-Mast are discussed. The performance of a reduced-order model that includes the first five modes of the mast is compared to a set of parity relations that was identified on a set of input-output data. Both time-domain and frequency-domain comparisons are made. The effect of the sampling period and model order on the performance of the residual generators is also discussed. Failure detection experiments in which the sensor set consisted of two gyros and an accelerometer are presented. The effects of model order and sampling frequency are again illustrated. The detection of actuator failures is discussed. We use Generalized Parity Relations to monitor control system component failures on the ACES mast. An overview is given of the Failure Detection Filter and experimental results are discussed. Conclusions and directions for future research are given.
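
    As a toy illustration of the single-sensor parity-relation idea (not the report's equations), the sketch below evaluates a residual from an identified ARX-type relation between one sensor and the input; the coefficients and names are placeholders.

```python
import numpy as np

def parity_residuals(y, u, a, b):
    """Residuals of a single-sensor parity relation (illustrative sketch).

    y    : 1-D array of one sensor's samples
    u    : 1-D array of the (scalar) input/actuator samples
    a, b : identified coefficients such that, under no-failure conditions,
           y[k] is approximately sum_i a[i]*y[k-1-i] + sum_i b[i]*u[k-1-i]

    Returns the residual sequence r[k] = y[k] - predicted y[k]; a residual
    that grows beyond its nominal noise band flags a sensor failure.
    """
    n = max(len(a), len(b))
    r = np.zeros(len(y))
    for k in range(n, len(y)):
        pred = sum(a[i] * y[k - 1 - i] for i in range(len(a))) \
             + sum(b[i] * u[k - 1 - i] for i in range(len(b)))
        r[k] = y[k] - pred
    return r
```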

  2. FINDS: A fault inferring nonlinear detection system programmers manual, version 3.0

    NASA Technical Reports Server (NTRS)

    Lancraft, R. E.

    1985-01-01

    Detailed software documentation of the digital computer program FINDS (Fault Inferring Nonlinear Detection System) Version 3.0 is provided. FINDS is a highly modular and extensible computer program designed to monitor and detect sensor failures, while at the same time providing reliable state estimates. In this version of the program the FINDS methodology is used to detect, isolate, and compensate for failures in simulated avionics sensors used by the Advanced Transport Operating Systems (ATOPS) Transport System Research Vehicle (TSRV) in a Microwave Landing System (MLS) environment. It is intended that this report serve as a programmers guide to aid in the maintenance, modification, and revision of the FINDS software.

  3. Failure detection and identification

    NASA Technical Reports Server (NTRS)

    Massoumnia, Mohammad-Ali; Verghese, George C.; Willsky, Alan S.

    1989-01-01

    Using the geometric concept of an unobservability subspace, a solution is given to the problem of detecting and identifying control system component failures in linear, time-invariant systems. Conditions are developed for the existence of a causal, linear, time-invariant processor that can detect and uniquely identify a component failure, first for the case where components can fail simultaneously, and then for the case where they fail only one at a time. Explicit design algorithms are provided when these conditions are satisfied. In addition to time-domain solvability conditions, frequency-domain interpretations of the results are given, and connections are drawn with results already available in the literature.

  4. Data-Driven Anomaly Detection Performance for the Ares I-X Ground Diagnostic Prototype

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Schwabacher, Mark A.; Matthews, Bryan L.

    2010-01-01

    In this paper, we will assess the performance of a data-driven anomaly detection algorithm, the Inductive Monitoring System (IMS), which can be used to detect simulated Thrust Vector Control (TVC) system failures. However, the ability of IMS to detect these failures in a true operational setting may be related to the realistic nature of how they are simulated. As such, we will investigate both a low fidelity and high fidelity approach to simulating such failures, with the latter based upon the underlying physics. Furthermore, the ability of IMS to detect anomalies that were previously unknown and not previously simulated will be studied in earnest, as well as apparent deficiencies or misapplications that result from using the data-driven paradigm. Our conclusions indicate that robust detection performance of simulated failures using IMS is not appreciably affected by the use of a high fidelity simulation. However, we have found that the inclusion of a data-driven algorithm such as IMS into a suite of deployable health management technologies does add significant value.

  5. Sensor failure detection system. [for the F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Beattie, E. C.; Laprad, R. F.; Mcglone, M. E.; Rock, S. M.; Akhter, M. M.

    1981-01-01

    Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon techniques such as Kalman filters, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum-square residuals technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguration of the normal-mode Kalman filter, eliminating the failed input, to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique and was shown to be a viable approach for detecting, isolating, and accommodating sensor failures in gas turbine applications.
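
    The weighted sum-square residual (WSSR) test mentioned above admits a compact sketch: residuals from the Kalman filter are weighted by the inverse of their nominal covariance and summed over a sliding window, and an alarm is raised when the sum crosses a threshold. The window length and threshold below are illustrative, not values from the study.

```python
import numpy as np

def wssr_alarm(residuals, V, window=20, threshold=50.0):
    """Weighted sum-square residual soft-failure test (illustrative sketch).

    residuals : array of shape (N, m) of Kalman filter residuals (innovations)
    V         : (m, m) nominal residual covariance
    A sliding sum of r' V^-1 r over `window` samples is compared with a
    threshold; exceeding it declares a (soft) sensor failure.
    """
    Vinv = np.linalg.inv(V)
    w = np.array([r @ Vinv @ r for r in residuals])     # per-sample weighted square
    s = np.convolve(w, np.ones(window), mode="valid")   # sliding-window sums
    return s, s > threshold                             # (statistic, alarm flags)
```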

  6. Redundancy management of inertial systems.

    NASA Technical Reports Server (NTRS)

    Mckern, R. A.; Musoff, H.

    1973-01-01

    The paper reviews developments in failure detection and isolation techniques applicable to gimballed and strapdown systems. It examines basic redundancy management goals of improved reliability, performance and logistic costs, and explores mechanizations available for both input and output data handling. The meaning of redundant system reliability in terms of available coverage, system MTBF, and mission time is presented and the practical hardware performance limitations of failure detection and isolation techniques are explored. Simulation results are presented illustrating implementation coverages attainable considering IMU performance models and mission detection threshold requirements. The implications of a complete GN&C redundancy management method on inertial techniques are also explored.

  7. A preliminary design for flight testing the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.

    1986-01-01

    This report presents a preliminary design for flight testing the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a target flight computer. The FINDS software was ported onto the target flight computer by reducing the code size by 65%. Several modifications were made to the computational algorithms resulting in a near real-time execution speed. Finally, a new failure detection strategy was developed resulting in a significant improvement in the detection time performance. In particular, low level MLS, IMU and IAS sensor failures are detected instantaneously with the new detection strategy, while accelerometer and the rate gyro failures are detected within the minimum time allowed by the information generated in the sensor residuals based on the point mass equations of motion. All of the results have been demonstrated by using five minutes of sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment.

  8. Failure detection and isolation analysis of a redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Motyka, P.; Landey, M.; Mckern, R.

    1981-01-01

    The objective of this study was to define and develop failure detection and isolation (FDI) algorithms for a dual fail-operational redundant strapdown inertial navigation system. The FDI techniques chosen include provisions for hard and soft failure detection in the context of flight control and navigation. Analyses were done to determine error detection and switching levels for the inertial navigation system, which is intended for a conventional takeoff or landing (CTOL) operating environment. In addition, investigations of false alarms and missed alarms were included for the FDI techniques developed, along with analyses of filters to be used in conjunction with FDI processing. Two specific FDI algorithms were compared: the generalized likelihood test and the edge vector test. A deterministic digital computer simulation was used to compare and evaluate the algorithms and FDI systems.

  9. Optimally robust redundancy relations for failure detection in uncertain systems

    NASA Technical Reports Server (NTRS)

    Lou, X.-C.; Willsky, A. S.; Verghese, G. C.

    1986-01-01

    All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
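
    Schematically, the SVD step described above can be sketched as follows; the construction of the matrix M (which encodes the redundancy relations across the model uncertainty set) follows the paper and is assumed to be given, so the function below only illustrates how a single singular value decomposition orders candidate parity directions by robustness.

```python
import numpy as np

def robust_parity_directions(M):
    """Order candidate parity (redundancy) directions by robustness (sketch).

    M is assumed to be built so that a perfectly robust redundancy relation w
    would satisfy w @ M = 0 for every model in the uncertainty set.  The left
    singular vectors of M, taken in order of increasing singular value, then
    give a sequence of redundancy relations from most to least robust.
    """
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    order = np.argsort(s)                 # smallest singular values first
    return U[:, order], s[order]          # columns of U = candidate parity directions
```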

  10. Preliminary input to the space shuttle reaction control subsystem failure detection and identification software requirements (uncontrolled)

    NASA Technical Reports Server (NTRS)

    Bergmann, E.

    1976-01-01

    The current baseline method and software implementation of the space shuttle reaction control subsystem failure detection and identification (RCS FDI) system is presented. This algorithm is recommended for inclusion in the redundancy management (RM) module of the space shuttle guidance, navigation, and control system. Supporting software is presented and recommended for inclusion in the system management (SM) and display and control (D&C) systems. RCS FDI uses data from sensors in the jets, in the manifold isolation valves, and in the RCS fuel and oxidizer storage tanks. A list of jet failures and fuel imbalance warnings is generated for use by the jet selection algorithm of the on-orbit and entry flight control systems, and to inform the crew and ground controllers of RCS failure status. Manifold isolation valve close commands are generated in the event of failed-on or leaking jets to prevent loss of large quantities of RCS fuel.

  11. Failure detection and isolation investigation for strapdown skew redundant tetrad laser gyro inertial sensor arrays

    NASA Technical Reports Server (NTRS)

    Eberlein, A. J.; Lahm, T. G.

    1976-01-01

    The degree to which flight-critical failures in a strapdown laser gyro tetrad sensor assembly can be isolated in short-haul aircraft after a failure occurrence has been detected by the skewed sensor failure-detection voting logic is investigated along with the degree to which a failure in the tetrad computer can be detected and isolated at the computer level, assuming a dual-redundant computer configuration. The tetrad system was mechanized with two two-axis inertial navigation channels (INCs), each containing two gyro/accelerometer axes, computer, control circuitry, and input/output circuitry. Gyro/accelerometer data is crossfed between the two INCs to enable each computer to independently perform the navigation task. Computer calculations are synchronized between the computers so that calculated quantities are identical and may be compared. Fail-safe performance (identification of the first failure) is accomplished with a probability approaching 100 percent of the time, while fail-operational performance (identification and isolation of the first failure) is achieved 93 to 96 percent of the time.

  12. 40 CFR 63.1012 - Compressor standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... fluid system degassing reservoir that is routed to a process or fuel gas system or connected by a closed... sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be... the seal system, the barrier fluid system, or both. If the sensor indicates failure of the seal system...

  13. 40 CFR 63.1012 - Compressor standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... fluid system degassing reservoir that is routed to a process or fuel gas system or connected by a closed... sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be... the seal system, the barrier fluid system, or both. If the sensor indicates failure of the seal system...

  14. On-line detection of key radionuclides for fuel-rod failure in a pressurized water reactor.

    PubMed

    Qin, Guoxiu; Chen, Xilin; Guo, Xiaoqing; Ni, Ning

    2016-08-01

    For early on-line detection of fuel rod failure, the key radionuclides useful in monitoring must leak easily from failing rods. Yield, half-life, and mass share of fission products that enter the primary coolant also need to be considered in on-line analyses. From all the nuclides that enter the primary coolant during fuel-rod failure, (135)Xe and (88)Kr were ultimately chosen as crucial for on-line monitoring of fuel-rod failure. A monitoring system for fuel-rod failure detection for pressurized water reactor (PWR) based on the LaBr3(Ce) detector was assembled and tested. The samples of coolant from the PWR were measured using the system as well as a HPGe γ-ray spectrometer. A comparison showed the method was feasible. Finally, the γ-ray spectra of primary coolant were measured under normal operations and during fuel-rod failure. The two peaks of (135)Xe (249.8 keV) and (88)Kr (2392.1 keV) were visible, confirming that the method is capable of monitoring fuel-rod failure on-line. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Immunity-based detection, identification, and evaluation of aircraft sub-system failures

    NASA Astrophysics Data System (ADS)

    Moncayo, Hever Y.

    This thesis describes the design, development, and flight-simulation testing of an integrated Artificial Immune System (AIS) for detection, identification, and evaluation of a wide variety of sensor, actuator, propulsion, and structural failures/damages, including the prediction of the achievable states and other limitations on performance and handling qualities. The AIS scheme achieves a high detection rate and a low number of false alarms for all the failure categories considered. Data collected using a motion-based flight simulator are used to define the self for an extended sub-region of the flight envelope. The NASA IFCS F-15 research aircraft model is used and represents a supersonic fighter which includes model-following adaptive control laws based on non-linear dynamic inversion and artificial neural network augmentation. The flight simulation tests are designed to analyze and demonstrate the performance of the immunity-based aircraft failure detection, identification and evaluation (FDIE) scheme. A general robustness analysis is also presented by determining the achievable limits for a desired performance in the presence of atmospheric perturbations. For the purpose of this work, the integrated AIS scheme is implemented based on three main components. The first component performs the detection when one of the considered failures is present in the system. The second component consists of the identification of the failure category and the classification according to the failed element. During the third phase a general evaluation of the failure is performed with the estimation of the magnitude/severity of the failure and the prediction of its effect on reducing the flight envelope of the aircraft system. Solutions and alternatives to specific design issues of the AIS scheme, such as data clustering and empty space optimization, data fusion and duplication removal, definition of features, dimensionality reduction, and selection of cluster/detector shape are also analyzed in this thesis. These factors were shown to have an important effect on detection performance and are a critical aspect when designing the configuration of the AIS. The results presented in this thesis show that the AIS paradigm addresses directly the complexity and multi-dimensionality associated with a damaged aircraft dynamic response and provides the tools necessary for a comprehensive/integrated solution to the FDIE problem. Excellent detection, identification, and evaluation performance has been recorded for all types of failures considered. The implementation of the proposed AIS-based scheme can potentially have a significant impact on the safety of aircraft operation. The output information obtained from the scheme will be useful to increase pilot situational awareness and determine automated compensation.

  16. Toward Failure Modeling In Complex Dynamic Systems: Impact of Design and Manufacturing Variations

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; McAdams, Daniel A.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes during a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. Research conducted at the NASA Ames Research Center has determined that a major reason for the high rates of false alarms and missed detections is the numerous sources of statistical variations that are not taken into account in the modeling assumptions. In this paper, we address one such source of variations, namely, those caused during the design and manufacturing of rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. The results demonstrate initial feasibility of the method, showing great promise in developing a general methodology for designing more accurate aerospace vehicle vibration monitoring systems.

  17. Failure detection and recovery in the assembly/contingency subsystem

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1993-01-01

    The Assembly/Contingency Subsystem (ACS) is the primary communications link on board the Space Station. Any failure in a component of this system or in the external devices through which it communicates with ground-based systems will isolate the Station. The ACS software design includes a failure management capability (ACFM) that provides protocols for failure detection, isolation, and recovery (FDIR). The ACFM design requirements as outlined in the current ACS software requirements specification document are reviewed. The activities carried out in this review include: (1) an informal, but thorough, end-to-end failure mode and effects analysis of the proposed software architecture for the ACFM; and (2) a prototype of the ACFM software, implemented as a C program under the UNIX operating system. The purpose of this review is to evaluate the FDIR protocols specified in the ACS design and the specifications themselves in light of their use in implementing the ACFM. The basis of failure detection in the ACFM is the loss of signal between the ground and the Station, which (under the appropriate circumstances) will initiate recovery to restore communications. This recovery involves the reconfiguration of the ACS to either a backup set of components or to a degraded communications mode. The initiation of recovery depends largely on the criticality of the failure mode, which is defined by tables in the ACFM and can be modified to provide a measure of flexibility in recovery procedures.

  18. Simulating fail-stop in asynchronous distributed systems

    NASA Technical Reports Server (NTRS)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  19. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.

  20. 40 CFR 65.112 - Standards: Compressors.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... barrier fluid system shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be observed daily or shall be equipped with an alarm unless the... criterion that indicates failure of the seal system, the barrier fluid system, or both. If the sensor...

  1. 40 CFR 63.1031 - Compressors standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... service. Each barrier fluid system shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be observed daily or shall be equipped with an... both. If the sensor indicates failure of the seal system, the barrier fluid system, or both based on...

  2. 40 CFR 61.242-3 - Standards: Compressors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... paragraphs (a)-(c) of this section shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section... system, or both. (f) If the sensor indicates failure of the seal system, the barrier fluid system, or...

  3. Distributed multi-level supervision to effectively monitor the operations of a fleet of autonomous vehicles in agricultural tasks.

    PubMed

    Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela

    2015-03-05

    This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations.

  4. Distributed Multi-Level Supervision to Effectively Monitor the Operations of a Fleet of Autonomous Vehicles in Agricultural Tasks

    PubMed Central

    Conesa-Muñoz, Jesús; Gonzalez-de-Soto, Mariano; Gonzalez-de-Santos, Pablo; Ribeiro, Angela

    2015-01-01

    This paper describes a supervisor system for monitoring the operation of automated agricultural vehicles. The system analyses all of the information provided by the sensors and subsystems on the vehicles in real time and notifies the user when a failure or potentially dangerous situation is detected. In some situations, it is even able to execute a neutralising protocol to remedy the failure. The system is based on a distributed and multi-level architecture that divides the supervision into different subsystems, allowing for better management of the detection and repair of failures. The proposed supervision system was developed to perform well in several scenarios, such as spraying canopy treatments against insects and diseases and selective weed treatments, by either spraying herbicide or burning pests with a mechanical-thermal actuator. Results are presented for selective weed treatment by the spraying of herbicide. The system successfully supervised the task; it detected failures such as service disruptions, incorrect working speeds, incorrect implement states, and potential collisions. Moreover, the system was able to prevent collisions between vehicles by taking action to avoid intersecting trajectories. The results show that the proposed system is a highly useful tool for managing fleets of autonomous vehicles. In particular, it can be used to manage agricultural vehicles during treatment operations. PMID:25751079

  5. A systems engineering approach to automated failure cause diagnosis in space power systems

    NASA Technical Reports Server (NTRS)

    Dolce, James L.; Faymon, Karl A.

    1987-01-01

    Automatic failure-cause diagnosis is a key element in autonomous operation of space power systems such as Space Station's. A rule-based diagnostic system has been developed for determining the cause of degraded performance. The knowledge required for such diagnosis is elicited from the system engineering process by using traditional failure analysis techniques. Symptoms, failures, causes, and detector information are represented with structured data; and diagnostic procedural knowledge is represented with rules. Detected symptoms instantiate failure modes and possible causes consistent with currently held beliefs about the likelihood of the cause. A diagnosis concludes with an explanation of the observed symptoms in terms of a chain of possible causes and subcauses.

  6. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    PubMed

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  7. 40 CFR 65.107 - Standards: Pumps in light liquid service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... frequency of drips and to the sensor that indicates failure of the seal system, the barrier fluid system, or... or fuel gas system or connected by a closed vent system to a control device that complies with the... equipped with a sensor that will detect failure of the seal system, the barrier fluid system, or both. (v...

  8. Nonparametric method for failures detection and localization in the actuating subsystem of aircraft control system

    NASA Astrophysics Data System (ADS)

    Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure detection and localization in the aircraft control system that uses only measurements of the control signals and the aircraft states. It does not require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization solution by completely eliminating errors associated with aircraft model uncertainties.

  9. Sensor failure detection for jet engines

    NASA Technical Reports Server (NTRS)

    Beattie, E. C.; Laprad, R. F.; Akhter, M. M.; Rock, S. M.

    1983-01-01

    Revisions to the advanced sensor failure detection, isolation, and accommodation (DIA) algorithm, developed under the sensor failure detection system program, were studied to eliminate the steady-state errors due to estimation filter biases. Three algorithm revisions were formulated, and one revision was chosen for detailed evaluation. The selected version modifies the DIA algorithm to feed back the actual sensor outputs to the integral portion of the control for the no-failure case. In case of a failure, the estimate of the failed sensor output is fed back to the integral portion. The estimator outputs are fed back to the linear regulator portion of the control at all times. The revised algorithm is evaluated and compared to the baseline algorithm developed previously.

  10. 40 CFR 63.1007 - Pumps in light liquid service standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sensor that indicates failure of the seal system, the barrier fluid system, or both. The owner or... reservoir that is routed to a process or fuel gas system or connected by a closed vent system to a control... liquid service. (iv) Each barrier fluid system is equipped with a sensor that will detect failure of the...

  11. 40 CFR 63.1007 - Pumps in light liquid service standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sensor that indicates failure of the seal system, the barrier fluid system, or both. The owner or... reservoir that is routed to a process or fuel gas system or connected by a closed vent system to a control... liquid service. (iv) Each barrier fluid system is equipped with a sensor that will detect failure of the...

  12. 40 CFR 63.1026 - Pumps in light liquid service standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... presence and frequency of drips and to the sensor that indicates failure of the seal system, the barrier... or fuel gas system or connected by a closed-vent system to a control device that complies with the.... (iv) Each barrier fluid system is equipped with a sensor that will detect failure of the seal system...

  13. Quality Issues in Propulsion

    NASA Technical Reports Server (NTRS)

    McCarty, John P.; Lyles, Garry M.

    1997-01-01

    Propulsion system quality is defined in this paper as having high reliability; that is, quality is a high probability of within-tolerance performance or operation. Since failures are out-of-tolerance performance, the probability of failures and their occurrence is the difference between high- and low-quality systems. Failures can be described at three levels: the system failure (which is the detectable end of a failure), the failure mode (which is the failure process), and the failure cause (which is the start). Failure causes can be evaluated and classified by type. The results of typing flight-history failures show that most failures are in unrecognized modes and result from human error or noise; that is, failures occur when engineers learn how things really work. Although the study is based on US launch vehicles, a sampling of failures from other countries indicates the finding has broad application. The parameters of the design of a propulsion system are not single valued, but have dispersions associated with the manufacturing of parts. Many tests are needed to find failures if the dispersions are large relative to tolerances, which could contribute to the large number of failures in unrecognized modes.

  14. Quality control of inkjet technology for DNA microarray fabrication.

    PubMed

    Pierik, Anke; Dijksman, Frits; Raaijmakers, Adrie; Wismans, Ton; Stapert, Henk

    2008-12-01

    A robust manufacturing process is essential to make high-quality DNA microarrays, especially for use in diagnostic tests. We investigated different failure modes of the inkjet printing process used to manufacture low-density microarrays. A single nozzle inkjet spotter was provided with two optical imaging systems, monitoring in real time the flight path of every droplet. If a droplet emission failure is detected, the printing process is automatically stopped. We analyzed over 1.3 million droplets. This information was used to investigate the performance of the inkjet system and to obtain detailed insight into the frequency and causes of jetting failures. Of all the substrates investigated, 96.2% were produced without any system or jetting failures. In 1.6% of the substrates, droplet emission failed and was correctly identified. Appropriate measures could then be taken to get the process back on track. In 2.2%, the imaging systems failed while droplet emission occurred correctly. In 0.1% of the substrates, droplet emission failure that was not timely detected occurred. Thus, the overall yield of the microarray manufacturing process was 99.9%, which is highly acceptable for prototyping.

  15. A dual-processor multi-frequency implementation of the FINDS algorithm

    NASA Technical Reports Server (NTRS)

    Godiwala, Pankaj M.; Caglayan, Alper K.

    1987-01-01

    This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed and the performance of the new FDI algorithm is analyzed using flight recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures continue to take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented yielding significantly low execution times with acceptable estimation and FDI performance.

  16. Reducing unscheduled plant maintenance delays -- Field test of a new method to predict electric motor failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homce, G.T.; Thalimer, J.R.

    1996-05-01

    Most electric motor predictive maintenance methods have drawbacks that limit their effectiveness in the mining environment. The US Bureau of Mines (USBM) is developing an alternative approach to detect winding insulation breakdown in advance of complete motor failure. In order to evaluate the analysis algorithms necessary for this approach, the USBM has designed and installed a system to monitor 120 electric motors in a coal preparation plant. The computer-based experimental system continuously gathers, stores, and analyzes electrical parameters for each motor. The results are then correlated to data from conventional motor-maintenance methods and in-service failures to determine if the analysis algorithms can detect signs of insulation deterioration and impending failure. This paper explains the on-line testing approach used in this research and describes monitoring system design and implementation. At this writing data analysis is underway, but conclusive results are not yet available.

  17. 40 CFR 264.1101 - Design and operating standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... hazardous waste (e.g., upon detection of leakage from the primary barrier) the owner or operator must: (A... constituents into the barrier, and a leak detection system that is capable of detecting failure of the primary... requirements of the leak detection component of the secondary containment system are satisfied by installation...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
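
    As a generic illustration of Gossip-style failure detection (not the paper's MPI-level algorithms), the sketch below maintains per-peer heartbeat counters, gossips the table to a random peer each cycle, merges tables by taking maxima, and suspects peers whose counters stop advancing; the consensus step on the failed set is omitted, and all names and timeouts are illustrative.

```python
import random
import time

class GossipFailureDetector:
    """Gossip-based heartbeat failure detector (illustrative sketch only).

    Each node keeps a heartbeat counter per peer, periodically bumps its own
    counter and gossips the whole table to one random peer; received tables
    are merged by taking element-wise maxima.  A peer whose counter has not
    advanced for `t_fail` seconds is locally suspected as failed.
    """

    def __init__(self, node_id, peers, t_fail=5.0):
        now = time.time()
        self.node_id = node_id
        self.t_fail = t_fail
        self.heartbeat = {p: 0 for p in list(peers) + [node_id]}
        self.last_change = {p: now for p in self.heartbeat}

    def gossip_once(self, send):
        """Bump our own counter and push the table to one random peer via send(peer, table)."""
        self.heartbeat[self.node_id] += 1
        self.last_change[self.node_id] = time.time()
        others = [p for p in self.heartbeat if p != self.node_id]
        if others:
            send(random.choice(others), dict(self.heartbeat))

    def merge(self, remote_table):
        """Merge a received table, keeping the larger counter for every peer."""
        now = time.time()
        for peer, count in remote_table.items():
            if count > self.heartbeat.get(peer, -1):
                self.heartbeat[peer] = count
                self.last_change[peer] = now

    def suspected(self):
        """Return the peers currently suspected as failed by this node."""
        now = time.time()
        return [p for p, t in self.last_change.items()
                if p != self.node_id and now - t > self.t_fail]
```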

  19. A review of wiring system safety in space power systems

    NASA Technical Reports Server (NTRS)

    Stavnes, Mark W.; Hammoud, Ahmad N.

    1993-01-01

    Wiring system failures have resulted from arc propagation in the wiring harnesses of current aerospace vehicles. These failures occur when the insulation becomes conductive upon the initiation of an arc. In some cases, the conductive path of the carbon arc track displays a high enough resistance such that the current is limited, and therefore may be difficult to detect using conventional circuit protection. Often, such wiring failures are not simply the result of insulation failure, but are due to a combination of wiring system factors. Inadequate circuit protection, unforgiving system designs, and careless maintenance procedures can contribute to a wiring system failure. This paper approaches the problem with respect to the overall wiring system, in order to determine what steps can be taken to improve the reliability, maintainability, and safety of space power systems. Power system technologies, system designs, and maintenance procedures which have led to past wiring system failures will be discussed. New technologies, design processes, and management techniques which may lead to improved wiring system safety will be introduced.

  20. 40 CFR 60.482-3a - Standards: Compressors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... both. (f) If the sensor indicates failure of the seal system, the barrier system, or both based on the...

  1. Evaluation of MEMS-Based Wireless Accelerometer Sensors in Detecting Gear Tooth Faults in Helicopter Transmissions

    NASA Technical Reports Server (NTRS)

    Lewicki, David George; Lambert, Nicholas A.; Wagoner, Robert S.

    2015-01-01

    The diagnostics capability of micro-electro-mechanical systems (MEMS) based rotating accelerometer sensors in detecting gear tooth crack failures in helicopter main-rotor transmissions was evaluated. MEMS sensors were installed on a pre-notched OH-58C spiral-bevel pinion gear. Endurance tests were performed and the gear was run to tooth fracture failure. Results from the MEMS sensors were compared to conventional accelerometers mounted on the transmission housing. Most of the four stationary accelerometers mounted on the gearbox housing and most of the condition indicators (CIs) used gave indications of failure at the end of the test. The MEMS system performed well and lasted the entire test. All MEMS accelerometers gave an indication of failure at the end of the test. The MEMS system performed as well as, if not better than, the stationary accelerometers mounted on the gearbox housing with regard to gear tooth fault detection. For both the MEMS sensors and the stationary sensors, the fault detection time was not much sooner than the actual tooth fracture time. The MEMS sensor spectrum data showed large first-order shaft frequency sidebands due to the rotating frame of reference of the measurement. The method of constructing a pseudo tach signal from periodic characteristics of the vibration data was successful in deriving a TSA signal without an actual tach and proved to be an effective way to improve fault detection for the MEMS.
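
    The time synchronous averaging (TSA) step referred to above can be sketched generically: the vibration record is resampled onto a fixed shaft-angle grid for each revolution (using a real or pseudo tach) and the revolutions are averaged, suppressing vibration that is not synchronous with the shaft. The function below is an illustrative sketch, not the processing used in the test program.

```python
import numpy as np

def time_synchronous_average(vibration, rev_indices, samples_per_rev=1024):
    """Time synchronous average of a vibration signal (illustrative sketch).

    vibration   : 1-D vibration record
    rev_indices : sample indices marking successive shaft revolutions, from a
                  real tach pulse or a pseudo tach derived from the signal
    Each revolution is resampled onto a common angular grid and the
    revolutions are averaged, enhancing shaft-synchronous (gear) components.
    """
    grid = np.linspace(0.0, 1.0, samples_per_rev, endpoint=False)
    revolutions = []
    for start, stop in zip(rev_indices[:-1], rev_indices[1:]):
        segment = vibration[start:stop]
        x_old = np.linspace(0.0, 1.0, len(segment), endpoint=False)
        revolutions.append(np.interp(grid, x_old, segment))  # resample to angle grid
    return np.mean(revolutions, axis=0)
```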

  2. Immunity-Based Aircraft Fault Detection System

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension of previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
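
    A minimal sketch of real-valued negative selection, the general technique underlying MILD, is shown below (this is not the MILD tool itself; the radii, counts, and names are illustrative): candidate detectors are kept only if they lie outside a radius of every 'self' sample, and new samples falling inside any surviving detector's radius are flagged as abnormal.

```python
import numpy as np

def generate_detectors(self_samples, n_detectors=500, radius=0.1, rng=None):
    """Real-valued negative selection (illustrative sketch).

    Random candidates in the unit hypercube are kept only if they lie farther
    than `radius` from every 'self' (normal-operation) sample, so the
    surviving detectors cover the non-self (abnormal) region.
    """
    rng = rng or np.random.default_rng(0)
    self_samples = np.asarray(self_samples)
    dim = self_samples.shape[1]
    detectors, attempts = [], 0
    while len(detectors) < n_detectors and attempts < 100 * n_detectors:
        attempts += 1
        candidate = rng.random(dim)
        # keep the candidate only if it does not overlap any self sample
        if np.min(np.linalg.norm(self_samples - candidate, axis=1)) > radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(sample, detectors, radius=0.1):
    """Flag a sample as abnormal if it falls within any detector's radius."""
    return bool(np.min(np.linalg.norm(detectors - np.asarray(sample), axis=1)) <= radius)
```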

  3. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1982-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure.

  4. Failure Control Techniques for the SSME

    NASA Technical Reports Server (NTRS)

    Taniguchi, M. H.

    1987-01-01

    Since ground testing of the Space Shuttle Main Engine (SSME) began in 1975, the detection of engine anomalies and the prevention of major damage have been achieved by a multi-faceted detection/shutdown system. The system continues the monitoring task today and consists of the following: sensors, automatic redline and other limit logic, redundant sensors and controller voting logic, conditional decision logic, and human monitoring. Typically, on the order of 300 to 500 measurements are sensed and recorded for each test, while on the order of 100 are used for control and monitoring. Despite extensive monitoring by the current detection system, twenty-seven (27) major incidents have occurred. This number would appear insignificant compared with over 1200 hot-fire tests which have taken place since 1976. However, the number suggests the requirement for and future benefits of a more advanced failure detection system.

  5. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.; Johnson, Stephen B.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function" (Johnson 2011, 605). Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of the relationships between SE, SHM, and FM provides hints toward a modeling approach that gives formal connectivity between the nominal (SE) and off-nominal (SHM and FM) aspects of functions and designs. This paper describes a formal modeling approach to the initial phases of the development process that integrates the nominal and off-nominal perspectives in a model uniting SE goals and functions with the failure to achieve those goals and functions (SHM/FM). This methodology and corresponding model, known as a Goal-Function Tree (GFT), provide a means to represent, decompose, and elaborate system goals and functions in a rigorous manner that connects directly to design through the use of state variables that translate natural-language requirements and goals into logical-physical state language. The state-variable-based approach also provides the means to directly connect FM to the design, by specifying the range in which state variables must be controlled to achieve goals and, conversely, the failures that exist if system behavior goes out of range. This in turn allows the systems engineers and SHM/FM engineers to determine which state variables to monitor, and what action(s) to take should the system fail to achieve a goal. In sum, the GFT representation provides a unified approach to early-phase SE and FM development. This representation and methodology have been successfully developed and implemented using the Systems Modeling Language (SysML) on the NASA Space Launch System (SLS) Program. It enabled early design trade studies of failure detection coverage to ensure complete detection coverage of all crew-threatening failures. The representation maps directly both to FM algorithm designs and to the failure scenario definitions needed for design analysis and testing. The GFT representation also provided the basis for mapping abort triggers into failure scenarios, both of which were needed for the initial, successful quantitative analyses of abort effectiveness (detection of and response to crew-threatening events).

  6. Infrared thermography based diagnosis of inter-turn fault and cooling system failure in three phase induction motor

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Naikan, V. N. A.

    2017-12-01

    Thermography has been widely used as a technique for anomaly detection in induction motors. The International Electrical Testing Association (NETA) has proposed guidelines for thermographic inspection of electrical systems and rotating equipment. These guidelines help in anomaly detection and in estimating its severity. However, they focus only on the location of the hotspot rather than on diagnosing the fault. This paper addresses two such faults, i.e., the inter-turn fault and failure of the cooling system, both of which result in an increase of stator temperature. The present paper proposes two thermal profile indicators based on thermal analysis of IRT images. These indicators are in compliance with the NETA standard and help in correctly diagnosing the inter-turn fault and failure of the cooling system. The work has been experimentally validated on induction motors in healthy and seeded-fault scenarios.

  7. Using process groups to implement failure detection in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1991-01-01

    Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problem is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution to this problem is then presented.
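
    Since the record centers on failure detection in an asynchronous setting, a minimal heartbeat-based suspector (sketched below under illustrative names and timeouts) shows the kind of local, necessarily inconclusive detection that a group membership service must reconcile: in an asynchronous system a missed heartbeat can only justify suspicion, never certainty.

```python
import time

class HeartbeatSuspector:
    """Suspect a process has failed if no heartbeat arrives within `timeout` seconds.

    In an asynchronous system this yields suspicion only: a slow process is
    indistinguishable from a crashed one, which is why membership protocols
    must agree on whom to remove rather than trust any single local detector.
    """
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, pid):
        # record the arrival time of a heartbeat from process `pid`
        self.last_seen[pid] = time.monotonic()

    def suspected(self):
        # processes whose last heartbeat is older than the timeout
        now = time.monotonic()
        return [p for p, t in self.last_seen.items() if now - t > self.timeout]
```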

  8. Ferrographic and spectrographic analysis of oil sampled before and after failure of a jet engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1980-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph as well as plasma, atomic absorption, and emission spectrometers. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism, nor a high level of wear debris was detected in the oil sample from the engine just prior to the test in which the failure occurred. However, low concentrations of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure.

  9. Evidence-Based Early Reading Practices within a Response to Intervention System

    ERIC Educational Resources Information Center

    Bursuck, Bill; Blanks, Brooke

    2010-01-01

    Many students who experience reading failure are inappropriately placed in special education. A promising response to reducing reading failure and the overidentification of students for special education is Response to Intervention (RTI), a comprehensive early detection and prevention system that allows teachers to identify and support struggling…

  10. Evaluation of Fuzzy Rulemaking for Expert Systems for Failure Detection

    NASA Technical Reports Server (NTRS)

    Laritz, F.; Sheridan, T. B.

    1984-01-01

    Computer aids in expert systems were proposed to diagnose failures in complex systems. It is shown that the fuzzy set theory of Zadeh offers a new perspective for modeling human thinking and language use. It is assumed that real expert human operators of aircraft, power plants, and other systems do not think of their control tasks or failure diagnosis tasks in terms of control laws in differential equation form, but rather keep in mind a set of rules of thumb in fuzzy form. Fuzzy set experiments are described.

  11. Ampoule Failure System

    NASA Technical Reports Server (NTRS)

    Watring, Dale A. (Inventor); Johnson, Martin L. (Inventor)

    1996-01-01

    An ampoule failure system for use in material processing furnaces comprising a containment cartridge and an ampoule failure sensor. The containment cartridge contains an ampoule of toxic material therein and is positioned within a furnace for processing. An ampoule failure probe is positioned in the containment cartridge adjacent the ampoule for detecting a potential harmful release of toxic material therefrom during processing. The failure probe is spaced a predetermined distance from the ampoule and is chemically chosen so as to undergo a timely chemical reaction with the toxic material upon the harmful release thereof. The ampoule failure system further comprises a data acquisition system which is positioned externally of the furnace and is electrically connected to the ampoule failure probe so as to form a communicating electrical circuit. The data acquisition system includes an automatic shutdown device for shutting down the furnace upon the harmful release of toxic material. It also includes a resistance measuring device for measuring the resistance of the failure probe during processing. The chemical reaction causes a step increase in resistance of the failure probe whereupon the automatic shutdown device will responsively shut down the furnace.
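
    The shutdown logic described reduces to monitoring the probe resistance for a step increase. The sketch below is a hypothetical illustration of that loop; the threshold, sampling scheme, and callable names are assumptions, not the patented apparatus.

```python
def monitor_ampoule(read_resistance_ohms, shutdown_furnace,
                    step_threshold_ohms=100.0, samples=1000):
    """Shut the furnace down when the failure probe's resistance steps up.

    read_resistance_ohms : callable returning the probe resistance in ohms
    shutdown_furnace     : callable that commands the furnace off
    """
    baseline = read_resistance_ohms()
    for _ in range(samples):
        r = read_resistance_ohms()
        if r - baseline > step_threshold_ohms:  # reaction with released toxic material
            shutdown_furnace()
            return True
        baseline = r  # track slow drift; only an abrupt step trips the shutdown
    return False
```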

  12. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
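
    The core detection step, comparing measured outputs with the expected outputs of a healthy sensor suite and thresholding the difference, can be sketched as below. This is a generic residual test with an illustrative persistence counter to reduce false alarms; the T700 engine model, the thresholds, and the on-line parameter estimation used for isolation are not reproduced here.

```python
import numpy as np

class ResidualFaultDetector:
    """Flag a sensor after its residual exceeds threshold `persist` times in a row."""
    def __init__(self, thresholds, persist=3):
        self.thresholds = np.asarray(thresholds, dtype=float)
        self.persist = persist
        self.counts = np.zeros(len(self.thresholds), dtype=int)

    def update(self, measured, expected):
        # residual = measured engine outputs minus model-predicted outputs
        residual = np.abs(np.asarray(measured) - np.asarray(expected))
        exceeded = residual > self.thresholds
        self.counts = np.where(exceeded, self.counts + 1, 0)
        # indices of sensors whose residuals have persistently exceeded threshold
        return np.flatnonzero(self.counts >= self.persist)
```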

  13. Impaired face detection may explain some but not all cases of developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Duchaine, Brad

    2016-05-01

    Developmental prosopagnosia (DP) is defined by severe face recognition difficulties due to the failure to develop the visual mechanisms for processing faces. The two-process theory of face recognition (Morton & Johnson, 1991) implies that DP could result from a failure of an innate face detection system; this failure could prevent an individual from then tuning higher-level processes for face recognition (Johnson, 2005). Work with adults indicates that some individuals with DP have normal face detection whereas others are impaired. However, face detection has not been addressed in children with DP, even though their results may be especially informative because they have had less opportunity to develop strategies that could mask detection deficits. We tested the face detection abilities of seven children with DP. Four were impaired at face detection to some degree (i.e. abnormally slow, or failed to find faces) while the remaining three children had normal face detection. Hence, the cases with impaired detection are consistent with the two-process account suggesting that DP could result from a failure of face detection. However, the cases with normal detection implicate a higher-level origin. The dissociation between normal face detection and impaired identity perception also indicates that these abilities depend on different neurocognitive processes. © 2015 John Wiley & Sons Ltd.

  14. Redundancy management of multiple KT-70 inertial measurement units applicable to the space shuttle

    NASA Technical Reports Server (NTRS)

    Cook, L. J.

    1975-01-01

    Results of an investigation of velocity failure detection and isolation for three-inertial-measurement-unit (IMU) and two-IMU configurations are presented. The failure detection and isolation algorithm performance was highly successful, and most types of velocity errors were detected and isolated. The algorithm also included attitude FDI, but this was not evaluated because of the lack of time and the low resolution of the gimbal angle synchro outputs. The shuttle KT-70 IMUs will have dual-speed resolvers and high-resolution gimbal angle readouts. These tests demonstrated that a single computer utilizing a serial data bus can successfully control a redundant three-IMU system and perform FDI.
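
    A three-IMU velocity comparison of the kind evaluated can be illustrated with simple pairwise differencing: the unit implicated by both exceeded pairs is isolated. The sketch below is a generic illustration under an assumed threshold, not the KT-70 redundancy management algorithm itself.

```python
import numpy as np

def triplex_velocity_fdi(v_a, v_b, v_c, threshold):
    """Detect and isolate one failed IMU from three velocity measurements.

    v_a, v_b, v_c : velocity vectors (e.g. 3-element arrays) from IMUs A, B, C.
    Returns None if all pairwise differences are within threshold, otherwise
    the label of the unit implicated by both exceeded pairwise comparisons.
    """
    d_ab = np.linalg.norm(np.subtract(v_a, v_b))
    d_bc = np.linalg.norm(np.subtract(v_b, v_c))
    d_ca = np.linalg.norm(np.subtract(v_c, v_a))
    implicated = {'A': d_ab > threshold and d_ca > threshold,
                  'B': d_ab > threshold and d_bc > threshold,
                  'C': d_bc > threshold and d_ca > threshold}
    failed = [unit for unit, flag in implicated.items() if flag]
    return failed[0] if len(failed) == 1 else None
```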

  15. Intelligent on-line fault tolerant control for unanticipated catastrophic failures.

    PubMed

    Yen, Gary G; Ho, Liang-Wei

    2004-10-01

    As dynamic systems become increasingly complex, experience rapidly changing environments, and encounter a greater variety of unexpected component failures, solving the control problems of such systems is a grand challenge for control engineers. Traditional control design techniques are not adequate to cope with these systems, which may suffer from unanticipated dynamic failures. In this research work, we investigate the on-line fault tolerant control problem and propose an intelligent on-line control strategy to handle the desired trajectories tracking problem for systems suffering from various unanticipated catastrophic faults. Through theoretical analysis, the sufficient condition of system stability has been derived and two different on-line control laws have been developed. The approach of the proposed intelligent control strategy is to continuously monitor the system performance and identify what the system's current state is by using a fault detection method based upon our best knowledge of the nominal system and nominal controller. Once a fault is detected, the proposed intelligent controller will adjust its control signal to compensate for the unknown system failure dynamics by using an artificial neural network as an on-line estimator to approximate the unexpected and unknown failure dynamics. The first control law is derived directly from the Lyapunov stability theory, while the second control law is derived based upon the discrete-time sliding mode control technique. Both control laws have been implemented in a variety of failure scenarios to validate the proposed intelligent control scheme. The simulation results, including a three-tank benchmark problem, comply with theoretical analysis and demonstrate a significant improvement in trajectory following performance based upon the proposed intelligent control strategy.

  16. Sensor failure detection for jet engines

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.

    1988-01-01

    The use of analytical redundancy to improve gas turbine engine control system reliability through sensor failure detection, isolation, and accommodation is surveyed. Both the theoretical and application papers that form the technology base of turbine engine analytical redundancy research are discussed. Also, several important application efforts are reviewed. An assessment of the state-of-the-art in analytical redundancy technology is given.

  17. Detection of structural deterioration and associated airline maintenance problems

    NASA Technical Reports Server (NTRS)

    Henniker, H. D.; Mitchell, R. G.

    1972-01-01

    Airline operations involving the detection of structural deterioration and associated maintenance problems are discussed. The standard approach to the maintenance and inspection of aircraft components and systems is described. The frequency of inspections and the application of preventive maintenance practices are examined. The types of failure which airline transport aircraft encounter and the steps taken to prevent catastrophic failure are reported.

  18. The analysis of the pilot's cognitive and decision processes

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    Articles are presented on pilot performance in zero-visibility precision approach, failure detection by pilots during automatic landing, experiments in pilot decision-making during simulated low visibility approaches, a multinomial maximum likelihood program, and a random search algorithm for laboratory computers. Other topics discussed include detection of system failures in multi-axis tasks and changes in pilot workload during an instrument landing.

  19. Tapered Roller Bearing Damage Detection Using Decision Fusion Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Kreider, Gary; Fichter, Thomas

    2006-01-01

    A diagnostic tool was developed for detecting fatigue damage of tapered roller bearings. Tapered roller bearings are used in helicopter transmissions and have potential for use in high-bypass advanced gas turbine aircraft engines. The diagnostic tool was evaluated experimentally by collecting oil debris data from failure progression tests conducted using health monitoring hardware. Failure progression tests were performed with tapered roller bearings under simulated engine load conditions. Tests were performed on one healthy bearing and three pre-damaged bearings. During each test, data from an on-line, in-line, inductance-type oil debris sensor and three accelerometers were monitored and recorded for the occurrence of bearing failure. The bearing was removed and inspected periodically for damage progression throughout testing. Using data fusion techniques, two different monitoring technologies, oil debris analysis and vibration, were integrated into a health monitoring system for detecting bearing surface fatigue pitting damage. The data fusion diagnostic tool was evaluated during bearing failure progression tests under simulated engine load conditions. This integrated system showed improved detection of fatigue damage and health assessment of the tapered roller bearings as compared to using the individual health monitoring technologies.
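
    One simple way to picture the fusion of the two monitoring technologies is a weighted combination of a normalized oil-debris measure and a normalized vibration condition indicator into a single damage level. The weights, limits, and thresholds below are illustrative assumptions, not the paper's data fusion design.

```python
import numpy as np

def fused_damage_level(debris_mass_mg, vibration_ci,
                       debris_limit_mg=50.0, ci_limit=3.0,
                       w_debris=0.6, w_vib=0.4):
    """Fuse oil-debris mass and a vibration condition indicator into a [0, 1] level."""
    debris_score = np.clip(debris_mass_mg / debris_limit_mg, 0.0, 1.0)
    vib_score = np.clip(vibration_ci / ci_limit, 0.0, 1.0)
    return w_debris * debris_score + w_vib * vib_score

# e.g. warn when the fused level exceeds 0.3 and declare damage above 0.5;
# agreement between the two technologies drives the level up faster than either alone
```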

  20. Weld failure detection

    DOEpatents

    Pennell, William E.; Sutton, Jr., Harry G.

    1981-01-01

    Method and apparatus for detecting failure in a welded connection, particularly applicable to welds that are not readily accessible, such as those joining components within the reactor vessel of a nuclear reactor system. A preselected tag gas is sealed within a chamber which extends through selected portions of the base metal and weld deposit. In the event of a failure, such as development of a crack extending from the chamber to an outer surface, the tag gas is released. The environment about the welded area is directed to an analyzer which, in the event of presence of the tag gas, evidences the failure. A trigger gas can be included with the tag gas to actuate the analyzer.

  1. Ferrographic and spectrometer oil analysis from a failed gas turbine engine

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.

    1983-01-01

    An experimental gas turbine engine was destroyed as a result of the combustion of its titanium components. It was concluded that a severe surge may have caused interference between rotating and stationary compressor parts that either directly or indirectly ignited the titanium components. Several engine oil samples (before and after the failure) were analyzed with a Ferrograph, and with plasma, atomic absorption, and emission spectrometers to see if this information would aid in the engine failure diagnosis. The analyses indicated that a lubrication system failure was not a causative factor in the engine failure. Neither an abnormal wear mechanism nor a high level of wear debris was detected in the engine oil sample taken just prior to the test in which the failure occurred. However, low concentrations (0.2 to 0.5 ppm) of titanium were evident in this sample and samples taken earlier. After the failure, higher titanium concentrations (2 ppm) were detected in oil samples taken from different engine locations. Ferrographic analysis indicated that most of the titanium was contained in spherical metallic debris after the failure. The oil analyses eliminated a lubrication system bearing or shaft seal failure as the cause of the engine failure. Previously announced in STAR as N83-12433

  2. Vibration detection of component health and operability

    NASA Technical Reports Server (NTRS)

    Baird, B. C.

    1975-01-01

    In order to prevent catastrophic failure and eliminate unnecessary periodic maintenance of environmental control system components in the shuttle orbiter program, some means of detecting incipient failure in these components is required. The utilization of vibrational/acoustic phenomena as one of the principal physical parameters on which to base the design of this instrumentation was investigated. Baseline vibration/acoustic data were collected from three aircraft-type fans and two aircraft-type pumps over a frequency range from a few hertz to greater than 3000 kHz. The baseline data included spectrum analysis of the baseband vibration signal, spectrum analysis of the detected high-frequency bandpass acoustic signal, and the amplitude distribution of the high-frequency bandpass acoustic signal. A total of eight bearing defects and two unbalances were introduced into the five test items. All defects were detected by at least one of a set of vibration/acoustic parameters with a margin of at least 2:1 over the worst-case baseline. The design of a portable instrument using this set of vibration/acoustic parameters for detecting incipient failures in environmental control system components is described.

  3. Comprehension and retrieval of failure cases in airborne observatories

    NASA Technical Reports Server (NTRS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-01-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  4. Comprehension and retrieval of failure cases in airborne observatories

    NASA Astrophysics Data System (ADS)

    Alvarado, Sergio J.; Mock, Kenrick J.

    1995-05-01

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text, and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.

  5. Reliable dual-redundant sensor failure detection and identification for the NASA F-8 DFBW aircraft

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.; Desai, M. N.; Deyst, J. J., Jr.; Willsky, A. S.

    1978-01-01

    A technique was developed which provides reliable failure detection and identification (FDI) for a dual-redundant subset of the flight control sensors onboard the NASA F-8 digital fly-by-wire (DFBW) aircraft. The technique was successfully applied to simulated sensor failures on the real-time F-8 digital simulator and to sensor failures injected on telemetry data from a test flight of the F-8 DFBW aircraft. For failure identification the technique utilized the analytic redundancy which exists as functional and kinematic relationships among the various quantities measured by the different control sensor types. The technique can be used not only in a dual-redundant sensor system, but also in a more highly redundant system after FDI by conventional voting techniques has reduced the number of unfailed sensors of a particular type to two. In addition, the technique can be easily extended to the case in which only one sensor of a particular type is available.

  6. An evaluation of a real-time fault diagnosis expert system for aircraft applications

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.

    1987-01-01

    A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with a description of the current Faultfinder implementation, are presented.

  7. A study of redundancy management strategy for tetrad strap-down inertial systems. [error detection codes

    NASA Technical Reports Server (NTRS)

    Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.

    1979-01-01

    Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.

  8. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.

  9. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter free method for race fault detection.
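
    For readers unfamiliar with DTW, the sketch below shows the standard dynamic-programming distance and a nearest-template classifier; the fault templates themselves (in the paper, simulation-generated signatures) are assumed inputs, and the implementation is a generic illustration rather than the authors' pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest warping path reaching this cell
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(signal, templates):
    """Nearest-neighbor label using DTW; `templates` maps fault label -> reference signal."""
    return min(templates, key=lambda label: dtw_distance(signal, templates[label]))
```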

  10. Monitoring System for Storm Readiness and Recovery of Test Facilities: Integrated System Health Management (ISHM) Approach

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera; Schmalzel, John

    2010-01-01

    Severe weather events are likely occurrences on the Mississippi Gulf Coast. It is important to rapidly diagnose and mitigate the effects of storms on Stennis Space Center's rocket engine test complex to avoid delays to critical test article programs, reduce costs, and maintain safety. An Integrated Systems Health Management (ISHM) approach and technologies are employed to integrate environmental (weather) monitoring, structural modeling, and the suite of available facility instrumentation to provide information for readiness before storms, rapid initial damage assessment to guide mitigation planning, on-going assurance as repairs are effected, and finally support for recertification. The system is designated the Katrina Storm Monitoring System (KStorMS). Integrated Systems Health Management (ISHM) describes a comprehensive set of capabilities that provide insight into the behavior and health of a system. Knowing the status of a system allows decision makers to effectively plan and execute their mission. For example, early insight into component degradation and impending failures provides more time to develop work-around strategies and more effectively plan for maintenance. Failures of system elements generally occur over time. Information extracted from sensor data, combined with system-wide knowledge bases and methods for information extraction and fusion, inference, and decision making, can be used to detect incipient failures. If failures do occur, it is critical to detect and isolate them, and to suggest an appropriate course of action. ISHM enables determining the condition (health) of every element in a complex system-of-systems or SoS (detecting anomalies, diagnosing causes, predicting future anomalies), and providing data, information, and knowledge (DIaK) to control systems for safe and effective operation. ISHM capability is achieved by using a wide range of technologies that enable anomaly detection, diagnostics, prognostics, and advice for control: (1) anomaly detection algorithms and strategies, (2) fusion of DIaK for anomaly detection (model-based, numerical, statistical, empirical, expert-based, qualitative, etc.), (3) diagnostic/prognostic strategies and methods, (4) user interfaces, (5) advanced control strategies, (6) integration architectures/frameworks, and (7) embedding of intelligence. Many of these technologies are mature, and they are being used in the KStorMS. The paper describes the design, implementation, and operation of the KStorMS, and discusses further evolution to support other needs such as condition-based maintenance (CBM).

  11. Modeling of a bubble-memory organization with self-checking translators to achieve high reliability.

    NASA Technical Reports Server (NTRS)

    Bouricius, W. G.; Carter, W. C.; Hsieh, E. P.; Wadia, A. B.; Jessep, D. C., Jr.

    1973-01-01

    A study of the design and modeling of a highly reliable bubble-memory system that has the capabilities of: (1) correcting a single 16-adjacent-bit group error resulting from failures in a single basic storage module (BSM), and (2) detecting, with a probability greater than 0.99, any double errors resulting from failures in BSMs. The results of the study justify the design philosophy adopted of employing memory data encoding and a translator to correct single group errors and detect double group errors to enhance overall system reliability.

  12. Ultrasonic Maintenance

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Ultraprobe 2000, manufactured by UE Systems, Inc., Elmsford, NY, is a hand-held ultrasonic system that detects indications of bearing failure by analyzing changes in amplitude. It employs the technology of a prototype ultrasonic bearing-failure monitoring system developed by Mechanical Technology, Inc., Latham, New York and Marshall Space Flight Center (which was based on research into Skylab's gyroscope bearings). Bearings on the verge of failure send ultrasonic signals indicating their deterioration; the Ultraprobe changes these to audible signals. The operator hears the signals and gages their intensity with a meter in the unit.

  13. Flight test results of the Strapdown hexad Inertial Reference Unit (SIRU). Volume 1: Flight test summary

    NASA Technical Reports Server (NTRS)

    Hruby, R. J.; Bjorkman, W. S.

    1977-01-01

    Flight test results of the strapdown inertial reference unit (SIRU) navigation system are presented. The fault-tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance.

  14. 30 CFR 75.1912 - Fire suppression systems for permanent underground diesel fuel storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... electrical system failure. (g) Electrically operated detection and actuation circuits shall be monitored and... operated, a means shall be provided to indicate the functional readiness status of the detection system. (h... susceptible to alteration or recorded electronically in a secured computer system that is not susceptible to...

  15. 30 CFR 75.1912 - Fire suppression systems for permanent underground diesel fuel storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... electrical system failure. (g) Electrically operated detection and actuation circuits shall be monitored and... operated, a means shall be provided to indicate the functional readiness status of the detection system. (h... susceptible to alteration or recorded electronically in a secured computer system that is not susceptible to...

  16. 30 CFR 75.1912 - Fire suppression systems for permanent underground diesel fuel storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... electrical system failure. (g) Electrically operated detection and actuation circuits shall be monitored and... operated, a means shall be provided to indicate the functional readiness status of the detection system. (h... susceptible to alteration or recorded electronically in a secured computer system that is not susceptible to...

  17. 30 CFR 75.1912 - Fire suppression systems for permanent underground diesel fuel storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... electrical system failure. (g) Electrically operated detection and actuation circuits shall be monitored and... operated, a means shall be provided to indicate the functional readiness status of the detection system. (h... susceptible to alteration or recorded electronically in a secured computer system that is not susceptible to...

  18. 30 CFR 75.1912 - Fire suppression systems for permanent underground diesel fuel storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... electrical system failure. (g) Electrically operated detection and actuation circuits shall be monitored and... operated, a means shall be provided to indicate the functional readiness status of the detection system. (h... susceptible to alteration or recorded electronically in a secured computer system that is not susceptible to...

  19. Fault Injection Techniques and Tools

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.

    1997-01-01

    Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
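
    A minimal software-implemented fault injection, flipping a random bit in a target buffer and then observing whether the system's built-in detection and recovery mechanisms respond, illustrates the experiment-based approach; the helper below is hypothetical and deliberately simple.

```python
import random

def inject_bit_flip(buffer: bytearray, rng=random.Random(0)):
    """Flip one randomly chosen bit in `buffer`, returning (byte_index, bit_index)."""
    byte_index = rng.randrange(len(buffer))
    bit_index = rng.randrange(8)
    buffer[byte_index] ^= (1 << bit_index)
    return byte_index, bit_index

# usage: corrupt a message before it reaches the component under test, then log
# whether built-in detection (checksums, voters, watchdogs) catches the error
data = bytearray(b"sensor frame 0042")
print(inject_bit_flip(data), data)
```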

  20. Overview of the Smart Network Element Architecture and Recent Innovations

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.

    2008-01-01

    In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data are reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and to extract information and knowledge from that data to diagnose failures and predict future failures of the system. By distributing health management processing to lower levels of the architecture, less bandwidth is required for ISHM, data fusion is enhanced, systems and processes become more robust, and the resolution for the detection and isolation of failures in a system, subsystem, component, or process is improved. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.

  1. Respiratory failure in diabetic ketoacidosis.

    PubMed

    Konstantinov, Nikifor K; Rohrscheib, Mark; Agaba, Emmanuel I; Dorin, Richard I; Murata, Glen H; Tzamaloukas, Antonios H

    2015-07-25

    Respiratory failure complicating the course of diabetic ketoacidosis (DKA) is a source of increased morbidity and mortality. Detection of respiratory failure in DKA requires focused clinical monitoring, careful interpretation of arterial blood gases, and investigation for conditions that can affect adversely the respiration. Conditions that compromise respiratory function caused by DKA can be detected at presentation but are usually more prevalent during treatment. These conditions include deficits of potassium, magnesium and phosphate and hydrostatic or non-hydrostatic pulmonary edema. Conditions not caused by DKA that can worsen respiratory function under the added stress of DKA include infections of the respiratory system, pre-existing respiratory or neuromuscular disease and miscellaneous other conditions. Prompt recognition and management of the conditions that can lead to respiratory failure in DKA may prevent respiratory failure and improve mortality from DKA.

  2. Respiratory failure in diabetic ketoacidosis

    PubMed Central

    Konstantinov, Nikifor K; Rohrscheib, Mark; Agaba, Emmanuel I; Dorin, Richard I; Murata, Glen H; Tzamaloukas, Antonios H

    2015-01-01

    Respiratory failure complicating the course of diabetic ketoacidosis (DKA) is a source of increased morbidity and mortality. Detection of respiratory failure in DKA requires focused clinical monitoring, careful interpretation of arterial blood gases, and investigation for conditions that can affect adversely the respiration. Conditions that compromise respiratory function caused by DKA can be detected at presentation but are usually more prevalent during treatment. These conditions include deficits of potassium, magnesium and phosphate and hydrostatic or non-hydrostatic pulmonary edema. Conditions not caused by DKA that can worsen respiratory function under the added stress of DKA include infections of the respiratory system, pre-existing respiratory or neuromuscular disease and miscellaneous other conditions. Prompt recognition and management of the conditions that can lead to respiratory failure in DKA may prevent respiratory failure and improve mortality from DKA. PMID:26240698

  3. Parametric Testing of Launch Vehicle FDDR Models

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar

    2011-01-01

    For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system to detect and identify failures, provide real-time diagnostics, and initiate fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied. Additionally, failures can be injected to see their effects on vehicle state and on vehicle behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we will describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small, yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we are using multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
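
    One simple way to realize the combination of Monte Carlo sampling with n-factor (here, pairwise) combinatorial coverage is a greedy selection over randomly generated candidate test cases, as sketched below; this is a generic illustration under assumed parameter names, not the PT tool used for the ERIS simulation.

```python
import itertools
import random

def pairwise_test_set(factors, candidates_per_pick=50, seed=0):
    """Greedy pairwise-covering test set.

    factors : dict mapping parameter name -> list of levels, e.g.
              {"engine_fail_time_s": [10, 60, 120], "wind": ["low", "high"]}
    """
    rng = random.Random(seed)
    names = sorted(factors)
    # every 2-factor combination of levels that still needs to appear in some test
    uncovered = {((a, va), (b, vb))
                 for a, b in itertools.combinations(names, 2)
                 for va in factors[a] for vb in factors[b]}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for _ in range(candidates_per_pick):  # Monte Carlo candidate generation
            cand = {n: rng.choice(factors[n]) for n in names}
            pairs = {((a, cand[a]), (b, cand[b]))
                     for a, b in itertools.combinations(names, 2)}
            gain = len(pairs & uncovered)
            if gain > best_gain:
                best, best_gain = cand, gain
        if best_gain == 0:
            # force coverage of one remaining pair so the loop always terminates
            (a, va), (b, vb) = next(iter(uncovered))
            best = {n: rng.choice(factors[n]) for n in names}
            best[a], best[b] = va, vb
        tests.append(best)
        uncovered -= {((a, best[a]), (b, best[b]))
                      for a, b in itertools.combinations(names, 2)}
    return tests
```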

  4. Putting Integrated Systems Health Management Capabilities to Work: Development of an Advanced Caution and Warning System for Next-Generation Crewed Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Mccann, Robert S.; Spirkovska, Lilly; Smith, Irene

    2013-01-01

    Integrated System Health Management (ISHM) technologies have advanced to the point where they can provide significant automated assistance with real-time fault detection, diagnosis, guided troubleshooting, and failure consequence assessment. To exploit these capabilities in actual operational environments, however, ISHM information must be integrated into operational concepts and associated information displays in ways that enable human operators to process and understand the ISHM system information rapidly and effectively. In this paper, we explore these design issues in the context of an advanced caution and warning system (ACAWS) for next-generation crewed spacecraft missions. User interface concepts for depicting failure diagnoses, failure effects, redundancy loss, "what-if" failure analysis scenarios, and resolution of ambiguity groups are discussed and illustrated.

  5. NMESys: An expert system for network fault detection

    NASA Technical Reports Server (NTRS)

    Nelson, Peter C.; Warpinski, Janet

    1991-01-01

    The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. The complexity of implementing a network with this many components is difficult enough, while the maintenance of such a network is an even larger problem. A prototype network management expert system, NMESys, implemented in the C Language Integrated Production System (CLIPS), is described. NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard failures and potential failures, and to minimize or eliminate user down time in a large network.

  6. Flight test results of the strapdown hexad inertial reference unit (SIRU). Volume 2: Test report

    NASA Technical Reports Server (NTRS)

    Hruby, R. J.; Bjorkman, W. S.

    1977-01-01

    Results of flight tests of the Strapdown Inertial Reference Unit (SIRU) navigation system are presented. The fault tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance. Performance shortcomings are analyzed.

  7. Sensors and systems for space applications: a methodology for developing fault detection, diagnosis, and recovery

    NASA Astrophysics Data System (ADS)

    Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun

    2007-04-01

    Human space travel is inherently dangerous. Hazardous conditions will exist. Real-time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost-effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real-time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements, and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary to provide real-time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions, and formats failure protocols.

  8. Direct Adaptive Control of Systems with Actuator Failures: State of the Art and Continuing Challenges

    NASA Technical Reports Server (NTRS)

    Tao, Gang; Joshi, Suresh M.

    2008-01-01

    In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.

  9. Model-Biased, Data-Driven Adaptive Failure Prediction

    NASA Technical Reports Server (NTRS)

    Leen, Todd K.

    2004-01-01

    This final report, which contains a research summary and a viewgraph presentation, addresses clustering and data simulation techniques for failure prediction. The researchers applied their techniques to both helicopter gearbox anomaly detection and segmentation of Earth Observing System (EOS) satellite imagery.

  10. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency, and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real time to ensure that the power grid data network does not become overloaded and/or fail.
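
    The rate-adjustment idea can be sketched as a priority-ordered load-shedding pass that never drops a subscription below its minimum acceptable rate; the data structure and field names below are illustrative, not the disclosure's interfaces.

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    name: str
    desired_hz: float
    min_hz: float
    priority: int          # lower number = higher priority
    current_hz: float = 0.0

def shed_load(subs, required_reduction_hz):
    """Reduce publish rates, lowest priority first, never below each minimum."""
    remaining = required_reduction_hz
    for s in sorted(subs, key=lambda s: -s.priority):  # lowest priority first
        if remaining <= 0:
            break
        slack = s.current_hz - s.min_hz
        cut = min(slack, remaining)
        s.current_hz -= cut
        remaining -= cut
    return remaining  # > 0 means the network is still over capacity
```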

  11. Crystal growth furnace safety system validation

    NASA Technical Reports Server (NTRS)

    Mackowski, D. W.; Hartfield, R.; Bhavnani, S. H.; Belcher, V. M.

    1994-01-01

    The findings are reported regarding the safe operation of the NASA crystal growth furnace (CGF) and potential methods for detecting containment failures of the furnace. The main conclusions are summarized in terms of ampoule leak detection, cartridge leak detection, and detection of hazardous species in the experiment apparatus container (EAC).

  12. Lyapunov-Based Sensor Failure Detection And Recovery For The Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2001-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.
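
    One generic way such a scalar test can be set up (an illustrative sketch, not the report's exact construction) is to monitor a quadratic function of the observer residual built from sensed variables:

```latex
V(e) = e^{\top} P e, \qquad e(t) = y(t) - \hat{y}(t), \qquad P = P^{\top} > 0,
\qquad
\text{healthy sensors:}\;\; \dot{V} \le -\alpha V + \beta .
```

    A sustained violation of the scalar inequality $\dot{V}(t) + \alpha V(t) \le \beta$, which is computable from sensed variables, then signals a sensor failure, and the observer/estimator residual structure is used to identify which sensor has failed.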

  13. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2002-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.

  14. Analytical redundancy and the design of robust failure detection systems

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1984-01-01

    The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653.
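
    For a static measurement model, the parity-space idea above can be written compactly (a standard sketch, with symbols chosen here for illustration):

```latex
y = C x + \eta + f, \qquad V C = 0 \;\;\Longrightarrow\;\; r = V y = V(\eta + f).
```

    The residual $r$ is independent of the unknown state $x$: it stays within noise bounds under no-failure conditions and carries the signature $Vf$ when a failure is present. Robust design then selects the rows of $V$ within the parity space to minimize sensitivity to model uncertainty while preserving sensitivity to failures, which leads to the minimax formulation mentioned in the abstract.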

  15. CRYOGENIC UPPER STAGE SYSTEM SAFETY

    NASA Technical Reports Server (NTRS)

    Smith, R. Kenneth; French, James V.; LaRue, Peter F.; Taylor, James L.; Pollard, Kathy (Technical Monitor)

    2005-01-01

    NASA's Exploration Initiative will require development of many new systems or systems of systems. One specific example is that safe, affordable, and reliable upper stage systems to place cargo and crew in stable low earth orbit are urgently required. In this paper, we examine the failure history of previous upper stages with liquid oxygen (LOX)/liquid hydrogen (LH2) propulsion systems. Launch data from 1964 until midyear 2005 are analyzed and presented. This data analysis covers upper stage systems from the Ariane, Centaur, H-IIA, Saturn, and Atlas in addition to other vehicles. Upper stage propulsion system elements have the highest impact on reliability. This paper discusses failure occurrence in all aspects of the operational phases (i.e., initial burn, coast, restarts, and trends in failure rates over time). In an effort to understand the likelihood of future failures in flight, we present timelines of engine system failures relevant to initial flight histories. Some evidence suggests that propulsion system failures as a result of design problems occur shortly after initial development of the propulsion system, whereas failures because of manufacturing or assembly processing errors may occur during any phase of the system build process. This paper also explores the detectability of historical failures. Observations from this review are used to ascertain the potential for increased upper stage reliability given investments in integrated system health management. Based on a clear understanding of the failure and success history of previous efforts by multiple space hardware development groups, the paper will investigate potential improvements that can be realized through application of system safety principles.

  16. Real-Time Detection of Infusion Site Failures in a Closed-Loop Artificial Pancreas.

    PubMed

    Howsmon, Daniel P; Baysal, Nihat; Buckingham, Bruce A; Forlenza, Gregory P; Ly, Trang T; Maahs, David M; Marcal, Tatiana; Towers, Lindsey; Mauritzen, Eric; Deshpande, Sunil; Huyett, Lauren M; Pinsker, Jordan E; Gondhalekar, Ravi; Doyle, Francis J; Dassau, Eyal; Hahn, Juergen; Bequette, B Wayne

    2018-05-01

    As evidence emerges that artificial pancreas systems improve clinical outcomes for patients with type 1 diabetes, the burden of this disease will hopefully begin to be alleviated for many patients and caregivers. However, reliance on automated insulin delivery potentially means patients will be slower to act when devices stop functioning appropriately. One such scenario involves an insulin infusion site failure, where the insulin that is recorded as delivered fails to affect the patient's glucose as expected. Alerting patients to these events in real time would potentially reduce hyperglycemia and ketosis associated with infusion site failures. An infusion site failure detection algorithm was deployed in a randomized crossover study with artificial pancreas and sensor-augmented pump arms in an outpatient setting. Each arm lasted two weeks. Nineteen participants wore infusion sets for up to 7 days. Clinicians contacted patients to confirm infusion site failures detected by the algorithm and instructed on set replacement if failure was confirmed. In real time and under zone model predictive control, the infusion site failure detection algorithm achieved a sensitivity of 88.0% (n = 25) while issuing only 0.22 false positives per day, compared with a sensitivity of 73.3% (n = 15) and 0.27 false positives per day in the SAP arm (as indicated by retrospective analysis). No association between intervention strategy and duration of infusion sets was observed (P = .58). As patient burden is reduced by each generation of advanced diabetes technology, fault detection algorithms will help ensure that patients are alerted when they need to manually intervene. Clinical Trial Identifier: www.clinicaltrials.gov, NCT02773875.

  17. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    Electronic memory system of 32-bit words corrects bit errors caused by some common types of failures - even failure of an entire 4-bit-wide random-access-memory (RAM) chip. Detects failure of two such chips, so the user is warned that the output of the memory may contain errors. Includes eight 4-bit-wide DRAMs configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM contributes only 1 bit to each 8-bit word.
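
    The following Python sketch shows one plausible bit assignment consistent with the description (the exact mapping used in the design is not given in the record): each of the eight 4-bit chips contributes one bit to each of the four 8-bit words, so a whole-chip failure corrupts at most one bit per word and remains correctable by a per-word single-error-correcting code.

```python
# Hedged sketch: map eight 4-bit-wide DRAM chips onto four parallel 8-bit words
# so that each chip contributes exactly one bit to each word.  The mapping is
# illustrative, not the actual layout of the memory described in the record.

NUM_CHIPS, BITS_PER_CHIP, NUM_WORDS = 8, 4, 4

def bit_assignment():
    """Return {(chip, chip_bit): (word, bit_position_in_word)}."""
    mapping = {}
    for chip in range(NUM_CHIPS):
        for chip_bit in range(BITS_PER_CHIP):
            word = chip_bit        # each of a chip's 4 bits goes to a different word
            position = chip        # chip index fixes the bit position within the word
            mapping[(chip, chip_bit)] = (word, position)
    return mapping

# Sanity check: every word receives exactly one bit from every chip.
words = {}
for (chip, _), (word, position) in bit_assignment().items():
    words.setdefault(word, set()).add(chip)
assert all(len(chips) == NUM_CHIPS for chips in words.values())
```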

  18. The implementation and use of Ada on distributed systems with reliability requirements

    NASA Technical Reports Server (NTRS)

    Reynolds, P. F.; Knight, J. C.; Urquhart, J. I. A.

    1983-01-01

    The issues involved in the use of the programming language Ada on distributed systems are discussed. The effects of hardware failures, such as the loss of a processor, on Ada programs are emphasized. It is shown that many Ada language elements are not well suited to this environment. Processor failure can easily lead to difficulties on those processors which remain. As an example, the calling task in a rendezvous may be suspended forever if the processor executing the serving task fails. A mechanism for detecting failure is proposed and changes to the Ada run time support system are suggested which avoid most of the difficulties. Ada program structures are defined which allow programs to reconfigure and continue to provide service following processor failure.

  19. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  20. Research on measurement of aviation magneto ignition strength and balance

    NASA Astrophysics Data System (ADS)

    Gao, Feng; He, Zhixiang; Zhang, Dingpeng

    2017-12-01

    Aviation magneto ignition system failures account for two-thirds or more of total aviation piston engine faults. At present, diagnosis of these failures in civil aviation maintenance often depends on visual inspection. Because of human factors, visual inspection cannot provide the ignition intensity value or the ignition balance deviation among the spark plugs in the different cylinders of an aviation piston engine. Measuring magneto ignition strength and balance has therefore become a technical problem that aviation piston engine maintenance needs to resolve. In this paper, an ultraviolet sensor with a detection wavelength of 185~260 nm and a 320 V DC drive voltage is used as the core of the ultraviolet detector to measure the ignition intensity of the aviation magneto ignition system and the balance deviation of the ignition intensity across cylinders. The experimental results show that the rotational speed measurement error is less than 0.34% over the range 0 to 3500 RPM, the ignition strength calculation error is less than 0.13%, and the system measures ignition strength balance deviations (at 200 ignition pulses per second) for high-voltage wire leakage failures that are difficult to distinguish by visual inspection. The method has reference value for detecting magneto ignition system faults in aviation piston engine maintenance.

  1. 40 CFR 63.163 - Standards: Pumps in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Equipped with a barrier fluid degassing reservoir that is routed to a process or fuel gas system or... with a sensor that will detect failure of the seal system, the barrier fluid system, or both. (4) Each... per million or greater is measured, a leak is detected. (5) Each sensor as described in paragraph (e...

  2. Robust detection, isolation and accommodation for sensor failures

    NASA Technical Reports Server (NTRS)

    Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.

    1986-01-01

    The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
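
    The kind of threshold test involved can be sketched as follows in Python; the way the bound combines noise and model-error terms here is an illustrative assumption, not the paper's threshold-selector algorithm.

```python
# Hedged sketch of an innovations-based detection test.  The threshold is built
# from assumed bounds on modeling error and measurement noise; the specific
# combination used here is illustrative only.

import numpy as np

def detect_sensor_failure(innovations, noise_std, model_error_bound, n_sigma=3.0):
    """Flag a failure when the innovation magnitude exceeds the combined bound.

    innovations       : array of recent residuals for one sensor channel
    noise_std         : assumed standard deviation of in-spec sensor noise
    model_error_bound : assumed worst-case residual contribution of model mismatch
    """
    threshold = n_sigma * noise_std + model_error_bound
    return bool(np.any(np.abs(innovations) > threshold)), threshold
```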

  3. Gyro and accelerometer failure detection and identification in redundant sensor systems

    NASA Technical Reports Server (NTRS)

    Potter, J. E.; Deckert, J. C.

    1972-01-01

    Algorithms for failure detection and identification for redundant noncolinear arrays of single degree of freedom gyros and accelerometers are described. These algorithms are optimum in the sense that detection occurs as soon as it is no longer possible to account for the instrument outputs as the outputs of good instruments operating within their noise tolerances, and identification occurs as soon as it is true that only a particular instrument failure could account for the actual instrument outputs within the noise tolerance of good instruments. An estimation algorithm is described which minimizes the maximum possible estimation error magnitude for the given set of instrument outputs. Monte Carlo simulation results are presented for the application of the algorithms to an inertial reference unit consisting of six gyros and six accelerometers in two alternate configurations.
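
    A numerical sketch of the underlying parity idea for a redundant array of single-degree-of-freedom instruments is shown below; the instrument geometry and threshold are illustrative, not the flight configurations analyzed in the report.

```python
# Hedged sketch: parity-based detection for a redundant array of single-degree-of-
# freedom instruments measuring a 3-axis quantity.  The geometry is illustrative.

import numpy as np
from scipy.linalg import null_space

H = np.array([[1.0, 0.0, 0.0],               # instrument measurement directions (6 x 3)
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.707, 0.707, 0.0],
              [0.0, 0.707, 0.707],
              [0.707, 0.0, 0.707]])

V = null_space(H.T).T                        # rows satisfy V @ H = 0 (the parity space)

def parity_check(measurements, noise_bound):
    """Detect when the six outputs cannot be explained by good instruments."""
    p = V @ measurements                     # parity vector: insensitive to the true input
    return np.linalg.norm(p) > noise_bound, p
```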

  4. A Study of Failure Events in Drinking Water Systems As a Basis for Comparison and Evaluation of the Efficacy of Potable Reuse Schemes

    PubMed Central

    Onyango, Laura A.; Quinn, Chloe; Tng, Keng H.; Wood, James G.; Leslie, Greg

    2015-01-01

    Potable reuse is implemented in several countries around the world to augment strained water supplies. This article presents a public health perspective on potable reuse by comparing the critical infrastructure and institutional capacity characteristics of two well-established potable reuse schemes with conventional drinking water schemes in developed nations that have experienced waterborne outbreaks. Analysis of failure events in conventional water systems between 2003 and 2013 showed that despite advances in water treatment technologies, drinking water outbreaks caused by microbial contamination were still frequent in developed countries and can be attributed to failures in infrastructure or institutional practices. Numerous institutional failures linked to ineffective treatment protocols, poor operational practices, and negligence were detected. In contrast, potable reuse schemes that use multiple barriers, online instrumentation, and operational measures were found to address the events that have resulted in waterborne outbreaks in conventional systems in the past decade. Syndromic surveillance has emerged as a tool in outbreak detection and was useful in detecting some outbreaks, with increases in emergency department visits and GP consultations being the most common data sources, suggesting potential for an increasing role in public health surveillance of waterborne outbreaks. These results highlight desirable characteristics of potable reuse schemes from a public health perspective with potential for guiding policy on surveillance activities. PMID:27053920

  5. A Study of Failure Events in Drinking Water Systems As a Basis for Comparison and Evaluation of the Efficacy of Potable Reuse Schemes.

    PubMed

    Onyango, Laura A; Quinn, Chloe; Tng, Keng H; Wood, James G; Leslie, Greg

    2015-01-01

    Potable reuse is implemented in several countries around the world to augment strained water supplies. This article presents a public health perspective on potable reuse by comparing the critical infrastructure and institutional capacity characteristics of two well-established potable reuse schemes with conventional drinking water schemes in developed nations that have experienced waterborne outbreaks. Analysis of failure events in conventional water systems between 2003 and 2013 showed that despite advances in water treatment technologies, drinking water outbreaks caused by microbial contamination were still frequent in developed countries and can be attributed to failures in infrastructure or institutional practices. Numerous institutional failures linked to ineffective treatment protocols, poor operational practices, and negligence were detected. In contrast, potable reuse schemes that use multiple barriers, online instrumentation, and operational measures were found to address the events that have resulted in waterborne outbreaks in conventional systems in the past decade. Syndromic surveillance has emerged as a tool in outbreak detection and was useful in detecting some outbreaks, with increases in emergency department visits and GP consultations being the most common data sources, suggesting potential for an increasing role in public health surveillance of waterborne outbreaks. These results highlight desirable characteristics of potable reuse schemes from a public health perspective with potential for guiding policy on surveillance activities.

  6. Optimally Robust Redundancy Relations for Failure Detection in Uncertain Systems,

    DTIC Science & Technology

    1983-04-01

    The general methods presented provide the basis for what should, in principle, be a widely applicable failure detection methodology. Modifications to the main result are described that overcome practical difficulties at no fundamental increase in complexity. A critical problem with the criteria of the preceding sections is scaling; a modified criterion is therefore given that takes scaling into account, in which the C_i are multiplied by positive scalars to account for unequal weightings.

  7. On Restructurable Control System Theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1983-01-01

    The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.

  8. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
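
    A minimal sketch of the surrogate-based estimation step is given below (it does not implement the paper's experimental-design algorithm, which actively selects new runs near the inferred limit state); the toy limit-state function, kernel, and sample sizes are illustrative.

```python
# Hedged sketch: estimate a failure probability with a Gaussian-process surrogate
# of an expensive limit-state function g(x), with failure defined as g(x) < 0.
# The toy limit state, kernel, and sample sizes are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):                                    # stand-in for the expensive computer model
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

X_train = rng.uniform(-3, 3, size=(30, 2))   # a small design of expensive runs
y_train = g(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

X_mc = rng.normal(size=(20_000, 2))          # cheap Monte Carlo on the surrogate
mean, std = gp.predict(X_mc, return_std=True)
p_fail = np.mean(mean < 0.0)                 # surrogate-based failure probability
near_limit = np.abs(mean) < 2.0 * std        # where new expensive runs would be most informative
print(f"estimated failure probability: {p_fail:.4f}, "
      f"{near_limit.sum()} candidate points near the limit state")
```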

  9. Advanced detection, isolation, and accommodation of sensor failures in turbofan engines: Real-time microcomputer implementation

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1990-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real time engine simulation and was demonstrated on a full scale F100 turbofan engine. The real time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high order language programming.

  10. Probabilistic pipe fracture evaluations for leak-rate-detection applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, S.; Ghadiali, N.; Paul, D.

    1995-04-01

    Regulatory Guide 1.45, "Reactor Coolant Pressure Boundary Leakage Detection Systems," was published by the U.S. Nuclear Regulatory Commission (NRC) in May 1973, and provides guidance on leak detection methods and system requirements for Light Water Reactors. Additionally, leak detection limits are specified in plant Technical Specifications and are different for Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). These leak detection limits are also used in leak-before-break evaluations performed in accordance with Draft Standard Review Plan, Section 3.6.3, "Leak Before Break Evaluation Procedures," where a margin of 10 on the leak detection limit is used in determining the crack size considered in subsequent fracture analyses. This study was requested by the NRC to: (1) evaluate the conditional failure probability for BWR and PWR piping for pipes that were leaking at the allowable leak detection limit, and (2) evaluate the margin of 10 to determine if it was unnecessarily large. A probabilistic approach was undertaken to conduct fracture evaluations of circumferentially cracked pipes for leak-rate-detection applications. Sixteen nuclear piping systems in BWR and PWR plants were analyzed to evaluate conditional failure probability and effects of crack-morphology variability on the current margins used in leak rate detection for leak-before-break.

  11. ORCHID - a computer simulation of the reliability of an NDE inspection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moles, M.D.C.

    1987-03-01

    CANDU pressurized heavy water reactors contain several hundred horizontally-mounted zirconium alloy pressure tubes. Following a pressure tube failure, a pressure tube inspection system called CIGARette was rapidly designed, manufactured and put in operation. Defects called hydride blisters were found to be the cause of the failure, and were detected using a combination of eddy current and ultrasonic scans. A number of improvements were made to CIGARette during the inspection period. The ORCHID computer program models the operation of the delivery system, eddy current and ultrasonic systems by imitating the on-reactor decision-making procedure. ORCHID predicts that during the early stage of development, less than one blistered tube in three would be detected, while less than one in two would be detected in the middle development stage. However, ORCHID predicts that during the late development stage, probability of detection will be over 90%, primarily due to the inclusion of axial ultrasonic scans (a procedural modification). Rotational and axial slip could severely reduce probability of detection. Comparison of CIGARette's inspection data with ORCHID's predictions indicates that the latter are compatible with the actual inspection results, though the numbers are small and the data uncertain. It should be emphasized that the CIGARette system has been essentially replaced with the much more reliable CIGAR system.

  12. Low-cost failure sensor design and development for water pipeline distribution systems.

    PubMed

    Khan, K; Widdop, P D; Day, A J; Wood, A S; Mounce, S R; Machell, J

    2002-01-01

    This paper describes the design and development of a new sensor which is low cost to manufacture and install and is reliable in operation with sufficient accuracy, resolution and repeatability for use in newly developed systems for pipeline monitoring and leakage detection. To provide an appropriate signal, the concept of a "failure" sensor is introduced, in which the output is not necessarily proportional to the input, but is unmistakably affected when an unusual event occurs. The design of this failure sensor is based on the water opacity which can be indicative of an unusual event in a water distribution network. The laboratory work and field trials necessary to design and prove out this type of failure sensor are described here. It is concluded that a low-cost failure sensor of this type has good potential for use in a comprehensive water monitoring and management system based on Artificial Neural Networks (ANN).

  13. Full Envelope Reconfigurable Control Design for the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Cotting, M. Christopher; Burken, John J.; Lee, Seung-Hee (Technical Monitor)

    2001-01-01

    In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. An Off-line Nonlinear General Constrained Optimization (ONCO) approach was used for the reconfigurable X-33 control design method. Three example failures are shown using a high fidelity 6 DOF simulation (case 1, ascent with a left body flap jammed at 25 deg.; case 2, entry with a right inboard elevon jam at 25 deg.; and case 3, landing (TAEM) with a left rudder jam at -30 deg.). Failure comparisons between responses with the nominal controller and reconfigurable controllers show the benefits of reconfiguration. Single jam aerosurface failures were considered, and failure detection and identification is considered accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.

  14. 40 CFR 60.51c - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF... Incinerators for Which Construction is Commenced After June 20, 1996 § 60.51c Definitions. Bag leak detection... order to detect bag failures. A bag leak detection system includes, but is not limited to, an instrument...

  15. 40 CFR 60.51c - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF... Incinerators for Which Construction is Commenced After June 20, 1996 § 60.51c Definitions. Bag leak detection... order to detect bag failures. A bag leak detection system includes, but is not limited to, an instrument...

  16. J-2X Abort System Development

    NASA Technical Reports Server (NTRS)

    Santi, Louis M.; Butas, John P.; Aguilar, Robert B.; Sowers, Thomas S.

    2008-01-01

    The J-2X is an expendable liquid hydrogen (LH2)/liquid oxygen (LOX) gas generator cycle rocket engine that is currently being designed as the primary upper stage propulsion element for the new NASA Ares vehicle family. The J-2X engine will contain abort logic that functions as an integral component of the Ares vehicle abort system. This system is responsible for detecting and responding to conditions indicative of impending Loss of Mission (LOM), Loss of Vehicle (LOV), and/or catastrophic Loss of Crew (LOC) failure events. As an earth orbit ascent phase engine, the J-2X is a high power density propulsion element with non-negligible risk of fast propagation rate failures that can quickly lead to LOM, LOV, and/or LOC events. Aggressive reliability requirements for manned Ares missions and the risk of fast propagating J-2X failures dictate the need for on-engine abort condition monitoring and autonomous response capability as well as traditional abort agents such as the vehicle computer, flight crew, and ground control not located on the engine. This paper describes the baseline J-2X abort subsystem concept of operations, as well as the development process for this subsystem. A strategy that leverages heritage system experience and responds to an evolving engine design as well as J-2X specific test data to support abort system development is described. The utilization of performance and failure simulation models to support abort system sensor selection, failure detectability and discrimination studies, decision threshold definition, and abort system performance verification and validation is outlined. The basis for abort false positive and false negative performance constraints is described. Development challenges associated with information shortfalls in the design cycle, abort condition coverage and response assessment, engine-vehicle interface definition, and abort system performance verification and validation are also discussed.

  17. A state-based approach to trend recognition and failure prediction for the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Nelson, Kyle S.; Hadden, George D.

    1992-01-01

    A state-based reasoning approach to trend recognition and failure prediction for the Attitude Determination and Control System (ADCS) of the Space Station Freedom (SSF) is described. The problem domain is characterized by features (e.g., trends and impending failures) that develop over a variety of time spans, anywhere from several minutes to several years. Our state-based reasoning approach, coupled with intelligent data screening, allows features to be tracked as they develop in a time-dependent manner. That is, each state machine has the ability to encode a time frame for the feature it detects. As features are detected, they are recorded and can be used as input to other state machines, creating a hierarchical feature recognition scheme. Furthermore, each machine can operate independently of the others, allowing simultaneous tracking of features. State-based reasoning was implemented in the trend recognition and the prognostic modules of a prototype Space Station Freedom Maintenance and Diagnostic System (SSFMDS) developed at Honeywell's Systems and Research Center.
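
    A minimal Python sketch of the hierarchical, state-based recognition idea is given below; the feature (a sustained rise in a screened telemetry value), window lengths, and class names are illustrative assumptions, not the SSFMDS implementation.

```python
# Hedged sketch: hierarchical, state-based feature recognition.  A low-level state
# machine tracks one time-windowed feature; its detections feed a higher-level
# machine that predicts degradation.  Thresholds and names are illustrative.

class SustainedRiseDetector:
    """Fires after a value has risen on `required` consecutive samples."""
    def __init__(self, required=5):
        self.required = required
        self.count = 0
        self.last = None

    def step(self, value):
        fired = False
        if self.last is not None and value > self.last:
            self.count += 1
            if self.count >= self.required:
                fired = True
                self.count = 0            # start looking for the next episode
        else:
            self.count = 0
        self.last = value
        return fired                      # low-level feature detected?

class DegradationPredictor:
    """Higher-level machine: repeated low-level features imply an impending failure."""
    def __init__(self, episodes_needed=3):
        self.episodes_needed = episodes_needed
        self.episodes = 0

    def step(self, feature_detected):
        if feature_detected:
            self.episodes += 1
        return self.episodes >= self.episodes_needed   # predict failure?
```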

  18. Inductive Learning Approaches for Improving Pilot Awareness of Aircraft Faults

    NASA Technical Reports Server (NTRS)

    Spikovska, Lilly; Iverson, David L.; Poll, Scott; Pryor, Anna

    2005-01-01

    Neural network flight controllers are able to accommodate a variety of aircraft control surface faults without detectable degradation of aircraft handling qualities. Under some faults, however, the effective flight envelope is reduced; this can lead to unexpected behavior if a pilot performs an action that exceeds the remaining control authority of the damaged aircraft. The goal of our work is to increase the pilot's situational awareness by informing him of the type of damage and resulting reduction in flight envelope. Our methodology integrates two inductive learning systems with novel visualization techniques. One learning system, the Inductive Monitoring System (IMS), learns to detect when a simulation includes faulty controls, while two others, Inductive Classification System (INCLASS) and multiple binary decision tree system (utilizing C4.5), determine the type of fault. In off-line training using only non-failure data, IMS constructs a characterization of nominal flight control performance based on control signals issued by the neural net flight controller. This characterization can be used to determine the degree of control augmentation required in the pitch, roll, and yaw command channels to counteract control surface failures. This derived information is typically sufficient to distinguish between the various control surface failures and is used to train both INCLASS and C4.5. Using data from failed control surface flight simulations, INCLASS and C4.5 independently discover and amplify features in IMS results that can be used to differentiate each distinct control surface failure situation. In real-time flight simulations, distinguishing features learned during training are used to classify control surface failures. Knowledge about the type of failure can be used by an additional automated system to alter its approach for planning tactical and strategic maneuvers. The knowledge can also be used directly to increase the pilot's situational awareness and inform manual maneuver decisions. Our multi-modal display of this information provides speech output to issue control surface failure warnings to a lesser-used communication channel and provides graphical displays with pilot-selectable levels of detail to issue additional information about the failure. We also describe a potential presentation for flight envelope reduction that can be viewed separately or integrated with an existing attitude indicator instrument. Preliminary results suggest that the inductive approach is capable of detecting that a control surface has failed and determining the type of fault. Furthermore, preliminary evaluations suggest that the interface discloses a concise summary of this information to the pilot.

  19. A real-time simulation evaluation of an advanced detection, isolation and accommodation algorithm for sensor failures in turbine engines

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.; Delaat, J. C.

    1986-01-01

    An advanced sensor failure detection, isolation, and accommodation (ADIA) algorithm has been developed for use with an aircraft turbofan engine control system. In a previous paper the authors described the ADIA algorithm and its real-time implementation. Subsequent improvements made to the algorithm and implementation are discussed, and the results of an evaluation presented. The evaluation used a real-time, hybrid computer simulation of an F100 turbofan engine.

  20. Study to define low voltage and low temperature operating limits of the Pioneer 10/11 Meteoroid Detection Equipment (MDE) system

    NASA Technical Reports Server (NTRS)

    Parker, C. D.

    1975-01-01

    The Pioneer 10/11 meteoroid detection equipment (MDE) pressure cells were tested at liquid nitrogen (LN2) and liquid helium (LHe) temperatures with the excitation voltage controlled as a parameter. The cells failed by firing, because of pressurizing-gas condensation, as the temperature was lowered from LN2 toward LHe temperature and again as it was raised from LHe temperature. A study was conducted to determine cell pressure as a function of temperature, and cell failure was estimated as a function of temperature and excitation voltage. The electronic system was also studied, and a profile of primary spacecraft voltage (nominally 28 Vdc) and temperature corresponding to electronic system failure was determined experimentally.

  1. Modal Acoustic Emission Used at Elevated Temperatures to Detect Damage and Failure Location in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.

    1999-01-01

    Ceramic matrix composites are being developed for elevated-temperature engine applications. A leading material system in this class of materials is silicon carbide (SiC) fiber-reinforced SiC matrix composites. Unfortunately, the nonoxide fibers, matrix, and interphase (boron nitride in this system) can react with oxygen or water vapor in the atmosphere, leading to strength degradation of the composite at elevated temperatures. For this study, constant-load stress-rupture tests were performed in air at temperatures ranging from 815 to 960 C until failure. From these data, predictions can be made for the useful life of such composites under similar stressed-oxidation conditions. During these experiments, the sounds of failure events (matrix cracking and fiber breaking) were monitored with a modal acoustic emission (AE) analyzer through transducers that were attached at the ends of the tensile bars. Such failure events, which are caused by applied stress and oxidation reactions, cause these composites to fail prematurely. Because of the nature of acoustic waveform propagation in thin tensile bars, the location of individual source events and the eventual failure event could be detected accurately.

  2. Predictability in space launch vehicle anomaly detection using intelligent neuro-fuzzy systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Toomarian, Nikzad; Barhen, Jacob; Maccalla, Ayanna; Tawel, Raoul; Thakoor, Anil; Daud, Taher

    1994-01-01

    Included in this viewgraph presentation on intelligent neuroprocessors for launch vehicle health management systems (HMS) are the following: where the flight failures have been in launch vehicles; cumulative delay time; breakdown of operations hours; failure of Mars Probe; vehicle health management (VHM) cost optimizing curve; target HMS-STS auxiliary power unit location; APU monitoring and diagnosis; and integration of neural networks and fuzzy logic.

  3. Reliability issues in active control of large flexible space structures

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.

    1986-01-01

    Efforts in this reporting period were centered on four research tasks: design of failure detection filters for robust performance in the presence of modeling errors, design of generalized parity relations for robust performance in the presence of modeling errors, design of failure-sensitive observers using the geometric system theory of Wonham, and computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management.

  4. Integrated condition monitoring of a fleet of offshore wind turbines with focus on acceleration streaming processing

    NASA Astrophysics Data System (ADS)

    Helsen, Jan; Gioia, Nicoletta; Peeters, Cédric; Jordaens, Pieter-Jan

    2017-05-01

    Offshore in particular, there is a trend to cluster wind turbines in large wind farms and, in the near future, to operate such a farm as an integrated power production plant. Predictability of individual turbine behavior across the entire fleet is key in such a strategy. Failure of turbine subcomponents should be detected well in advance to allow early planning of all necessary maintenance actions, such that they can be performed during periods of low wind and low electricity demand. In order to obtain the insights to predict component failure, it is necessary to have an integrated clean dataset spanning all turbines of the fleet for a sufficiently long period of time. This paper illustrates our big-data approach to do this. In addition, advanced failure detection algorithms are necessary to detect failures in this dataset. This paper discusses a multi-level monitoring approach that consists of a combination of machine learning and advanced physics based signal-processing techniques. The advantage of combining different data sources to detect system degradation is the higher certainty provided by multivariable criteria. In order to be able to perform long-term, high-frequency acceleration signal processing, a streaming processing approach is necessary. This allows the data to be analysed as the sensors generate it. This paper illustrates this streaming concept on 5 kHz acceleration data. A continuous spectrogram is generated from the data stream. Real-life offshore wind turbine data is used. Using this streaming approach for calculating bearing failure features on continuous acceleration data will support failure propagation detection.
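
    The streaming spectrogram step can be sketched as follows; the chunk length, window size, and overlap are illustrative choices rather than the values used on the offshore turbines.

```python
# Hedged sketch: compute a running spectrogram from streamed 5 kHz acceleration
# samples, chunk by chunk, as the sensor produces them.  Parameters are illustrative.

import numpy as np
from scipy.signal import spectrogram

FS = 5000          # Hz, acceleration sampling rate
CHUNK = FS * 10    # process ten seconds of data at a time

def stream_spectrogram(sample_stream):
    """Yield (frequencies, times, power) for each chunk of the incoming stream."""
    buffer = []
    for sample in sample_stream:
        buffer.append(sample)
        if len(buffer) >= CHUNK:
            f, t, Sxx = spectrogram(np.asarray(buffer), fs=FS,
                                    nperseg=1024, noverlap=512)
            yield f, t, Sxx      # e.g., track bearing-defect frequency bands here
            buffer = []
```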

  5. Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.

    PubMed

    Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente

    2014-07-15

    Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  6. Distributed optical fibre sensing for early detection of shallow landslides triggering.

    PubMed

    Schenato, Luca; Palmieri, Luca; Camporese, Matteo; Bersan, Silvia; Cola, Simonetta; Pasuto, Alessandro; Galtarossa, Andrea; Salandin, Paolo; Simonini, Paolo

    2017-10-31

    A distributed optical fibre sensing system is used to measure landslide-induced strains on an optical fibre buried in a large scale physical model of a slope. The fibre sensing cable is deployed at the predefined failure surface and interrogated by means of optical frequency domain reflectometry. The strain evolution is measured with centimetre spatial resolution until the occurrence of the slope failure. Standard legacy sensors measuring soil moisture and pore water pressure are installed at different depths and positions along the slope for comparison and validation. The evolution of the strain field is related to landslide dynamics with unprecedented resolution and insight. In fact, the results of the experiment clearly identify several phases within the evolution of the landslide and show that optical fibres can detect precursory signs of failure well before the collapse, paving the way for the development of more effective early warning systems.

  7. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
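
    One illustrative way to write such a probabilistic roll-up (a sketch of the idea, not the paper's exact formulation) is:

```latex
E_{\mathrm{FM}} \;=\; \sum_{i} P(f_i)\, P_{\mathrm{det}}(f_i)\, P_{\mathrm{isol}}(f_i)\, P_{\mathrm{resp}}(f_i)
```

    where $P(f_i)$ is the probability that failure $f_i$ occurs and the conditional factors are the probabilities that the relevant fault management control loop detects, isolates, and successfully responds to it in time to preserve the protected system goal.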

  8. The Infrared Automatic Mass Screening (IRAMS) System For Printed Circuit Board Fault Detection

    NASA Astrophysics Data System (ADS)

    Hugo, Perry W.

    1987-05-01

    Office of the Program Manager for TMDE (OPM TMDE) has initiated a program to develop techniques for evaluating the performance of printed circuit boards (PCB's) using infrared thermal imaging. It is OPM TMDE's expectation that the standard thermal profile (STP) will become the basis for the future rapid automatic detection and isolation of gross failure mechanisms on units under test (UUT's). To accomplish this OPM TMDE has purchased two Infrared Automatic Mass Screening (IRAMS) systems which are scheduled for delivery in 1987. The IRAMS system combines a high resolution infrared thermal imager with a test bench and diagnostic computer hardware and software. Its purpose is to rapidly and automatically compare the thermal profiles of a UUT with the STP of that unit, recalled from memory, in order to detect thermally responsive failure mechanisms in PCB's. This paper will review the IRAMS performance requirements, outline the plan for implementing the two systems and report on progress to date.

  9. Aliasing Signal Separation of Superimposed Abrasive Debris Based on Degenerate Unmixing Estimation Technique.

    PubMed

    Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei

    2018-03-15

    Leakage, caused by wear between the friction pairs of components, is the most important failure mode in aircraft hydraulic systems. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables more accurate diagnosis and prognosis of the aviation hydraulic system's ongoing failures. To address the problem of heavily superimposed debris signatures in the pipe, this paper focuses on separating the superimposed abrasive-debris signals of an RMF abrasive sensor using the degenerate unmixing estimation technique. By accurately separating the signals and calculating the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system with the wear trend and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection.

  10. An adaptive tracking observer for failure-detection systems

    NASA Technical Reports Server (NTRS)

    Sidar, M.

    1982-01-01

    The design problem of adaptive observers applied to linear, multi-input, multi-output systems with constant or varying parameters is considered. It is shown that, in order to keep the false-alarm rate (FAR) of the observer (or Kalman filter) under a certain specified value, it is necessary to have an acceptably close match between the observer (or KF) model and the system parameters. An adaptive observer algorithm is introduced in order to maintain desired system-observer model matching, despite initial mismatching and/or system parameter variations. Only a properly designed adaptive observer is able to detect abrupt changes in the system (actuator, sensor failures, etc.) with adequate reliability and FAR. Conditions for convergence of the adaptive process were obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors and accurate and fast parameter identification, in both deterministic and stochastic cases.

  11. A preliminary evaluation of a failure detection filter for detecting and identifying control element failures in a transport aircraft

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1985-01-01

    The application of the failure detection filter to the detection and identification of aircraft control element failures was evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 Aircraft. Simulation results show that with a simple correlator and threshold detector used to process the filter residuals, the failure detection performance is seriously degraded by the effects of turbulence.

  12. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.

  13. Eigenstructure Assignment for Fault Tolerant Flight Control Design

    NASA Technical Reports Server (NTRS)

    Sobel, Kenneth; Joshi, Suresh (Technical Monitor)

    2002-01-01

    In recent years, fault tolerant flight control systems have gained an increased interest for high performance military aircraft as well as civil aircraft. Fault tolerant control systems can be described as either active or passive. An active fault tolerant control system has to either reconfigure or adapt the controller in response to a failure. One approach is to reconfigure the controller based upon detection and identification of the failure. Another approach is to use direct adaptive control to adjust the controller without explicitly identifying the failure. In contrast, a passive fault tolerant control system uses a fixed controller which achieves acceptable performance for a presumed set of failures. We have obtained a passive fault tolerant flight control law for the F/A-18 aircraft which achieves acceptable handling qualities for a class of control surface failures. The class of failures includes the symmetric failure of any one control surface being stuck at its trim value. A comparison was made of an eigenstructure assignment gain designed for the unfailed aircraft with a fault tolerant multiobjective optimization gain. We have shown that time responses for the unfailed aircraft using the eigenstructure assignment gain and the fault tolerant gain are identical. Furthermore, the fault tolerant gain achieves MIL-F-8785C specifications for all failure conditions.

  14. Life Cost Based FMEA Manual: A Step by Step Guide to Carrying Out a Cost-based Failure Modes and Effects Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhee, Seung; Spencer, Cherrill; /Stanford U. /SLAC

    2009-01-23

    Failure occurs when one or more of the intended functions of a product are no longer fulfilled to the customer's satisfaction. The most critical product failures are those that escape design reviews and in-house quality inspection and are found by the customer. The product may work for a while until its performance degrades to an unacceptable level, or it may not have worked even before the customer took possession of the product. The end results of failures that may lead to unsafe conditions or major losses of the main function are rated high in severity. Failure Modes and Effects Analysis (FMEA) is a tool widely used in the automotive, aerospace, and electronics industries to identify, prioritize, and eliminate known potential failures, problems, and errors from systems under design, before the product is released (Stamatis, 1997). Several industrial FMEA standards such as those published by the Society of Automotive Engineers, US Department of Defense, and the Automotive Industry Action Group employ the Risk Priority Number (RPN) to measure risk and severity of failures. The Risk Priority Number (RPN) is a product of 3 indices: Occurrence (O), Severity (S), and Detection (D). In a traditional FMEA process, design engineers typically analyze the 'root cause' and 'end-effects' of potential failures in a sub-system or component and assign penalty points through the O, S, D values to each failure. The analysis is organized around categories called failure modes, which link the causes and effects of failures. A few actions are taken upon completing the FMEA worksheet. The RPN column generally will identify the high-risk areas. The idea of performing FMEA is to eliminate or reduce known and potential failures before they reach the customers. Thus, a plan of action must be in place for the next task. Not all failures can be resolved during the product development cycle, thus prioritization of actions must be made within the design group. One definition of detection difficulty (D) is how well the organization controls the development process. Another definition relates to the detectability of a particular failure in the product when it is in the hands of the customer. The former asks 'What is the chance of catching the problem before we give it to the customer?' The latter asks 'What is the chance of the customer catching the problem before the problem results in a catastrophic failure?' (Palady, 1995) These differing definitions confuse the FMEA users when one tries to determine detection difficulty. Are we trying to measure how easy it is to detect where a failure has occurred or when it has occurred? Or are we trying to measure how easy or difficult it is to prevent failures? Ordinal scale variables are used to rank-order industries such as hotels, restaurants, and movies (note that a 4 star hotel is not necessarily twice as good as a 2 star hotel). Ordinal values preserve rank in a group of items, but the distance between the values cannot be measured since a distance function does not exist. Thus, the product or sum of ordinal variables loses its rank-ordering meaning, since each parameter has a different scale. The RPN is a product of 3 independent ordinal variables; it can indicate that some failure types are 'worse' than others, but gives no quantitative indication of their relative effects. To resolve the ambiguity of measuring detection difficulty and the irrational logic of multiplying 3 ordinal indices, a new methodology, Life Cost-Based FMEA, was created to overcome these shortcomings.
    Life Cost-Based FMEA measures failure/risk in terms of monetary cost. Cost is a universal parameter that can be easily related to severity by engineers and others. Thus, failure cost can be estimated in its simplest form as Expected Failure Cost = Σ_{i=1}^{n} p_i c_i, where p_i is the probability of a particular failure occurring, c_i is the monetary cost associated with that particular failure, and n is the total number of failure scenarios. FMEA is most effective when there are inputs into it from all concerned disciplines of the product development team. However, FMEA is a long process; it can become tedious and will not be effective if too many people participate. An ideal team should have 3 to 4 people drawn from the design, manufacturing, and service departments if possible. Depending on how complex the system is, the entire process can take anywhere from one to four weeks working full time. Thus, it is important to agree on the time commitment before starting the analysis; otherwise, anxious managers might stop the procedure before it is completed.
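
    The expected-cost calculation above contrasts directly with the ordinal product O x S x D. The sketch below works through both on a few hypothetical failure scenarios; all probability and cost figures are invented for illustration and are not from this record.

```python
# Minimal sketch comparing a traditional RPN ranking with the Life Cost-Based
# FMEA expected failure cost. All scenario numbers are hypothetical.

failure_scenarios = [
    # (name, occurrence O, severity S, detection D, probability p_i, unit cost c_i in $)
    ("connector corrosion", 4, 7, 3, 0.020, 1500.0),
    ("seal leak",           2, 9, 5, 0.005, 9000.0),
    ("firmware hang",       6, 4, 2, 0.080,  300.0),
]

def rpn(o, s, d):
    """Traditional Risk Priority Number: product of three ordinal indices."""
    return o * s * d

def expected_failure_cost(scenarios):
    """Expected Failure Cost = sum over i of p_i * c_i (monetary units)."""
    return sum(p * c for _, _, _, _, p, c in scenarios)

for name, o, s, d, p, c in failure_scenarios:
    print(f"{name:20s} RPN={rpn(o, s, d):4d}  expected cost=${p * c:8.2f}")

print(f"Total expected failure cost: ${expected_failure_cost(failure_scenarios):.2f}")
```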

  15. A simple, inexpensive video camera setup for the study of avian nest activity

    USGS Publications Warehouse

    Sabine, J.B.; Meyers, J.M.; Schweitzer, Sara H.

    2005-01-01

    Time-lapse video photography has become a valuable tool for collecting data on avian nest activity and depredation; however, commercially available systems are expensive (>USA $4000/unit). We designed an inexpensive system to identify causes of nest failure of American Oystercatchers (Haematopus palliatus) and assessed its utility at Cumberland Island National Seashore, Georgia. We successfully identified raccoon (Procyon lotor), bobcat (Lynx rufus), American Crow (Corvus brachyrhynchos), and ghost crab (Ocypode quadrata) predation on oystercatcher nests. Other detected causes of nest failure included tidal overwash, horse trampling, abandonment, and human destruction. System failure rates were comparable with commercially available units. Our system's efficacy and low cost (<$800) provided useful data for the management and conservation of the American Oystercatcher.

  16. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  17. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  18. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  19. 46 CFR 62.25-1 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... control system; (3) A safety control system, if required by § 62.25-15; (4) Instrumentation to monitor... if instrumentation is not continuously monitored or is inappropriate for detection of a failure or...

  20. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provides hints toward a modeling approach that provides formal connectivity between the nominal (SE) and off-nominal (SHM and FM) aspects of functions and designs. This paper describes a formal modeling approach to the initial phases of the development process that integrates the nominal and off-nominal perspectives in a model that unites SE goals and functions with the failure to achieve those goals and functions (SHM/FM).
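
    As a purely illustrative aid, and not the paper's formal Goal-Function Tree notation, the sketch below pairs a nominal goal and function (the SE view) with its off-nominal counterpart and the FM detection and response attached to it; all field names and the example entry are assumptions.

```python
# Illustrative pairing of nominal (SE) and off-nominal (SHM/FM) views of one function.
# This is a sketch only; it does not reproduce the paper's Goal-Function Tree formalism.
from dataclasses import dataclass, field

@dataclass
class GoalFunctionNode:
    goal: str           # what the system is to achieve (SE)
    function: str       # the function intended to achieve it (SE)
    failure: str        # "unacceptable performance of intended function" (SHM)
    detection: str      # FM means of detecting the failure
    response: str       # FM operational response
    children: list = field(default_factory=list)   # lower-level goals/functions

ascent = GoalFunctionNode(
    goal="Deliver payload to orbit",
    function="Provide first-stage thrust",
    failure="Loss of thrust on one engine",
    detection="Chamber pressure below redline",
    response="Engine-out guidance retargeting or abort",
)
print(ascent.goal, "->", ascent.detection)
```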

  1. Advanced Self-Calibrating, Self-Repairing Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Eckhoff, Anthony J. (Inventor); Angel, Lucena R. (Inventor); Perotti, Jose M. (Inventor)

    2002-01-01

    An improved self-calibrating and self-repairing Data Acquisition System (DAS) for use in inaccessible areas, such as onboard spacecraft, capable of autonomously performing required system health checks and failure detection. When required, self-repair is implemented utilizing a "spare parts/tool box" system. The available number of spare components primarily depends upon each component's predicted reliability, which may be determined using Mean Time Between Failures (MTBF) analysis. Failing or degrading components are electronically removed and disabled to reduce power consumption, before being electronically replaced with spare components.
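
    One common way to turn an MTBF figure into a spare count is sketched below, under the usual assumption of exponentially distributed failures so that the number of failures over a mission is Poisson; the MTBF, mission length, and confidence level are invented and are not taken from the patent.

```python
# Hypothetical sketch of sizing a "spare parts/tool box" from predicted component
# reliability, assuming exponentially distributed failures (Poisson failure counts).
import math

def spares_needed(mtbf_hours, mission_hours, confidence=0.99):
    """Smallest spare count k such that P(failures <= k) >= confidence."""
    lam = mission_hours / mtbf_hours          # expected number of failures
    term = math.exp(-lam)                     # Poisson P(0 failures)
    cumulative, k = term, 0
    while cumulative < confidence:
        k += 1
        term *= lam / k                       # Poisson recurrence P(k) = P(k-1) * lam / k
        cumulative += term
    return k

# Example: a channel with a 50,000 h MTBF on a 10-year (~87,600 h) mission.
print(spares_needed(mtbf_hours=50_000, mission_hours=87_600))
```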

  2. Tribology symposium -- 1994. PD-Volume 61

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masudi, H.

    This year marks the first Tribology Symposium within the Energy-Sources Technology Conference, sponsored by the ASME Petroleum Division. The program was divided into five sessions: Tribology in High Technology, a historical discussion of some watershed events in tribology; Research/Development, design, research and development on modern manufacturing; Tribology in Manufacturing, the impact of tribology on modern manufacturing; Design/Design Representation, aspects of design related to tribological systems; and Failure Analysis, an analysis of failure, failure detection, and failure monitoring as relating to manufacturing processes. Eleven papers have been processed separately for inclusion on the data base.

  3. On Robustness of Deadlock Detection Algorithms for Distributed Computing Systems.

    DTIC Science & Technology

    1982-02-01

    terms make it much more difficult to detect, avoid or prevent than in the earlier multiprogramming centralized computing systems. Deadlock preven...failure of site C would not have been critical after the B had been sent. The effect of a type c site (site _ in our example) failing would have no

  4. Models of human problem solving - Detection, diagnosis, and compensation for system failures

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1983-01-01

    The role of the human operator as a problem solver in man-machine systems such as vehicles, process plants, transportation networks, etc. is considered. Problem solving is discussed in terms of detection, diagnosis, and compensation. A wide variety of models of these phases of problem solving are reviewed and specifications for an overall model outlined.

  5. Space Vehicle Reliability Modeling in DIORAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tornga, Shawn Robert

    When modeling the system performance of space-based detection systems, it is important to consider spacecraft reliability. As space vehicles age, their components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically, failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.

  6. Closed-Loop Evaluation of an Integrated Failure Identification and Fault Tolerant Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine; Khong, Thuan

    2006-01-01

    Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. Such systems developed for failure detection, identification, and reconfiguration, as well as upset recovery, need to be evaluated over broad regions of the flight envelope or under extreme flight conditions, and should include various sources of uncertainty. To apply formal robustness analysis, formulation of linear fractional transformation (LFT) models of complex parameter-dependent systems is required, which represent system uncertainty due to parameter uncertainty and actuator faults. This paper describes a detailed LFT model formulation procedure from the nonlinear model of a transport aircraft by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The closed-loop system is evaluated over the entire flight envelope based on the generated LFT model which can cover nonlinear dynamics. The robustness analysis results of the closed-loop fault tolerant control system of a transport aircraft are presented. A reliable flight envelope (safe flight regime) is also calculated from the robust performance analysis results, over which the closed-loop system can achieve the desired performance of command tracking and failure detection.

  7. Detecting failure of climate predictions

    USGS Publications Warehouse

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
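
    For the single-model case, the idea can be sketched roughly as below: compare the empirical distribution function of the observations with that of the model's predicted values and flag failure when the maximum divergence exceeds a threshold. The data, the threshold, and the use of a simple two-sample KS-type statistic are assumptions for illustration, not the paper's exact procedure.

```python
# Rough sketch of detecting prediction failure via the empirical distribution function:
# flag failure when the maximum ECDF divergence (a KS-type statistic) is too large.
# Synthetic data and an arbitrary threshold are used purely for illustration.
import numpy as np

def ecdf(sample, x):
    """Empirical distribution function of `sample` evaluated at points `x`."""
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, x, side="right") / sample.size

def prediction_failure(observed, predicted, threshold=0.4):
    grid = np.union1d(observed, predicted)
    divergence = np.max(np.abs(ecdf(observed, grid) - ecdf(predicted, grid)))
    return divergence, divergence > threshold

rng = np.random.default_rng(0)
predicted = rng.normal(0.0, 1.0, 500)      # model's predicted distribution of a variable
observed = rng.normal(1.5, 1.0, 40)        # observations that have drifted away
print(prediction_failure(observed, predicted))
```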

  8. Independent Orbiter Assessment (IOA): Analysis of the purge, vent and drain subsystem

    NASA Technical Reports Server (NTRS)

    Bynum, M. C., III

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter PV and D (Purge, Vent and Drain) Subsystem hardware. The PV and D Subsystem controls the environment of unpressurized compartments and window cavities, senses hazardous gases, and purges the Orbiter/ET disconnect. The subsystem is divided into six systems: Purge System (controls the environment of unpressurized structural compartments); Vent System (controls the pressure of unpressurized compartments); Drain System (removes water from unpressurized compartments); Hazardous Gas Detection System (HGDS) (monitors hazardous gas concentrations); Window Cavity Conditioning System (WCCS) (maintains clear windows and provides pressure control of the window cavities); and External Tank/Orbiter Disconnect Purge System (prevents cryo-pumping/icing of disconnect hardware). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Four of the sixty-two failure modes analyzed were determined as single failures which could result in the loss of crew or vehicle. A possible loss of mission could result if any of twelve single failures occurred. Two of the criticality 1/1 failures are in the Window Cavity Conditioning System (WCCS) outer window cavity, where leakage and/or restricted flow will cause failure to depressurize/repressurize the window cavity. Two criticality 1/1 failures represent leakage and/or restricted flow in the Orbiter/ET disconnect purge network, which prevents cryo-pumping/icing of disconnect hardware.

  9. Design and fabrication of prototype system for early warning of impending bearing failure

    NASA Technical Reports Server (NTRS)

    Meacher, J.; Chen, H. M.

    1974-01-01

    A test program was conducted with the objective of developing a method and equipment for on-line monitoring of installed ball bearings to detect deterioration or impending failure of the bearings. The program was directed at the spin-axis bearings of a control moment gyro. The bearings were tested at speeds of 6000 and 8000 rpm, thrust loads from 50 to 1000 pounds, with a wide range of lubrication conditions, with and without a simulated fatigue spall implanted in the inner race ball track. It was concluded that a bearing monitor system based on detection and analysis of modulations of a fault indicating bearing resonance frequency can provide a low threshold of sensitivity.

  10. Oxygen sensor signal validation for the safety of the rebreather diver.

    PubMed

    Sieber, Arne; L'abbate, Antonio; Bedini, Remo

    2009-03-01

    In electronically controlled, closed-circuit rebreather diving systems, the partial pressure of oxygen inside the breathing loop is controlled with three oxygen sensors, a microcontroller and a solenoid valve - critical components that may fail. State-of-the-art detection of sensor failure, based on a voting algorithm, may fail under circumstances where two or more sensors show the same but incorrect values. The present paper details a novel rebreather controller that offers true sensor-signal validation, thus allowing efficient and reliable detection of sensor failure. The core components of this validation system are two additional solenoids, which allow an injection of oxygen or diluent gas directly across the sensor membrane.
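
    For context, the conventional three-sensor voting logic that the paper improves upon can be sketched roughly as below: accept the median PO2 reading and flag any cell that deviates from it by more than a tolerance. The readings and tolerance are invented; the second example shows the common-mode case that plain voting cannot catch and that the paper's active oxygen/diluent injection test is designed to expose.

```python
# Illustrative sketch of conventional three-sensor PO2 voting (not the paper's
# validation method): accept the median and flag sensors that disagree with it.
def vote_po2(readings_bar, tolerance=0.1):
    """Return (voted PO2, list of suspect sensor indices) for three readings."""
    voted = sorted(readings_bar)[1]                     # median of three
    suspects = [i for i, r in enumerate(readings_bar)
                if abs(r - voted) > tolerance]
    return voted, suspects

print(vote_po2([1.28, 1.30, 0.62]))   # one cell failing low -> flagged
print(vote_po2([0.70, 0.71, 0.69]))   # all agree -- but could all be wrong, the
                                      # case the active injection test detects
```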

  11. Real-time diagnostics for a reusable rocket engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Merrill, W.; Duyar, A.

    1992-01-01

    A hierarchical, decentralized diagnostic system is proposed for the Real-Time Diagnostic System component of the Intelligent Control System (ICS) for reusable rocket engines. The proposed diagnostic system has three layers of information processing: condition monitoring, fault mode detection, and expert system diagnostics. The condition monitoring layer is the first level of signal processing. Here, important features of the sensor data are extracted. These processed data are then used by the higher level fault mode detection layer to do preliminary diagnosis on potential faults at the component level. Because of the closely coupled nature of the rocket engine propulsion system components, it is expected that a given engine condition may trigger more than one fault mode detector. Expert knowledge is needed to resolve the conflicting reports from the various failure mode detectors. This is the function of the diagnostic expert layer. Here, the heuristic nature of this decision process makes it desirable to use an expert system approach. Implementation of the real-time diagnostic system described above requires a wide spectrum of information processing capability. Generally, in the condition monitoring layer, fast data processing is often needed for feature extraction and signal conditioning. This is usually followed by some detection logic to determine the selected faults on the component level. Three different techniques are used to attack different fault detection problems in the NASA LeRC ICS testbed simulation. The first technique employed is the neural network application for real-time sensor validation which includes failure detection, isolation, and accommodation. The second approach demonstrated is the model-based fault diagnosis system using on-line parameter identification. Besides these model based diagnostic schemes, there are still many failure modes which need to be diagnosed by the heuristic expert knowledge. The heuristic expert knowledge is implemented using a real-time expert system tool called G2 by Gensym Corp. Finally, the distributed diagnostic system requires another level of intelligence to oversee the fault mode reports generated by component fault detectors. The decision making at this level can best be done using a rule-based expert system. This level of expert knowledge is also implemented using G2.

  12. Redundancy relations and robust failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.

    1984-01-01

    All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations given the inevitable presence of model uncertainties. The problem of determining redundancy relations which are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.

  13. Sliding mode based fault detection, reconstruction and fault tolerant control scheme for motor systems.

    PubMed

    Mekki, Hemza; Benzineb, Omar; Boukhetala, Djamel; Tadjine, Mohamed; Benbouzid, Mohamed

    2015-07-01

    The fault-tolerant control problem belongs to the domain of complex control systems in which inter-control-disciplinary information and expertise are required. This paper proposes an improved fault detection, reconstruction and fault-tolerant control (FTC) scheme for motor systems (MS) with typical faults. For this purpose, a sliding mode controller (SMC) with an integral sliding surface is adopted. This controller can make the output of the system track the desired position reference signal in finite time and obtain a better dynamic response and anti-disturbance performance. However, this controller cannot deal directly with total system failures. It is therefore combined with a sliding mode observer (SMO), designed to detect and reconstruct the faults on-line and to provide a sensorless control strategy that can achieve tolerance to a wide class of total additive failures. Closed-loop stability is proved using Lyapunov stability theory. Simulation results in healthy and faulty conditions confirm the reliability of the suggested framework.
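
    As a loose illustration only, the sketch below simulates a sliding mode position controller for a motor reduced to a double integrator with a bounded disturbance. It uses a plain first-order sliding surface and a smoothed switching term rather than the paper's integral surface and observer; the model, gains, and disturbance are all assumptions.

```python
# Toy sliding mode position control of a double-integrator "motor" with an unknown
# bounded disturbance. Not the paper's scheme; parameters are illustrative only.
import math

J, lam, K, phi = 0.01, 5.0, 100.0, 0.05   # inertia, surface slope, switching gain, boundary layer
dt, x, v = 1e-4, 0.0, 0.0                 # time step, position, velocity
x_d = 1.0                                 # constant position reference

for step in range(int(1.0 / dt)):
    t = step * dt
    e, e_dot = x_d - x, -v                # tracking error and its derivative
    s = e_dot + lam * e                   # sliding surface s = e_dot + lam * e
    u = J * (lam * e_dot + K * math.tanh(s / phi))   # smoothed sign() limits chattering
    disturbance = 0.2 * math.sin(50.0 * t)           # unknown bounded disturbance
    a = u / J + disturbance
    v += a * dt
    x += v * dt

print(f"final position ~ {x:.3f} (reference {x_d})")
```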

  14. Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.

    DTIC Science & Technology

    1987-07-01

    detection schemes and temporary failures. The circuit consists of six different adders with concurrent error detection schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding...

  15. Reliability Evaluation of Computer Systems

    DTIC Science & Technology

    1979-04-01

    detection mechanisms. The model provided values for the system availability, mean time before failure (MTBF), and the proportion of time that the system...Stanford University Computer Science 311 (also Electrical Engineering 482), Advanced Computer Organization. Graduate course in computer architecture

  16. Applicability of a Crack-Detection System for Use in Rotor Disk Spin Test Experiments Being Evaluated

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.; Roth, Don J.

    2004-01-01

    Engine makers and aviation safety government institutions continue to have a strong interest in monitoring the health of rotating components in aircraft engines to improve safety and to lower maintenance costs. To prevent catastrophic failure (burst) of the engine, they use nondestructive evaluation (NDE) and major overhauls for periodic inspections to discover any cracks that might have formed. The lowest-cost NDE technique, fluorescent penetrant inspection, can fail to disclose cracks that are tightly closed at rest or that are below the surface. The NDE eddy current system is more effective at detecting both crack types, but it requires careful setup and operation, and only a small portion of the disk can be practically inspected. Health-monitoring systems require the sensor system to sustain normal function in a severe environment, to transmit a signal if a crack detected in the component is above a predetermined length (but below the length that would lead to failure), and to act neutrally upon the overall performance of the engine system without interfering with engine maintenance operations. Therefore, more reliable diagnostic tools and high-level techniques for detecting damage and monitoring the health of rotating components are essential in maintaining engine safety and reliability and in assessing life.

  17. Multiple IMU system development, volume 1

    NASA Technical Reports Server (NTRS)

    Landey, M.; Mckern, R.

    1974-01-01

    A redundant gimballed inertial system is described. System requirements and mechanization methods are defined and hardware and software development is described. Failure detection and isolation algorithms are presented and technology achievements described. Application of the system as a test tool for shuttle avionics concepts is outlined.

  18. The application of the detection filter to aircraft control surface and actuator failure detection and isolation

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Motyka, P.; Hall, S. R.

    1985-01-01

    The performance of the detection filter in detecting and isolating aircraft control surface and actuator failures is evaluated. The basic detection filter theory assumption of no direct input-output coupling is violated in this application due to the use of acceleration measurements for detecting and isolating failures. With this coupling, residuals produced by control surface failures may only be constrained to a known plane rather than to a single direction. A detection filter design with such planar failure signatures is presented, with the design issues briefly addressed. In addition, a modification to constrain the residual to a single known direction even with direct input-output coupling is also presented. Both the detection filter and the modification are tested using a nonlinear aircraft simulation. While no thresholds were selected, both filters demonstrated an ability to detect control surface and actuator failures. Failure isolation may be a problem if there are several control surfaces which produce similar effects on the aircraft. In addition, the detection filter was sensitive to wind turbulence and modeling errors.
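
    The isolation-by-direction idea can be illustrated crudely as below: a given surface failure drives the filter residual along a direction assumed known here, and isolation projects the residual onto each candidate direction. The residual vector, failure directions, and threshold are invented; this is not the detection filter design itself.

```python
# Sketch of isolating a control surface failure by checking which known failure
# direction best explains the residual. All vectors and the threshold are made up.
import numpy as np

failure_directions = {
    "elevator": np.array([0.9, 0.1, 0.0]),
    "aileron":  np.array([0.1, 0.8, 0.3]),
}

def isolate(residual, directions, threshold=0.5):
    """Return the surface whose direction best explains the residual, if any."""
    best, best_score = None, 0.0
    for name, d in directions.items():
        d = d / np.linalg.norm(d)
        score = abs(residual @ d) / (np.linalg.norm(residual) + 1e-12)
        if score > best_score:
            best, best_score = name, score
    return best if best_score > threshold else None

print(isolate(np.array([1.8, 0.25, 0.02]), failure_directions))  # -> elevator
```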

  19. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  20. Deterministic Reconfigurable Control Design for the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Wagner, Elaine A.; Burken, John J.; Hanson, Curtis E.; Wohletz, Jerry M.

    1998-01-01

    In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. Four reconfigurable control design methods were investigated for the X-33 vehicle: Redistributed Pseudo-Inverse, General Constrained Optimization, Automated Failure Dependent Gain Schedule, and an Off-line Nonlinear General Constrained Optimization. The Off-line Nonlinear General Constrained Optimization approach was chosen for implementation on the X-33. Two example failures are shown: a right outboard elevon jam at 25 degrees at a Mach 3 entry condition, and a left rudder jam at 30 degrees. Note, however, that reconfigurable control laws have been designed for the entire flight envelope. Comparisons between responses with the nominal controller and reconfigurable controllers show the benefits of reconfiguration. Single jam aerosurface failures were considered, and failure detection and identification is considered accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.

  1. Virtual-Instrument-Based Online Monitoring System for Hands-on Laboratory Experiment of Partial Discharges

    ERIC Educational Resources Information Center

    Karmakar, Subrata

    2017-01-01

    Online monitoring of high-voltage (HV) equipment is a vital tool for early detection of insulation failure. Most insulation failures are caused by partial discharges (PDs) inside the HV equipment. Because of the very high cost of establishing HV equipment facility and the limitations of electromagnetic interference-screened laboratories, only a…

  2. A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon

    2009-01-01

    Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes, Effects and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators; furthermore, robustness of these diagnostic routines to sensor faults is demonstrated by showing the ability to distinguish between sensor faults and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.

  3. From Diagnosis to Action: An Automated Failure Advisor for Human Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; Spirkovska, Lilly; Baskaran, Vijayakumar; Morris, Paul; Mcdermott, William; Ossenfort, John; Bajwa, Anupa

    2015-01-01

    The major goal of current space system development at NASA is to enable human travel to deep space locations such as Mars and asteroids. At that distance, round-trip communication with ground operators may take close to an hour, so it becomes unfeasible to seek ground operator advice for problems that require immediate attention, either for crew safety or for activities that need to be performed at specific times for the attainment of scientific results. To achieve this goal, major reliance will need to be placed on automation systems capable of aiding the crew in detecting and diagnosing failures, assessing consequences of these failures, and providing guidance in repair activities that may be required. We report here on the most current step in the continuing development of such a system, and that is the addition of a Failure Response Advisor. In simple terms, we have a system in place, the Advanced Caution and Warning System (ACAWS), to tell us what happened (failure diagnosis) and what happened because that happened (failure effects). The Failure Response Advisor will tell us what to do about it, how long until something must be done, and why it is important that something be done, and it will begin to approach the complex reasoning that is generally required for an optimal approach to automated system health management. This advice is based on the criticality and various timing elements, such as durations of activities and of component repairs, failure effects delay, and other factors. The failure advice is provided to operators (crew and mission controllers) together with the diagnostic and effects information. The operators also have the option to drill down for more information about the failure and the reasons for any suggested priorities.
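
    A loose sketch of ranking failure responses by criticality and urgency, in the spirit of the advisor described above, is shown below. The scoring rule, fields, and example failures are invented for illustration; they are not the ACAWS implementation.

```python
# Illustrative prioritisation of failure advisories by criticality and time slack.
# Everything here is hypothetical and stands in for the richer reasoning described above.
from dataclasses import dataclass

@dataclass
class FailureAdvice:
    name: str
    criticality: int            # 1 (minor) .. 5 (crew/vehicle risk)
    time_to_effect_min: float   # time until the failure effect matters
    repair_duration_min: float  # time the response is expected to take

    def urgency(self):
        # Higher criticality first; within a criticality, less slack ranks first.
        slack = self.time_to_effect_min - self.repair_duration_min
        return (self.criticality, -slack)

advisories = [
    FailureAdvice("CO2 scrubber fan degraded", 4, 120.0, 45.0),
    FailureAdvice("Star tracker B offline",    2, 600.0, 30.0),
    FailureAdvice("Coolant pump A failed",     4,  40.0, 35.0),
]

for advice in sorted(advisories, key=FailureAdvice.urgency, reverse=True):
    print(advice.name)
```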

  4. Sensory redundancy management: The development of a design methodology for determining threshold values through a statistical analysis of sensor output data

    NASA Technical Reports Server (NTRS)

    Scalzo, F.

    1983-01-01

    Sensor redundancy management (SRM) requires a system which will detect failures and reconstruct avionics accordingly. A probability density function to determine false alarm rates was generated using an algorithmic approach. Microcomputer software was developed which will print out tables of values for the cumulative probability of being in the domain of failure; system reliability; and false alarm probability, given a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
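
    A minimal sketch of the false-alarm arithmetic behind such tables is given below, assuming the healthy sensor residual is zero-mean Gaussian so that the chance of exceeding a symmetric threshold is a normal tail probability. The noise level and thresholds are illustrative values, not flight data.

```python
# False alarm probability for a threshold test on a healthy Gaussian residual.
# Assumes N(0, sigma^2) noise; sigma and thresholds are placeholders.
import math

def false_alarm_probability(threshold, sigma):
    """P(|residual| > threshold) for a healthy N(0, sigma^2) residual."""
    return math.erfc(threshold / (sigma * math.sqrt(2.0)))

for k in (2.0, 3.0, 4.0):
    print(f"threshold = {k:.0f} sigma -> false alarm probability = "
          f"{false_alarm_probability(k, 1.0):.2e} per sample")
```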

  5. Analysis of Alerting System Failures in Commercial Aviation Accidents

    NASA Technical Reports Server (NTRS)

    Mumaw, Randall J.

    2017-01-01

    The role of an alerting system is to make the system operator (e.g., pilot) aware of an impending hazard or unsafe state so the hazard can be avoided or managed successfully. A review of 46 commercial aviation accidents (between 1998 and 2014) revealed that, in the vast majority of events, either the hazard was not alerted or relevant hazard alerting occurred but failed to aid the flight crew sufficiently. For this set of events, alerting system failures were placed in one of five phases: Detection, Understanding, Action Selection, Prioritization, and Execution. This study also reviewed the evolution of alerting system schemes in commercial aviation, which revealed naive assumptions about pilot reliability in monitoring flight path parameters; specifically, pilot monitoring was assumed to be more effective than it actually is. Examples are provided of the types of alerting system failures that have occurred, and recommendations are provided for alerting system improvements.

  6. Advances in Micromechanics Modeling of Composites Structures for Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Moncada, Albert

    Although high performance, light-weight composites are increasingly being used in applications ranging from aircraft, rotorcraft, weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking and delamination. An important element in achieving reliable composite systems is a strong capability of assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro level damage; this limits the capability of data driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analysis, a representative unit cell (RUC) with a common fiber packing arrangement is used. The effect of variation in this arrangement within the RUC has been studied and results indicate this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure. The model data was verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states such as fiber-matrix debonding in composite structures with surface bonded piezoelectric sensors.

  7. Comparison of different modelling approaches of drive train temperature for the purposes of wind turbine failure detection

    NASA Astrophysics Data System (ADS)

    Tautz-Weinert, J.; Watson, S. J.

    2016-09-01

    Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus is here on automated training and testing on a farm level to enable an on-line system, which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
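
    A toy sketch of the linear-combination baseline among the compared techniques is shown below: fit a normal-behaviour temperature model on healthy SCADA data and monitor the residual of new readings. The inputs, coefficients, and data are synthetic assumptions, not turbine data from the study.

```python
# Normal-behaviour modelling of a bearing temperature as a linear combination of
# SCADA inputs, trained on synthetic "healthy" data; residuals flag possible wear.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
power = rng.uniform(0, 2000, n)              # kW
ambient = rng.uniform(-5, 25, n)             # deg C
speed = rng.uniform(8, 16, n)                # rpm
temp = 30 + 0.01 * power + 0.6 * ambient + 0.5 * speed + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), power, ambient, speed])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)     # healthy-period training

def residual(power_kw, ambient_c, speed_rpm, measured_temp_c):
    predicted = coef @ np.array([1.0, power_kw, ambient_c, speed_rpm])
    return measured_temp_c - predicted

# A reading several degrees above the model prediction would be flagged for inspection.
print(round(residual(1500.0, 10.0, 12.0, 63.0), 1))
```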

  8. The WorkPlace distributed processing environment

    NASA Technical Reports Server (NTRS)

    Ames, Troy; Henderson, Scott

    1993-01-01

    Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
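
    The abstract does not spell out WorkPlace's failure detection mechanism, so the sketch below shows a generic heartbeat-based scheme of the kind such a network layer might automate; the class and node names are invented.

```python
# Generic heartbeat-based failure detection sketch: a node is declared failed when
# no heartbeat has been received within a timeout. Names and timeout are assumptions.
import time

class HeartbeatMonitor:
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def heartbeat(self, node_id):
        self.last_seen[node_id] = time.monotonic()

    def failed_nodes(self):
        now = time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout_s]

monitor = HeartbeatMonitor(timeout_s=0.5)
monitor.heartbeat("blackboard-1")
monitor.heartbeat("worker-7")
time.sleep(0.6)
monitor.heartbeat("worker-7")          # only worker-7 keeps reporting
print(monitor.failed_nodes())          # -> ['blackboard-1']
```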

  9. [Research and implementation of a real-time monitoring system for running status of medical monitors based on the internet of things].

    PubMed

    Li, Yiming; Qian, Mingli; Li, Long; Li, Bin

    2014-07-01

    This paper proposed a real-time monitoring system for running status of medical monitors based on the internet of things. In the aspect of hardware, a solution of ZigBee networks plus 470 MHz networks is proposed. In the aspect of software, graphical display of monitoring interface and real-time equipment failure alarm is implemented. The system has the function of remote equipment failure detection and wireless localization, which provides a practical and effective method for medical equipment management.

  10. Prognostics using Engineering and Environmental Parameters as Applied to State of Health (SOH) Radionuclide Aerosol Sampler Analyzer (RASA) Real-Time Monitoring

    NASA Astrophysics Data System (ADS)

    Hutchenson, K. D.; Hartley-McBride, S.; Saults, T.; Schmidt, D. P.

    2006-05-01

    The International Monitoring System (IMS) is composed in part of radionuclide particulate and gas monitoring systems. Monitoring the operational status of these systems is an important aspect of nuclear weapon test monitoring. Quality data, process control techniques, and predictive models are necessary to detect and predict system component failures. Predicting failures in advance provides time to mitigate these failures, thus minimizing operational downtime. The Provisional Technical Secretariat (PTS) requires IMS radionuclide systems be operational 95 percent of the time. The United States National Data Center (US NDC) offers contributing components to the IMS. This effort focuses on the initial research and process development using prognostics for monitoring and predicting failures of the RASA two (2) days into the future. The predictions, using time series methods, are input to an expert decision system, called SHADES (State of Health Airflow and Detection Expert System). The results enable personnel to make informed judgments about the health of the RASA system. Data are read from a relational database, processed, and displayed to the user in a GIS as a prototype GUI. This procedure mimics the real-time application process that could be implemented as an operational system. This initial proof-of-concept effort developed predictive models focused on RASA components for a single site (USP79). Future work shall include the incorporation of other RASA systems, as well as their environmental conditions that play a significant role in performance. Similarly, SHADES currently accommodates specific component behaviors at this one site. Future work shall also include important environmental variables that play an important part of the prediction algorithms.

  11. System and method for floating-substrate passive voltage contrast

    DOEpatents

    Jenkins, Mark W [Albuquerque, NM; Cole, Jr., Edward I.; Tangyunyong, Paiboon [Albuquerque, NM; Soden, Jerry M [Placitas, NM; Walraven, Jeremy A [Albuquerque, NM; Pimentel, Alejandro A [Albuquerque, NM

    2009-04-28

    A passive voltage contrast (PVC) system and method are disclosed for analyzing ICs to locate defects and failure mechanisms. During analysis, a device side of a semiconductor die containing the IC is maintained in an electrically floating condition without any ground electrical connection while a charged particle beam is scanned over the device side. Secondary particle emission from the device side of the IC is detected to form an image of device features, including electrical vias connected to transistor gates or to other structures in the IC. A difference in image contrast allows the defects or failure mechanisms to be pinpointed. Varying the scan rate can, in some instances, produce an image reversal to facilitate precisely locating the defects or failure mechanisms in the IC. The system and method are useful for failure analysis of ICs formed on substrates (e.g. bulk semiconductor substrates and SOI substrates) and other types of structures.

  12. Preparation of calibrated test packages for particle impact noise detection

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A standard calibration method for any particle impact noise detection (PIND) test system used to detect loose particles responsible for failures in hybrid circuits was developed along with a procedure for preparing PIND standard test devices. Hybrid packages were seeded with a single gold ball, hermetically sealed, leak tested, and PIND tested. Conclusions are presented.

  13. Design of analytical failure detection using secondary observers

    NASA Technical Reports Server (NTRS)

    Sisar, M.

    1982-01-01

    The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer-error vector from the observer output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence for the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.

  14. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text.

    PubMed

    Park, Albert; Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-08-31

    The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. The primary objective of this study is to explore an alternative approach-using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap's commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. We used automated methods to detect almost half of 383,572 MetaMap's mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies to process patient-generated text.

  15. Predicting, examining, and evaluating FAC in US power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohn, M.J.; Garud, Y.S.; Raad, J. de

    1999-11-01

    There have been many pipe failures in fossil and nuclear power plant piping systems caused by flow-accelerated corrosion (FAC). In some piping systems, this failure mechanism may be the most important type of damage to mitigate because FAC damage has led to catastrophic failures and fatalities. Detecting the damage and mitigating the problem can significantly reduce future forced outages and increase personnel safety. This article discusses the implementation of recent developments to select FAC inspection locations, perform cost-effective examinations, evaluate results, and mitigate FAC failures. These advances include implementing the combination of software to assist in selecting examination locations and an improved pulsed eddy current technique to scan for wall thinning without removing insulation. The use of statistical evaluation methodology and possible mitigation strategies also are discussed.

  16. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Treesearch

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...

  17. A comparative study of a new wireless continuous cardiorespiratory monitor for the diagnosis and management of patients with congestive heart failure at home.

    PubMed

    Andrews, D; Gouda, M S; Higgins, S; Johnson, P; Williams, A; Vandenburg, M

    2002-01-01

    Congestive heart failure (CHF) is a major and increasing chronic disease in Western society, with a high mortality, morbidity and cost for unplanned hospital admissions. Continuous cardiorespiratory monitoring is required to detect Cheyne-Stokes respiration (CSR). We have tested a new wireless monitoring system and compared it with polysomnography (PSG) and respiratory inductance plethysmography (RIP) in six CHF patients with CSR in a sleep laboratory. The wireless system compared well with RIP for the detection of CSR but less well with PSG, which had unexpected but significant respiratory sensing errors that led to misclassification of the respiratory disorder present. The wireless system could be used to select CHF patients for better-customized treatment at home as part of a specialist-supported community telemedicine programme.

  18. An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures

    NASA Technical Reports Server (NTRS)

    Sun, Joy Z.; Joshi, Suresh M.

    2009-01-01

    The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.
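
    The multiple-model idea can be illustrated with a heavily simplified, hypothetical scalar example, shown below: one Kalman filter per actuator hypothesis (healthy versus stuck), with the hypothesis whose innovations are most likely selected. It is not the paper's aircraft model or reconfiguration logic; all numbers are invented.

```python
# Toy multiple-model fault detection: scalar Kalman filters for "healthy actuator"
# and "actuator stuck at zero", compared by fading-memory innovation log-likelihood.
import math
import random

random.seed(3)
Q, R = 0.01, 0.04          # process and measurement noise variances (assumed)

def kf_step(x, P, u, z, b):
    # Predict with hypothesised input effectiveness b, then update with measurement z.
    x_pred, P_pred = x + b * u, P + Q
    innov, S = z - x_pred, P_pred + R
    K = P_pred / S
    x_new, P_new = x_pred + K * innov, (1 - K) * P_pred
    log_like = -0.5 * (math.log(2 * math.pi * S) + innov ** 2 / S)
    return x_new, P_new, log_like

truth = 0.0
filters = {"healthy": [0.0, 1.0, 0.0], "stuck": [0.0, 1.0, 0.0]}   # state, covariance, score
for k in range(60):
    u = 0.3
    b_true = 1.0 if k < 30 else 0.0                 # actuator sticks halfway through
    truth += b_true * u + random.gauss(0, math.sqrt(Q))
    z = truth + random.gauss(0, math.sqrt(R))
    for name, b in (("healthy", 1.0), ("stuck", 0.0)):
        x, P, score = filters[name]
        x, P, log_like = kf_step(x, P, u, z, b)
        filters[name] = [x, P, 0.95 * score + log_like]   # fading-memory log-likelihood

best = max(filters, key=lambda n: filters[n][2])
print("most likely model:", best)                   # expected to report 'stuck'
```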

  19. A knowledge-based system for monitoring the electrical power system of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Eddy, Pat

    1987-01-01

    The design and the prototype for the expert system for the Hubble Space Telescope's electrical power system are discussed. This prototype demonstrated the capability to use real time data from a 32k telemetry stream and to perform operational health and safety status monitoring, detect trends such as battery degradation, and detect anomalies such as solar array failures. This prototype, along with the pointing control system and data management system expert systems, forms the initial Telemetry Analysis for Lockheed Operated Spacecraft (TALOS) capability.

  20. A quantification of the effectiveness of EPID dosimetry and software-based plan verification systems in detecting incidents in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bojechko, Casey; Phillips, Mark; Kalet, Alan

    Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a “defense in depth” system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4-point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into different failure modes. The detectability, defined as the number of incidents that are detectable divided by the total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents, 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complementary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurements during the first fraction and is complemented by rules-based and Bayesian network plan checking.
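
    The detectability metric itself is simple arithmetic, sketched below per failure mode and overall; the per-mode counts are invented for illustration and are not the study's data.

```python
# Detectability = detectable incidents / total incidents, per failure mode and overall.
# All counts below are hypothetical placeholders.
incidents = {
    # failure mode: (detectable by the combined checks, total)
    "patient positioning": (74, 100),
    "prescription/documentation": (40, 44),
    "unrelated to photon EBRT": (30, 35),
}

def detectability(detected, total):
    return detected / total

for mode, (detected, total) in incidents.items():
    print(f"{mode:28s} {detectability(detected, total):.0%}")

all_detected = sum(d for d, _ in incidents.values())
all_total = sum(t for _, t in incidents.values())
print(f"{'overall':28s} {detectability(all_detected, all_total):.0%}")
```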

  1. Modular space vehicle boards, control software, reprogramming, and failure recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin

    A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure.

  2. Distributed Health Monitoring System for Reusable Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Lin, C. F.; Figueroa, F.; Politopoulos, T.; Oonk, S.

    2009-01-01

    The ability to correctly detect and identify any possible failure in the systems, subsystems, or sensors within a reusable liquid rocket engine is a major goal at NASA John C. Stennis Space Center (SSC). A health management (HM) system is required to provide an on-ground operation crew with an integrated awareness of the condition of every element of interest by determining anomalies, examining their causes, and making predictive statements. However, the complexity associated with relevant systems, and the large amount of data typically necessary for proper interpretation and analysis, presents difficulties in implementing complete failure detection, identification, and prognostics (FDI&P). As such, this paper presents a Distributed Health Monitoring System for Reusable Liquid Rocket Engines as a solution to these problems through the use of highly intelligent algorithms for real-time FDI&P, and efficient and embedded processing at multiple levels. The end result is the ability to successfully incorporate a comprehensive HM platform despite the complexity of the systems under consideration.

  3. Underestimated prevalence of heart failure in hospital inpatients: a comparison of ICD codes and discharge letter information.

    PubMed

    Kaspar, Mathias; Fette, Georg; Güder, Gülmisal; Seidlmayer, Lea; Ertl, Maximilian; Dietrich, Georg; Greger, Helmut; Puppe, Frank; Störk, Stefan

    2018-04-17

    Heart failure is the predominant cause of hospitalization and amongst the leading causes of death in Germany. However, accurate estimates of prevalence and incidence are lacking. Reported figures originating from different information sources are compromised by factors like economic reasons or documentation quality. We implemented a clinical data warehouse that integrates various information sources (structured parameters, plain text, data extracted by natural language processing) and enables reliable approximations to the real number of heart failure patients. Performance of ICD-based diagnosis in detecting heart failure was compared across the years 2000-2015 with (a) advanced definitions based on algorithms that integrate various sources of the hospital information system, and (b) a physician-based reference standard. Applying these methods for detecting heart failure in inpatients revealed that relying on ICD codes resulted in a marked underestimation of the true prevalence of heart failure, ranging from 44% in the validation dataset to 55% (single year) and 31% (all years) in the overall analysis. Percentages changed over the years, indicating secular changes in coding practice and efficiency. Performance was markedly improved using search and permutation algorithms from the initial expert-specified query (F1 score of 81%) to the computer-optimized query (F1 score of 86%) or, alternatively, optimizing precision or sensitivity depending on the search objective. Estimating prevalence of heart failure using ICD codes as the sole data source yielded unreliable results. Diagnostic accuracy was markedly improved using dedicated search algorithms. Our approach may be transferred to other hospital information systems.
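
    The F1 scores reported above combine precision and recall of each detection query against the physician-based reference standard. A minimal sketch of that arithmetic, with invented counts rather than the study data:

```python
# Illustrative precision/recall/F1 arithmetic against a physician-based
# reference standard; the counts below are invented, not the study's data.
def f1_metrics(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = f1_metrics(true_pos=430, false_pos=70, false_neg=90)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```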

  4. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    The theory of System Health Management (SHM) and of its operational subset Fault Management (FM) states that FM is implemented as a "meta" control loop, known as an FM Control Loop (FMCL). The FMCL detects that all or part of a system is now failed, or in the future will fail (that is, cannot be controlled within acceptable limits to achieve its objectives), and takes a control action (a response) to return the system to a controllable state. In terms of control theory, the effectiveness of each FMCL is estimated based on its ability to correctly estimate the system state, and on the speed of its response to the current or impending failure effects. This paper describes how this theory has been successfully applied on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) Program to quantitatively estimate the effectiveness of proposed abort triggers so as to select the most effective suite to protect the astronauts from catastrophic failure of the SLS. The premise behind this process is to be able to quantitatively provide the value versus risk trade-off for any given abort trigger, allowing decision makers to make more informed decisions. All current and planned crewed launch vehicles have some form of vehicle health management system integrated with an emergency launch abort system to ensure crew safety. While the design can vary, the underlying principle is the same: detect imminent catastrophic vehicle failure, initiate launch abort, and extract the crew to safety. Abort triggers are the detection mechanisms that identify that a catastrophic launch vehicle failure is occurring or is imminent and cause the initiation of a notification to the crew vehicle that the escape system must be activated. While ensuring that the abort triggers provide this function, designers must also ensure that the abort triggers do not signal that a catastrophic failure is imminent when in fact the launch vehicle can successfully achieve orbit. That is, the abort triggers must have low false negative rates to be sure that real crew-threatening failures are detected, and also low false positive rates to ensure that the crew does not abort from non-crew-threatening launch vehicle behaviors. The analysis process described in this paper is a compilation of over six years of lessons learned and refinements from experiences developing abort triggers for NASA's Constellation Program (Ares I Project) and the SLS Program, as well as the simultaneous development of SHM/FM theory. The paper will describe the abort analysis concepts and process, developed in conjunction with SLS Safety and Mission Assurance (S&MA) to define a common set of mission phase, failure scenario, and Loss of Mission Environment (LOME) combinations upon which the SLS Loss of Mission (LOM) Probabilistic Risk Assessment (PRA) models are built. This abort analysis also requires strong coordination with the Multi-Purpose Crew Vehicle (MPCV) and SLS Structures and Environments (STE) to formulate a series of abortability tables that encapsulate explosion dynamics over the ascent mission phase. The design and assessment of abort conditions and triggers to estimate their Loss of Crew (LOC) Benefits also requires in-depth integration with other groups, including Avionics, Guidance, Navigation and Control(GN&C), the Crew Office, Mission Operations, and Ground Systems. The outputs of this analysis are a critical input to SLS S&MA's LOC PRA models. 
The process described here may well be the first full quantitative application of SHM/FM theory to the selection of a sensor suite for any aerospace system.
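
    The low-false-negative/low-false-positive requirement described above can be made concrete by tallying outcomes over simulated flights; the sketch below uses hypothetical labels and counts, not SLS analysis products:

```python
# Toy tally of abort-trigger false negative and false positive rates over
# simulated flights; the outcome labels and counts are hypothetical, not SLS data.
def trigger_rates(cases):
    """cases: list of (catastrophic_failure_occurred, trigger_fired) pairs."""
    failures = [fired for occurred, fired in cases if occurred]
    nominal = [fired for occurred, fired in cases if not occurred]
    fn_rate = sum(1 for fired in failures if not fired) / len(failures)
    fp_rate = sum(1 for fired in nominal if fired) / len(nominal)
    return fn_rate, fp_rate

cases = ([(True, True)] * 98 + [(True, False)] * 2
         + [(False, False)] * 895 + [(False, True)] * 5)
fn, fp = trigger_rates(cases)
print(f"false negative rate = {fn:.1%}, false positive rate = {fp:.1%}")
```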

  5. SSME leak detection feasibility investigation by utilization of infrared sensor technology

    NASA Technical Reports Server (NTRS)

    Shohadaee, Ahmad A.; Crawford, Roger A.

    1990-01-01

    This investigation examined the potential of using state-of-the-art infrared (IR) thermal imaging systems combined with computers, digital image processing, and expert systems for Space Shuttle Main Engine (SSME) propellant path leak detection as an early warning of imminent engine failure. A low-cost laboratory experiment was devised and an experimental approach was established. The system was installed, checked out, and data were successfully acquired, demonstrating the proof of concept. The conclusion from this investigation is that both numerical and experimental results indicate that leak detection using infrared sensor technology is feasible for a rocket engine health monitoring system.

  6. Analysis of arrhythmic events is useful to detect lead failure earlier in patients followed by remote monitoring.

    PubMed

    Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi

    2018-03-01

    Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM has allowed the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure was often identified only by arrhythmic events, but not impedance abnormalities. To compare the usefulness of arrhythmic events with conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center in Okayama University Hospital. All transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients have been followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, which were not detected by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic event 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none has experienced inappropriate therapy. RM can detect lead failure earlier, before clinical adverse events. However, CIEDs often diagnose lead failure as just arrhythmic events without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.

  7. 40 CFR 65.112 - Standards: Compressors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... fuel gas system, or connected by a closed vent system to a control device that meets the requirements... barrier fluid system shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be observed daily or shall be equipped with an alarm unless the...

  8. 40 CFR 63.1031 - Compressors standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... gas system or connected by a closed-vent system to a control device that meets the requirements of... service. Each barrier fluid system shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be observed daily or shall be equipped with an...

  9. 40 CFR 65.112 - Standards: Compressors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... fuel gas system, or connected by a closed vent system to a control device that meets the requirements... barrier fluid system shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be observed daily or shall be equipped with an alarm unless the...

  10. 40 CFR 63.1031 - Compressors standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... gas system or connected by a closed-vent system to a control device that meets the requirements of... service. Each barrier fluid system shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. Each sensor shall be observed daily or shall be equipped with an...

  11. Orion GN&C Fault Management System Verification: Scope And Methodology

    NASA Technical Reports Server (NTRS)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.

  12. Aircraft Abnormal Conditions Detection, Identification, and Evaluation Using Innate and Adaptive Immune Systems Interaction

    NASA Astrophysics Data System (ADS)

    Al Azzawi, Dia

    Abnormal flight conditions play a major role in aircraft accidents frequently causing loss of control. To ensure aircraft operation safety in all situations, intelligent system monitoring and adaptation must rely on accurately detecting the presence of abnormal conditions as soon as they take place, identifying their root cause(s), estimating their nature and severity, and predicting their impact on the flight envelope. Due to the complexity and multidimensionality of the aircraft system under abnormal conditions, these requirements are extremely difficult to satisfy using existing analytical and/or statistical approaches. Moreover, current methodologies have addressed only isolated classes of abnormal conditions and a reduced number of aircraft dynamic parameters within a limited region of the flight envelope. This research effort aims at developing an integrated and comprehensive framework for the aircraft abnormal conditions detection, identification, and evaluation based on the artificial immune systems paradigm, which has the capability to address the complexity and multidimensionality issues related to aircraft systems. Within the proposed framework, a novel algorithm was developed for the abnormal conditions detection problem and extended to the abnormal conditions identification and evaluation. The algorithm and its extensions were inspired from the functionality of the biological dendritic cells (an important part of the innate immune system) and their interaction with the different components of the adaptive immune system. Immunity-based methodologies for re-assessing the flight envelope at post-failure and predicting the impact of the abnormal conditions on the performance and handling qualities are also proposed and investigated in this study. The generality of the approach makes it applicable to any system. Data for artificial immune system development were collected from flight tests of a supersonic research aircraft within a motion-based flight simulator. The abnormal conditions considered in this work include locked actuators (stabilator, aileron, rudder, and throttle), structural damage of the wing, horizontal tail, and vertical tail, malfunctioning sensors, and reduced engine effectiveness. The results of applying the proposed approach to this wide range of abnormal conditions show its high capability in detecting the abnormal conditions with zero false alarms and very high detection rates, correctly identifying the failed subsystem and evaluating the type and severity of the failure. The results also reveal that the post-failure flight envelope can be reasonably predicted within this framework.

  13. A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution

    NASA Astrophysics Data System (ADS)

    Musani, Aatif

    The subject of this thesis is distribution level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a specialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled; therefore, all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy consumption error. This leads to one of the main focuses of the thesis, which is to identify and study component failures of the designed control system. Moreover, an H-infinity robust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attack, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes, and the application of the robust control technique. The EMS was represented by a linear z-domain model. The transfer function between the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection methodologies were implemented and a voting policy was used to declare bad data. The running mean and standard deviation analysis method proves to be the best method to detect bad data. An H-infinity robust control technique was applied for the first time to design a discrete EMS controller for the FREEDM system.
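
    The thesis abstract credits a running mean and standard deviation test, used alongside other detectors with a voting policy, as the best-performing bad data check for the price signal. A minimal sketch of such a detector, where the window length and threshold are assumptions rather than values from the thesis:

```python
# Sketch of a running mean / standard deviation bad-data detector for a price
# signal, one of several detectors feeding a voting policy; the window length
# and threshold below are assumptions, not values from the thesis.
from collections import deque
from statistics import mean, stdev

def detect_bad_prices(prices, window=24, k=3.0):
    """Yield (index, price) pairs flagged as bad data."""
    history = deque(maxlen=window)
    for i, price in enumerate(prices):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(price - mu) > k * sigma:
                yield i, price
                continue               # keep bad data out of the running window
        history.append(price)

prices = [10.0 + 0.1 * (i % 5) for i in range(30)] + [250.0] + [10.2] * 10
print(list(detect_bad_prices(prices)))   # [(30, 250.0)]
```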

  14. Failure of BACTEC™ MGIT 960™ to detect Mycobacterium tuberculosis complex within a 42-day incubation period.

    PubMed

    Mahomed, Sharana; Dlamini-Mvelase, Nomonde R; Dlamini, Moses; Mlisana, Koleka

    2017-01-01

    For the optimal recovery of Mycobacterium tuberculosis from the BACTEC™ Mycobacterium Growth Indicator Tube 960™ system, an incubation period of 42-56 days is recommended by the manufacturer. Due to logistical reasons, it is common practice to follow an incubation period of 42 days. We undertook a retrospective study to document positive Mycobacterium Growth Indicator Tube cultures beyond the 42-day incubation period. In total, 98/110 (89%) were positive for M. tuberculosis complex. This alerted us to M. tuberculosis growth detection failure at 42 days.

  15. Current Status of Hybrid Bearing Damage Detection

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Certo, Joseph M.; Morales, Wilfredo

    2004-01-01

    Advances in material development and processing have led to the introduction of ceramic hybrid bearings for many applications. The introduction of silicon nitride hybrid bearings into the high pressure oxidizer turbopump, on the space shuttle main engine, led NASA to solve a highly persistent and troublesome bearing problem. Hybrid bearings consist of ceramic balls and steel races. The majority of hybrid bearings utilize Si3N4 balls. The aerospace industry is currently studying the use of hybrid bearings, and naturally the failure modes of these bearings become an issue in light of the limited data available. In today's turbine engines and helicopter transmissions, the health of the bearings is detected by the properties of the debris found in the lubrication line when damage begins to occur. Current oil debris sensor technology relies on the magnetic properties of the debris to detect damage. Since the ceramic rolling elements of hybrid bearings have no metallic properties, a new sensing system must be developed to indicate the system health if ceramic components are to be safely implemented in aerospace applications. The ceramic oil debris sensor must be capable of detecting ceramic and metallic component damage with sufficient reliability and forewarning to prevent a catastrophic failure. The objective of this research is to provide a background summary on what is currently known about hybrid bearing failure modes and to report preliminary results on the detection of silicon nitride debris, in oil, using a commercial particle counter.

  16. Bearing failure detection of micro wind turbine via power spectral density analysis for stator current signals spectrum

    NASA Astrophysics Data System (ADS)

    Mahmood, Faleh H.; Kadhim, Hussein T.; Resen, Ali K.; Shaban, Auday H.

    2018-05-01

    Failures such as air-gap irregularity, rubbing, and scraping between the stator and rotor of a generator arise unavoidably and may have severe consequences for a wind turbine. Therefore, more attention should be paid to detecting and identifying bearing failures in wind turbines to improve operational reliability. The current paper uses a power spectral density analysis method to detect inner-race and outer-race bearing failures in a micro wind turbine by analyzing the stator current signal of the generator. The results show that the method is well suited and effective for bearing failure detection.
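
    A minimal sketch of the kind of analysis described, estimating the power spectral density of a stator current signal with Welch's method and inspecting a band around an assumed bearing characteristic frequency; the signal and frequencies are synthetic, not turbine data:

```python
# Sketch of PSD-based inspection of a generator stator current (synthetic data);
# the 87 Hz "bearing fault frequency" is an assumed illustrative value.
import numpy as np
from scipy.signal import welch

fs = 2000.0                                    # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
current = np.sin(2 * np.pi * 50 * t)           # 50 Hz supply component
current += 0.05 * np.sin(2 * np.pi * 87 * t)   # weak fault-related component
current += 0.02 * np.random.randn(t.size)      # measurement noise

freqs, psd = welch(current, fs=fs, nperseg=4096)

band = (freqs > 80) & (freqs < 95)             # band around the assumed fault frequency
peak = freqs[band][np.argmax(psd[band])]
print(f"strongest component in fault band: {peak:.1f} Hz")
```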

  17. Engine Icing Modeling and Simulation (Part 2): Performance Simulation of Engine Rollback Phenomena

    NASA Technical Reports Server (NTRS)

    May, Ryan D.; Guo, Ten-Huei; Veres, Joseph P.; Jorgenson, Philip C. E.

    2011-01-01

    Ice buildup in the compressor section of a commercial aircraft gas turbine engine can cause a number of engine failures. One of these failure modes is known as engine rollback: an uncommanded decrease in thrust accompanied by a decrease in fan speed and an increase in turbine temperature. This paper describes the development of a model which simulates the system level impact of engine icing using the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). When an ice blockage is added to C-MAPSS40k, the control system responds in a manner similar to that of an actual engine, and, in cases with severe blockage, an engine rollback is observed. Using this capability to simulate engine rollback, a proof-of-concept detection scheme is developed and tested using only typical engine sensors. This paper concludes that the engine control system's limit protection is the proximate cause of iced engine rollback and that the controller can detect the buildup of ice particles in the compressor section. This work serves as a feasibility study for continued research into the detection and mitigation of engine rollback using the propulsion control system.

  18. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.

  19. SU-E-T-421: Failure Mode and Effects Analysis (FMEA) of Xoft Electronic Brachytherapy for the Treatment of Superficial Skin Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoisak, J; Manger, R; Dragojevic, I

    Purpose: To perform a failure mode and effects analysis (FMEA) of the process for treating superficial skin cancers with the Xoft Axxent electronic brachytherapy (eBx) system, given the recent introduction of expanded quality control (QC) initiatives at our institution. Methods: A process map was developed listing all steps in superficial treatments with Xoft eBx, from the initial patient consult to the completion of the treatment course. The process map guided the FMEA to identify the failure modes for each step in the treatment workflow and assign Risk Priority Numbers (RPN), calculated as the product of the failure mode’s probability of occurrence (O), severity (S) and lack of detectability (D). FMEA was done with and without the inclusion of recent QC initiatives such as increased staffing, physics oversight, standardized source calibration, treatment planning and documentation. The failure modes with the highest RPNs were identified and contrasted before and after introduction of the QC initiatives. Results: Based on the FMEA, the failure modes with the highest RPN were related to source calibration, treatment planning, and patient setup/treatment delivery (Fig. 1). The introduction of additional physics oversight, standardized planning and safety initiatives such as checklists and time-outs reduced the RPNs of these failure modes. High-risk failure modes that could be mitigated with improved hardware and software interlocks were identified. Conclusion: The FMEA analysis identified the steps in the treatment process presenting the highest risk. The introduction of enhanced QC initiatives mitigated the risk of some of these failure modes by decreasing their probability of occurrence and increasing their detectability. This analysis demonstrates the importance of well-designed QC policies, procedures and oversight in a Xoft eBx programme for treatment of superficial skin cancers. Unresolved high risk failure modes highlight the need for non-procedural quality initiatives such as improved planning software and more robust hardware interlock systems.
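
    The RPN scoring used in this FMEA is the product of occurrence, severity, and lack-of-detectability scores, each on a 1-10 scale, with values above a chosen threshold flagged for action. A small illustrative sketch of that calculation with made-up failure modes and scores, not those of the study:

```python
# Illustrative FMEA scoring: RPN = occurrence (O) x severity (S) x detectability (D),
# each scored 1-10, reviewed when RPN > 150. Failure modes and scores are made up.
failure_modes = [
    ("source calibration error",       {"O": 4, "S": 9, "D": 6}),
    ("wrong prescription transcribed", {"O": 3, "S": 8, "D": 4}),
    ("applicator placement shift",     {"O": 5, "S": 7, "D": 7}),
]

def rpn(scores):
    return scores["O"] * scores["S"] * scores["D"]

THRESHOLD = 150
for name, scores in sorted(failure_modes, key=lambda m: -rpn(m[1])):
    flag = "REVIEW" if rpn(scores) > THRESHOLD else "ok"
    print(f"{name:32s} RPN={rpn(scores):4d} {flag}")
```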

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Aiman; Laguna, Ignacio; Sato, Kento

    Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.

  1. Understanding and managing the effects of battery charger and inverter aging

    NASA Astrophysics Data System (ADS)

    Gunther, W.; Aggarwal, S.

    An aging assessment of battery chargers and inverters was conducted under the auspices of the NRC's Nuclear Plant Aging Research (NPAR) Program. The intentions of this program are to resolve issues related to the aging and service wear of equipment and systems at operating reactor facilities and to assess their impact on safety. Inverters and battery chargers are used in nuclear power plants to perform significant functions related to plant safety and availability. The specific impact of a battery charger or inverter failure varies with plant configuration. Operating experience data have demonstrated that reactor trips, safety injection system actuations, and inoperable emergency core cooling systems have resulted from inverter failures, and that dc bus degradation leading to diesel generator inoperability or loss of control room annunciation and indication has resulted from battery and battery charger failures. For the battery charger and inverter, the aging and service wear of subcomponents have contributed significantly to equipment failures. This paper summarizes the data and then describes methods that can be used to detect battery charger and inverter degradation prior to failure, as well as methods to minimize the failure effects. In both cases, the managing of battery charger and inverter aging is emphasized.

  2. Tribology symposium 1995. PD-Volume 72

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masudi, H.

    After the keynote presentation by Professor Aaron Cohen of Texas A and M University, entitled Processes Used in Design, the program is divided into five major sessions: Research and Development -- Recent research and development of tribological components; Tribology in Manufacturing -- The impact of tribology on modern manufacturing; Design/Design Representation -- Aspects of design related to tribological systems; Tribo-Chemistry/Tribo-Physics -- Discussion of chemical and physical behavior of substances as related to tribology; and Failure Analysis -- An analysis of failure, failure detection, and failure monitoring as related to manufacturing processes. Papers have been processed separately for inclusion in the database.

  3. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system comprises an MRI with the capability of travelling between and into HDR brachytherapy and external beam radiation therapy vaults. The system will provide on-line MR images immediately prior to radiation therapy. The MR images will be registered to a planning image and used for image guidance. To ensure system safety, we performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree and an initial design of the facility as guidelines, possible failure modes were identified, and root causes were identified for each failure mode. Severity, detectability, and occurrence scores were assigned to each possible failure. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs, each comprising 5-10 sub-inputs; tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning, and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.

  4. Fluorimetric Detection of a Bacillus stearothermophilus Spore-Bound Enzyme, α-d-Glucosidase, for Rapid Indication of Flash Sterilization Failure

    PubMed Central

    Vesley, Donald; Langholz, Ann C.; Rohlfing, Stephen R.; Foltz, William E.

    1992-01-01

    A biological indicator based on fluorimetric detection within 60 min of a Bacillus stearothermophilus spore-bound enzyme, α-d-glucosidase, has been developed. Results indicate that the enzyme survived slightly longer than spores observed after 24 h of incubation. The new system shows promise for evaluating flash sterilization cycles within 60 min compared with conventional 24-h systems. PMID:16348654

  5. A Robust Damage-Reporting Strategy for Polymeric Materials Enabled by Aggregation-Induced Emission.

    PubMed

    Robb, Maxwell J; Li, Wenle; Gergely, Ryan C R; Matthews, Christopher C; White, Scott R; Sottos, Nancy R; Moore, Jeffrey S

    2016-09-28

    Microscopic damage inevitably leads to failure in polymers and composite materials, but it is difficult to detect without the aid of specialized equipment. The ability to enhance the detection of small-scale damage prior to catastrophic material failure is important for improving the safety and reliability of critical engineering components, while simultaneously reducing life cycle costs associated with regular maintenance and inspection. Here, we demonstrate a simple, robust, and sensitive fluorescence-based approach for autonomous detection of damage in polymeric materials and composites enabled by aggregation-induced emission (AIE). This simple, yet powerful system relies on a single active component, and the general mechanism delivers outstanding performance in a wide variety of materials with diverse chemical and mechanical properties.

  6. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Test act system validation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of the Test Active Control Technology (ACT) System laboratory tests was to verify and validate the system concept, hardware, and software. The initial lab tests were open loop hardware tests of the Test ACT System as designed and built. During the course of the testing, minor problems were uncovered and corrected. Major software tests were run. The initial software testing was also open loop. These tests examined pitch control laws, wing load alleviation, signal selection/fault detection (SSFD), and output management. The Test ACT System was modified to interface with the direct drive valve (DDV) modules. The initial testing identified problem areas with DDV nonlinearities, valve friction induced limit cycling, DDV control loop instability, and channel command mismatch. The other DDV issue investigated was the ability to detect and isolate failures. Some simple schemes for failure detection were tested but were not completely satisfactory. The Test ACT System architecture continues to appear promising for ACT/FBW applications in systems that must be immune to worst case generic digital faults, and be able to tolerate two sequential nongeneric faults with no reduction in performance. The challenge in such an implementation would be to keep the analog element sufficiently simple to achieve the necessary reliability.

  7. 40 CFR 61.242-3 - Standards: Compressors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... barrier fluid system degassing reservoir that is routed to a process or fuel gas system or connected by a... paragraphs (a)-(c) of this section shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section...

  8. 40 CFR 61.242-3 - Standards: Compressors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... barrier fluid system degassing reservoir that is routed to a process or fuel gas system or connected by a... paragraphs (a)-(c) of this section shall be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section...

  9. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  10. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  11. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  12. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  13. 33 CFR 117.743 - Rahway River.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... lights anytime the bridge is not in the full open position. (d) An infrared sensor system shall be... the infrared sensor system. (g) If the infrared sensors detect a vessel or other obstruction.... (j) In the event of a failure, or obstruction to the infrared sensor system, the bridge shall...

  14. More About Software for No-Loss Computing

    NASA Technical Reports Server (NTRS)

    Edmonds, Iarina

    2007-01-01

    A document presents some additional information on the subject matter of "Integrated Hardware and Software for No-Loss Computing" (NPO-42554), which appears elsewhere in this issue of NASA Tech Briefs. To recapitulate: The hardware and software designs of a developmental parallel computing system are integrated to effectuate a concept of no-loss computing (NLC). The system is designed to reconfigure an application program such that it can be monitored in real time and further reconfigured to continue a computation in the event of failure of one of the computers. The design provides for (1) a distributed class of NLC computation agents, denoted introspection agents, that effects hierarchical detection of anomalies; (2) enhancement of the compiler of the parallel computing system to cause generation of state vectors that can be used to continue a computation in the event of a failure; and (3) activation of a recovery component when an anomaly is detected.

  15. Guest Editor's Introduction: Special section on dependable distributed systems

    NASA Astrophysics Data System (ADS)

    Fetzer, Christof

    1999-09-01

    We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. 
However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining a sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system. 
    References
    [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of the ACM 34 (2) 56-78
    [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI
    [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
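
    In practice, the liveness/safety trade-off discussed above for the group membership detection service comes down to tuning a suspicion timeout: a longer timeout makes it less likely that a slow but live process is wrongly suspected, while a crashed process is still eventually suspected once its heartbeats stop. A minimal heartbeat-based sketch of that idea, offered as an illustration and not as the protocol of Raynal and Tronel:

```python
# Minimal heartbeat/timeout failure detector illustrating the liveness (L) vs.
# safety (S) trade-off; an illustrative sketch, not Raynal and Tronel's protocol.
import time

class HeartbeatDetector:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s     # larger timeout -> fewer wrongful suspicions
        self.last_heartbeat = {}       # process id -> time of last heartbeat

    def record_heartbeat(self, pid):
        self.last_heartbeat[pid] = time.monotonic()

    def suspected(self):
        """Processes whose heartbeats are older than the timeout."""
        now = time.monotonic()
        return {pid for pid, t in self.last_heartbeat.items()
                if now - t > self.timeout_s}

detector = HeartbeatDetector(timeout_s=2.0)
detector.record_heartbeat("p1")
detector.record_heartbeat("p2")
time.sleep(0.1)
print(detector.suspected())   # empty now; "p1" appears only if its heartbeats stop
```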

  16. Testing and failure analysis to improve screening techniques for hermetically sealed metallized film capacitors for low energy applications

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Effective screening techniques are evaluated for detecting insulation resistance degradation and failure in hermetically sealed metallized film capacitors used in applications where low capacitor voltage and energy levels are common to the circuitry. A special test and monitoring system capable of rapidly scanning all test capacitors and recording faults and/or failures is examined. Tests include temperature cycling and storage as well as low, medium, and high voltage life tests. Polysulfone film capacitors are more heat stable and reliable than polycarbonate film units.

  17. Statin-associated muscular and renal adverse events: data mining of the public version of the FDA adverse event reporting system.

    PubMed

    Sakaeda, Toshiyuki; Kadoyama, Kaori; Okuno, Yasushi

    2011-01-01

    Adverse event reports (AERs) submitted to the US Food and Drug Administration (FDA) were reviewed to assess the muscular and renal adverse events induced by the administration of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (statins) and to attempt to determine the rank-order of the association. After a revision of arbitrary drug names and the deletion of duplicated submissions, AERs involving pravastatin, simvastatin, atorvastatin, or rosuvastatin were analyzed. Authorized pharmacovigilance tools were used for quantitative detection of signals, i.e., drug-associated adverse events, including the proportional reporting ratio, the reporting odds ratio, the information component given by a Bayesian confidence propagation neural network, and the empirical Bayes geometric mean. Myalgia, rhabdomyolysis and an increase in creatine phosphokinase level were focused on as the muscular adverse events, and acute renal failure, non-acute renal failure, and an increase in blood creatinine level as the renal adverse events. Based on 1,644,220 AERs from 2004 to 2009, signals were detected for 4 statins with respect to myalgia, rhabdomyolysis, and an increase in creatine phosphokinase level, but these signals were stronger for rosuvastatin than pravastatin and atorvastatin. Signals were also detected for acute renal failure, though in the case of atorvastatin, the association was marginal, and furthermore, a signal was not detected for non-acute renal failure or for an increase in blood creatinine level. Data mining of the FDA's adverse event reporting system, AERS, is useful for examining statin-associated muscular and renal adverse events. The data strongly suggest the necessity of well-organized clinical studies with respect to statin-associated adverse events.
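
    As an illustration of two of the disproportionality measures named above, the sketch below computes the proportional reporting ratio (PRR) and reporting odds ratio (ROR) from a 2x2 contingency table of report counts; the counts are invented, not AERS data:

```python
# Sketch of two disproportionality measures used for pharmacovigilance signal
# detection; the 2x2 report counts below are invented, not taken from AERS.
def prr_and_ror(a, b, c, d):
    """
    a: reports with the drug and the event    b: with the drug, other events
    c: other drugs with the event             d: other drugs, other events
    """
    prr = (a / (a + b)) / (c / (c + d))   # proportional reporting ratio
    ror = (a * d) / (b * c)               # reporting odds ratio
    return prr, ror

prr, ror = prr_and_ror(a=120, b=9880, c=400, d=199600)
print(f"PRR = {prr:.1f}, ROR = {ror:.1f}")
```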

  18. Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster.

    PubMed

    Fan, Hangyu; Wang, Huandong; Li, Yong

    2018-01-23

    Decentralized clusters are widely adopted across many fields of modern information technology. One of the main reasons is their high availability and fault tolerance, which can prevent the entire system from breaking down because of a single point of failure. Recently, toolkits such as Akka have been commonly used to build such clusters easily. However, clusters that use Gossip as their membership management protocol and rely on link-failure detection mechanisms cannot handle the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (NP-complete) in the connectivity graph. We then propose an algorithm consisting of two models, driven by application-layer data, to solve these two problems. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well.
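
    The formulation above reduces healthy-node identification to evaluating link quality and then finding a maximum clique in the resulting connectivity graph. Since maximum clique is NP-complete, only small instances can be solved exactly; the brute-force sketch below uses a hypothetical adjacency set and is not the paper's algorithm:

```python
# Brute-force maximum clique over a small connectivity graph of "healthy" links.
# Exponential in the number of nodes; illustrative only, not the paper's method.
from itertools import combinations

def max_clique(nodes, edges):
    """edges: set of frozenset({u, v}) pairs judged healthy."""
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if all(frozenset((u, v)) in edges for u, v in combinations(subset, 2)):
                return set(subset)
    return set()

nodes = ["A", "B", "C", "D"]
edges = {frozenset(p) for p in [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]}
print(max_clique(nodes, edges))   # {'A', 'B', 'C'} -- D's flaky links exclude it
```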

  19. 10th Annual Systems Engineering Conference: Volume 2 Wednesday

    DTIC Science & Technology

    2007-10-25

    Self-Healing: detect hardware/software failures and reconfigure to permit continued operations. Self-Protecting: detect internal/external attacks and protect resources from exploitation. Self-Optimizing: detect sub-optimal behaviors and intelligently optimize resource performance.

  20. Joint University Program for Air Transportation Research, 1990-1991

    NASA Technical Reports Server (NTRS)

    Morrell, Frederick R. (Compiler)

    1991-01-01

    The goals of this program are consistent with the interests of both NASA and the FAA in furthering the safety and efficiency of the National Airspace System. Research carried out at the Massachusetts Institute of Technology (MIT), Ohio University, and Princeton University are covered. Topics studied include passive infrared ice detection for helicopters, the cockpit display of hazardous windshear information, fault detection and isolation for multisensor navigation systems, neural networks for aircraft system identification, and intelligent failure tolerant control.

  1. Study of fault tolerant software technology for dynamic systems

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Zacharias, G. L.

    1985-01-01

    The major aim of this study is to investigate the feasibility of using systems-based failure detection isolation and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.
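
    One concrete form of the consistency checks and error recovery discussed above is majority voting across the N independently developed versions of a module, with a tolerance for acceptable numerical disagreement. A minimal sketch, where the version outputs and tolerance are hypothetical:

```python
# Sketch of N-version majority voting with a numerical agreement tolerance;
# the three "version" outputs and the tolerance are hypothetical stand-ins.
def vote(results, tol=1e-6):
    """Return a value agreed on by a majority of versions, else raise."""
    n = len(results)
    for candidate in results:
        agreeing = [r for r in results if abs(r - candidate) <= tol]
        if len(agreeing) > n // 2:
            return sum(agreeing) / len(agreeing)   # consensus value
    raise RuntimeError("no majority agreement; trigger error recovery")

version_outputs = [1.000000, 1.0000004, 0.3]       # third version has a fault
print(vote(version_outputs))
```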

  2. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
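
    A minimal sketch of the kind of reasoning such a fault propagation digraph supports: treat a failure mode as a plausible source if every observed alarm is reachable from it along propagation edges. The graph and alarms below are hypothetical, and the sketch ignores the propagation-time weights used in the paper:

```python
# Sketch of candidate failure-source identification on a fault propagation
# digraph: a source is plausible if every observed alarm is reachable from it.
def reachable(graph, start):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

graph = {                      # failure mode -> failure modes it propagates to
    "pump_cavitation": ["low_flow"],
    "valve_stuck": ["low_flow", "high_pressure"],
    "low_flow": ["high_temp_alarm"],
    "high_pressure": ["pressure_alarm"],
}
alarms = {"high_temp_alarm", "pressure_alarm"}
candidates = [n for n in graph if alarms <= reachable(graph, n)]
print(candidates)              # ['valve_stuck'] explains both alarms
```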

  3. Human versus automation in responding to failures: an expected-value analysis

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Parasuraman, R.

    2000-01-01

    A simple analytical criterion is provided for deciding whether a human or automation is best for a failure detection task. The method is based on expected-value decision theory in much the same way as is signal detection. It requires specification of the probabilities of misses (false negatives) and false alarms (false positives) for both human and automation being considered, as well as factors independent of the choice--namely, costs and benefits of incorrect and correct decisions as well as the prior probability of failure. The method can also serve as a basis for comparing different modes of automation. Some limiting cases of application are discussed, as are some decision criteria other than expected value. Actual or potential applications include the design and evaluation of any system in which either humans or automation are being considered.
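
    A minimal sketch of the expected-value comparison described above, using illustrative (assumed) probabilities and payoffs rather than values from the paper:

```python
# Sketch of an expected-value comparison between human and automation for a
# failure detection task; the probabilities and payoffs below are illustrative.
def expected_value(p_fail, p_miss, p_false_alarm,
                   v_hit, v_miss, v_false_alarm, v_correct_reject):
    return (p_fail * ((1 - p_miss) * v_hit + p_miss * v_miss)
            + (1 - p_fail) * (p_false_alarm * v_false_alarm
                              + (1 - p_false_alarm) * v_correct_reject))

payoffs = dict(v_hit=100.0, v_miss=-10000.0, v_false_alarm=-50.0, v_correct_reject=0.0)
human = expected_value(p_fail=0.01, p_miss=0.10, p_false_alarm=0.02, **payoffs)
auto = expected_value(p_fail=0.01, p_miss=0.03, p_false_alarm=0.15, **payoffs)
print(f"EV(human)={human:.2f}  EV(automation)={auto:.2f}")
print("prefer:", "automation" if auto > human else "human")
```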

  4. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.

  5. Weld monitor and failure detector for nuclear reactor system

    DOEpatents

    Sutton, Jr., Harry G.

    1987-01-01

    Critical but inaccessible welds in a nuclear reactor system are monitored throughout the life of the reactor by providing small aperture means projecting completely through the reactor vessel wall and also through the weld or welds to be monitored. The aperture means is normally sealed from the atmosphere within the reactor. Any incipient failure or cracking of the weld will cause the environment contained within the reactor to pass into the aperture means and thence to the outer surface of the reactor vessel where its presence is readily detected.

  6. Failure detection of liquid cooled electronics in sealed packages. [in airborne information management system

    NASA Technical Reports Server (NTRS)

    Hoadley, A. W.; Porter, A. J.

    1991-01-01

    The theory and experimental verification of a method of detecting fluid-mass loss, expansion-chamber pressure loss, or excessive vapor build-up in NASA's Airborne Information Management System (AIMS) are presented. The primary purpose of this leak-detection method is to detect the fluid-mass loss before the volume of vapor on the liquid side causes a temperature-critical part to be out of the liquid. The method detects the initial leak after the first 2.5 percent of the liquid mass has been lost, and it can be used for detecting subsequent situations including the leaking of air into the liquid chamber and the subsequent vapor build-up.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mossahebi, S; Feigenberg, S; Nichols, E

    Purpose: GammaPod™, the first stereotactic radiotherapy device for early stage breast cancer treatment, has been recently installed and commissioned at our institution. A multidisciplinary working group applied the failure mode and effects analysis (FMEA) approach to perform a risk analysis. Methods: FMEA was applied to the GammaPod™ treatment process by: 1) generating process maps for each stage of treatment; 2) identifying potential failure modes and outlining their causes and effects; 3) scoring the potential failure modes using the risk priority number (RPN) system based on the product of severity, frequency of occurrence, and detectability (ranging 1–10). An RPN higher than 150 was set as the threshold above which a failure mode was considered high risk. For these high-risk failure modes, potential quality assurance procedures and risk control techniques have been proposed. A new set of severity, occurrence, and detectability values was re-assessed in the presence of the suggested mitigation strategies. Results: In the single-day image-and-treat workflow, 19, 22, and 27 sub-processes were identified for the stages of simulation, treatment planning, and delivery processes, respectively. During the simulation stage, 38 potential failure modes were found and scored, in terms of RPN, in the range of 9-392. 34 potential failure modes were analyzed in treatment planning with a score range of 16-200. For the treatment delivery stage, 47 potential failure modes were found with an RPN score range of 16-392. The most critical failure modes consisted of breast-cup pressure loss and incorrect target localization due to patient upper-body alignment inaccuracies. The final RPN scores of these failure modes, based on the recommended actions, were assessed to be below 150. Conclusion: The FMEA risk analysis technique was applied to the treatment process of GammaPod™, a new stereotactic radiotherapy technology. Application of systematic risk analysis methods is projected to lead to improved quality of GammaPod™ treatments. Ying Niu and Cedric Yu are affiliated with Xcision Medical Systems.
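
    The RPN arithmetic used above is simple enough to show directly. The sketch below scores a few failure modes and applies the 150 threshold; the first two mode names echo the abstract, while the third mode and all of the 1-10 scores are invented for illustration.

      # Failure mode and effects analysis: RPN = severity x occurrence x detectability.
      RPN_THRESHOLD = 150   # action threshold used in the abstract

      failure_modes = [
          # (description,                       severity, occurrence, detectability)
          ("breast-cup pressure loss",                 9,          4,             8),
          ("patient upper-body misalignment",          8,          3,             7),
          ("wrong fraction dose entered",              9,          2,             3),
      ]

      for name, sev, occ, det in failure_modes:
          rpn = sev * occ * det
          action = "mitigate / add QA check" if rpn > RPN_THRESHOLD else "accept"
          print(f"{name:35s} RPN = {rpn:3d} -> {action}")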

  8. Continuous ECG Monitoring in Patients With Acute Coronary Syndrome or Heart Failure: EASI Versus Gold Standard.

    PubMed

    Lancia, Loreto; Toccaceli, Andrea; Petrucci, Cristina; Romano, Silvio; Penco, Maria

    2018-05-01

    The purpose of the study was to compare the EASI system with the standard 12-lead surface electrocardiogram (ECG) for the accuracy in detecting the main electrocardiographic parameters (J point, PR, QT, and QRS) commonly monitored in patients with acute coronary syndromes or heart failure. In this observational comparative study, 253 patients who were consecutively admitted to the coronary care unit with acute coronary syndrome or heart failure were evaluated. In all patients, two complete 12-lead ECGs were acquired simultaneously. A total of 6,072 electrocardiographic leads were compared (3,036 standard and 3,036 EASI). No significant differences were found between the investigated parameters of the two measurement methods, either in patients with acute coronary syndrome or in those with heart failure. This study confirmed the accuracy of the EASI system in monitoring the main ECG parameters in patients admitted to the coronary care unit with acute coronary syndrome or heart failure.

  9. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    NASA Astrophysics Data System (ADS)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, such as phase unbalance and stator winding insulation failures, can at times lead to vibration problems, while mechanical failures such as bearing failure lead to rotor eccentricity. In this case study of a 550 kW blower motor, a rotor bar crack was detected by current signature analysis and confirmed by vibration monitoring. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed it. In both cases, the predictions were found to be accurate after the motors were dismantled. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, presented as two case studies.

  10. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  11. Failure Modes Effects and Criticality Analysis, an Underutilized Safety, Reliability, Project Management and Systems Engineering Tool

    NASA Astrophysics Data System (ADS)

    Mullin, Daniel Richard

    2013-09-01

    The majority of space programs, whether manned or unmanned, for science or exploration, require that a Failure Modes Effects and Criticality Analysis (FMECA) be performed as part of their safety and reliability activities. This comes as no surprise given that FMECAs have been an integral part of the reliability engineer's toolkit since the 1950s. The reasons for performing a FMECA are well known, including fleshing out system single point failures, system hazards, and critical components and functions. However, in the author's ten years' experience as a space systems safety and reliability engineer, findings demonstrate that the FMECA is often performed as an afterthought, simply to meet contract deliverable requirements, and is often started long after the system requirements allocation and preliminary design have been completed. There are also important qualitative and quantitative components often missing which can provide useful data to all project stakeholders. These include probability of occurrence, probability of detection, time to effect, time to detect and, finally, the Risk Priority Number. This is unfortunate, as the FMECA is a powerful system design tool that, when used effectively, can help optimize system function while minimizing the risk of failure. When performed as early as possible in conjunction with writing the top level system requirements, the FMECA can provide instant feedback on the viability of the requirements while providing a valuable sanity check early in the design process. It can indicate which areas of the system will require redundancy and which areas are inherently the most risky from the onset. Based on historical and practical examples, it is this author's contention that FMECAs are an immense source of important information for all stakeholders in a given project and can provide several benefits, including efficient project management with respect to cost and schedule, systems engineering and requirements management, assembly integration and test (AI&T), and operations, if the analysis is applied early, performed to completion, and updated along with the system design.

  12. The internal model: A study of the relative contribution of proprioception and visual information to failure detection in dynamic systems. [sensitivity of operators versus monitors to failures

    NASA Technical Reports Server (NTRS)

    Kessel, C.; Wickens, C. D.

    1978-01-01

    The development of the internal model as it pertains to the detection of step changes in the order of control dynamics is investigated for two modes of participation: whether the subjects are actively controlling those dynamics or are monitoring an autopilot controlling them. A transfer of training design was used to evaluate the relative contribution of proprioception and visual information to the overall accuracy of the internal model. Sixteen subjects either tracked or monitored the system dynamics as a 2-dimensional pursuit display under single task conditions and concurrently with a sub-critical tracking task at two difficulty levels. Detection performance was faster and more accurate in the manual as opposed to the autopilot mode. The concurrent tracking task produced a decrement in detection performance for all conditions though this was more marked for the manual mode. The development of an internal model in the manual mode transferred positively to the automatic mode producing enhanced detection performance. There was no transfer from the internal model developed in the automatic mode to the manual mode.

  13. 29 CFR 1926.903 - Underground transportation of explosives.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Trucks used for the transportation of explosives underground shall have the electrical system checked weekly to detect any failures which may constitute an electrical hazard. A certification record which... powered by the truck's electrical system, shall be prohibited. (g) Explosives and blasting agents shall be...

  14. 29 CFR 1926.903 - Underground transportation of explosives.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Trucks used for the transportation of explosives underground shall have the electrical system checked weekly to detect any failures which may constitute an electrical hazard. A certification record which... powered by the truck's electrical system, shall be prohibited. (g) Explosives and blasting agents shall be...

  15. 29 CFR 1926.903 - Underground transportation of explosives.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Trucks used for the transportation of explosives underground shall have the electrical system checked weekly to detect any failures which may constitute an electrical hazard. A certification record which... powered by the truck's electrical system, shall be prohibited. (g) Explosives and blasting agents shall be...

  16. 29 CFR 1926.903 - Underground transportation of explosives.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Trucks used for the transportation of explosives underground shall have the electrical system checked weekly to detect any failures which may constitute an electrical hazard. A certification record which... powered by the truck's electrical system, shall be prohibited. (g) Explosives and blasting agents shall be...

  17. 29 CFR 1926.903 - Underground transportation of explosives.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Trucks used for the transportation of explosives underground shall have the electrical system checked weekly to detect any failures which may constitute an electrical hazard. A certification record which... powered by the truck's electrical system, shall be prohibited. (g) Explosives and blasting agents shall be...

  18. A vector-based failure detection and isolation algorithm for a dual fail-operational redundant strapdown inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Morrell, Frederick R.; Bailey, Melvin L.

    1987-01-01

    A vector-based failure detection and isolation technique for a skewed array of two degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables which are keyed to pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected over all turbulence environments. For both types of inertial sensor, hard-over and null-type failures were detected in all environments without incident. The algorithm functioned without false alarms or false isolations over all turbulence environments for the runs tested.
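
    A hedged numpy sketch of parity-equation failure detection and isolation for a redundant sensor array. It simplifies the abstract's two-degree-of-freedom skewed units to five single-axis rate sensors, and the geometry, noise level, fault size and threshold are all illustrative assumptions.

      import numpy as np

      # sensing axes: three orthogonal sensors plus two skewed ones
      H = np.array([[1.0,    0.0,   0.0],
                    [0.0,    1.0,   0.0],
                    [0.0,    0.0,   1.0],
                    [0.577,  0.577, 0.577],
                    [0.577, -0.577, 0.577]])

      # parity matrix: rows span the left null space of H, so V @ H ~ 0 and the
      # parity vector V @ m is insensitive to the true body rate
      U, _, _ = np.linalg.svd(H)
      V = U[:, 3:].T                                # 5 sensors - 3 states = 2 parity equations

      rng = np.random.default_rng(1)
      omega = np.array([0.10, -0.20, 0.05])         # true body rate, rad/s
      fault = np.zeros(5)
      fault[3] = 0.05                               # bias-jump failure on sensor 3
      m = H @ omega + fault + 1e-3 * rng.standard_normal(5)

      p = V @ m                                     # parity vector
      if np.linalg.norm(p) > 5e-3:                  # detection threshold
          # isolation: compare p with each sensor's parity signature (columns of V)
          scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
          print("failure detected, most likely sensor:", int(np.argmax(scores)))
      else:
          print("no failure detected")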

  19. Digital electronic engine control fault detection and accommodation flight evaluation

    NASA Technical Reports Server (NTRS)

    Baer-Ruedhart, J. L.

    1984-01-01

    The capabilities and performance of various fault detection and accommodation (FDA) schemes in existing and projected engine control systems were investigated. Flight tests of the digital electronic engine control (DEEC) in an F-15 aircraft show discrepancies between flight results and predictions based on simulation and altitude testing. The FDA methodology and logic in the DEEC system are described, along with the results of the failures that have occurred in flight to date.

  20. Malpractice claims related to musculoskeletal imaging. Incidence and anatomical location of lesions.

    PubMed

    Fileni, Adriano; Fileni, Gaia; Mirk, Paoletta; Magnavita, Giulia; Nicoli, Marzia; Magnavita, Nicola

    2013-12-01

    Failure to detect lesions of the musculoskeletal system is a frequent cause of malpractice claims against radiologists. We examined all the malpractice claims related to alleged errors in musculoskeletal imaging filed against Italian radiologists over a period of 14 years (1993-2006). During the period considered, a total of 416 claims for alleged diagnostic errors relating to the musculoskeletal system were filed against radiologists; of these, 389 (93.5%) concerned failure to report fractures, and 15 (3.6%) failure to diagnose a tumour. Incorrect interpretation of bone pathology is among the most common causes of litigation against radiologists; alone, it accounts for 36.4% of all malpractice claims filed during the observation period. Awareness of this risk should encourage extreme caution and diligence.

  1. Apparatus for sensor failure detection and correction in a gas turbine engine control system

    NASA Technical Reports Server (NTRS)

    Spang, H. A., III; Wanger, R. P. (Inventor)

    1981-01-01

    A gas turbine engine control system maintains a selected level of engine performance despite the failure or abnormal operation of one or more engine parameter sensors. The control system employs a continuously updated engine model which simulates engine performance and generates signals representing real time estimates of the engine parameter sensor signals. The estimate signals are transmitted to a control computational unit which utilizes them in lieu of the actual engine parameter sensor signals to control the operation of the engine. The estimate signals are also compared with the corresponding actual engine parameter sensor signals and the resulting difference signals are utilized to update the engine model. If a particular difference signal exceeds specific tolerance limits, the difference signal is inhibited from updating the model and a sensor failure indication is provided to the engine operator.
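
    A minimal sketch of the estimate-versus-sensor comparison described above; the parameter names, tolerances and readings are hypothetical, and a real implementation would also feed the accepted differences back to update the engine model.

      # If a sensor disagrees with the model estimate beyond its tolerance, flag it as
      # failed and let the control law use the model estimate instead.
      TOLERANCE = {"fan_speed": 150.0, "turbine_temp": 40.0}   # engineering units

      def check_sensors(estimates, measurements):
          accepted, failed = {}, []
          for name, est in estimates.items():
              meas = measurements[name]
              if abs(meas - est) > TOLERANCE[name]:
                  failed.append(name)        # inhibit the model update, indicate sensor failure
                  accepted[name] = est       # control falls back on the model estimate
              else:
                  accepted[name] = meas
          return accepted, failed

      estimates = {"fan_speed": 9800.0, "turbine_temp": 1210.0}
      measurements = {"fan_speed": 9825.0, "turbine_temp": 1460.0}   # temperature sensor drifted
      print(check_sensors(estimates, measurements))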

  2. Management of redundancy in flight control systems using optimal decision theory

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The problem of using redundancy that exists between dissimilar systems in aircraft flight control is addressed; that is, using the redundancy that exists between a rate gyro and an accelerometer--devices that have dissimilar outputs which are related only through the dynamics of the aircraft motion. Management of this type of redundancy requires advanced logic so that the system can monitor failure status and can reconfigure itself in the event of one or more failures. An optimal decision theory for the management of sensor redundancy is developed in tutorial fashion and applied to two aircraft examples. The first example is the space shuttle and the second is a highly maneuvering high performance aircraft--the F8-C. The examples illustrate the redundancy management design process and the performance of the algorithms presented in failure detection and control law reconfiguration.

  3. Acoustical Detection Of Leakage In A Combustor

    NASA Technical Reports Server (NTRS)

    Puster, Richard L.; Petty, Jeffrey L.

    1993-01-01

    Abnormal combustion excites characteristic standing wave. Acoustical leak-detection system gives early warning of failure, enabling operating personnel to stop combustion process and repair spray bar before leak grows large enough to cause damage. Applicable to engines, gas turbines, furnaces, and other machines in which acoustic emissions at known frequencies signify onset of damage. Bearings in rotating machines monitored for emergence of characteristic frequencies shown in previous tests associated with incipient failure. Also possible to monitor for signs of trouble at multiple frequencies by feeding output of transducer simultaneously to multiple band-pass filters and associated circuitry, including separate trigger circuit set to appropriate level for each frequency.
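
    A small sketch of the band-monitoring idea: power in a few pre-selected frequency bands is computed from the transducer signal and compared with per-band trigger levels. The sample rate, bands, trigger levels and the synthetic test signal are all assumptions for illustration.

      import numpy as np

      FS = 50_000                                    # sample rate, Hz (assumed)
      BANDS = {"spray_bar_leak": (3_000, 3_200),     # hypothetical characteristic bands
               "bearing_wear":   (7_800, 8_200)}
      TRIGGER = {"spray_bar_leak": 0.05, "bearing_wear": 0.05}

      def band_power(x, f_lo, f_hi):
          spec = np.abs(np.fft.rfft(x)) ** 2 / len(x) ** 2
          freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
          return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()

      t = np.arange(FS) / FS                          # one second of data
      x = 0.02 * np.random.default_rng(0).standard_normal(FS)
      x += 0.5 * np.sin(2 * np.pi * 3_100 * t)        # simulated leak tone

      for name, (lo, hi) in BANDS.items():
          if band_power(x, lo, hi) > TRIGGER[name]:
              print("alarm:", name)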

  4. Fault detection and accommodation testing on an F100 engine in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Myers, L. P.; Baer-Riedhart, J. L.; Maxwell, M. D.

    1985-01-01

    The fault detection and accommodation (FDA) methodology for digital engine-control systems may range from simple comparisons of redundant parameters to the more complex and sophisticated observer models of the entire engine system. Evaluations of the various FDA schemes are done using analytical methods, simulation, and limited-altitude-facility testing. Flight testing of the FDA logic has been minimal because of the difficulty of inducing realistic faults in flight. A flight program was conducted to evaluate the fault detection and accommodation capability of a digital electronic engine control in an F-15 aircraft. The objective of the flight program was to induce selected faults and evaluate the resulting actions of the digital engine controller. Comparisons were made between the flight results and predictions. Several anomalies were found in flight and during the ground test. Simulation results showed that the inducement of dual pressure failures was not feasible since the FDA logic was not designed to accommodate these types of failures.

  5. General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark

    2010-01-01

    Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.

  6. 40 CFR 63.164 - Standards: Compressors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... process or fuel gas system or connected by a closed-vent system to a control device that complies with the... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an...

  7. 40 CFR 63.164 - Standards: Compressors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... process or fuel gas system or connected by a closed-vent system to a control device that complies with the... with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be observed daily or shall be equipped with an...

  8. 40 CFR 60.482-3 - Standards: Compressors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... process or fuel gas system or connected by a closed vent system to a control device that complies with the... be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) shall be checked daily or shall be equipped with an...

  9. 40 CFR 60.482-3 - Standards: Compressors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... process or fuel gas system or connected by a closed vent system to a control device that complies with the... be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) shall be checked daily or shall be equipped with an...

  10. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    PubMed Central

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN≥125 were recommended to be tested monthly. Failure modes with RPN<125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%–3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation of clinical resources because the most critical failure modes receive the most attention. It is expected that the set of guidelines proposed here will serve as a living document that is updated with the accumulation of progressively more intrainstitutional and interinstitutional experience with DMLC tracking. PMID:21302802

  11. Independent Orbiter Assessment (IOA): Analysis of the life support and airlock support subsystems

    NASA Technical Reports Server (NTRS)

    Arbet, Jim; Duffy, R.; Barickman, K.; Saiidi, Mo J.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Life Support System (LSS) and Airlock Support System (ALSS). Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. The LSS provides for the management of the supply water, collection of metabolic waste, management of waste water, smoke detection, and fire suppression. The ALSS provides water, oxygen, and electricity to support an extravehicular activity in the airlock.

  12. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

    DOEpatents

    Gross, Kenny C.

    1994-01-01

    Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. Failures are catalogued and characterized after the event: samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as "background" gases, further reducing the number of trial node combinations. Lastly, a "fuzzy" set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements.

  13. Improving the treatment planning and delivery process of Xoft electronic skin brachytherapy.

    PubMed

    Manger, Ryan; Rahn, Douglas; Hoisak, Jeremy; Dragojević, Irena

    2018-05-14

    To develop an improved Xoft electronic skin brachytherapy process and identify areas of further improvement. A multidisciplinary team conducted a failure modes and effects analysis (FMEA) by developing a process map and a corresponding list of failure modes. The failure modes were scored for their occurrence, severity, and detectability, and a risk priority number (RPN) was calculated for each failure mode as the product of occurrence, severity, and detectability. Corrective actions were implemented to address the higher risk failure modes, and a revised process was generated. The RPNs of the failure modes were compared between the initial process and final process to assess the perceived benefits of the corrective actions. The final treatment process consists of 100 steps and 114 failure modes. The FMEA took approximately 20 person-hours (one physician, three physicists, and two therapists) to complete. The 10 most dangerous failure modes had RPNs ranging from 336 to 630. Corrective actions were effective at addressing most failure modes (10 riskiest RPNs ranging from 189 to 310), yet the RPNs were higher than those published for alternative systems. Many of these high-risk failure modes remained due to hardware design limitations. FMEA helps guide process improvement efforts by emphasizing the riskiest steps. Significant risks are apparent when using a Xoft treatment unit for skin brachytherapy due to hardware limitations such as the lack of several interlocks, a short source lifespan, and variability in source output. The process presented in this article is expected to reduce but not eliminate these risks.

  14. 40 CFR 60.482-3a - Standards: Compressors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (2) Equipped with a barrier fluid system degassing reservoir that is routed to a process or fuel gas... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped...

  15. 40 CFR 60.482-3a - Standards: Compressors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (2) Equipped with a barrier fluid system degassing reservoir that is routed to a process or fuel gas... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped...

  16. System and Method for Outlier Detection via Estimating Clusters

    NASA Technical Reports Server (NTRS)

    Iverson, David J. (Inventor)

    2016-01-01

    An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out of family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined through analyzing a degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy to interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
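
    A hedged sketch of the cluster-based scoring idea: clusters are fitted to nominal training data and a new sample is scored by its distance to the nearest cluster, with per-parameter contributions. scikit-learn's KMeans is used here as a convenient stand-in, not the patented method itself, and the data are synthetic.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      # nominal training data: three sensor channels in normal operation (synthetic)
      train = rng.normal(loc=[20.0, 5.0, 300.0], scale=[1.0, 0.2, 10.0], size=(500, 3))

      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(train)

      def outlier_score(x):
          """Distance to the nearest nominal cluster plus each parameter's contribution."""
          d = x - km.cluster_centers_
          dist = np.linalg.norm(d, axis=1)
          k = int(np.argmin(dist))
          return dist[k], np.abs(d[k])     # in practice channels would be normalized first

      print(outlier_score(np.array([20.3, 5.1, 302.0])))   # near nominal: small score
      print(outlier_score(np.array([20.3, 5.1, 380.0])))   # third channel off-nominal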

  17. Time-Frequency Methods for Structural Health Monitoring †

    PubMed Central

    Pyayt, Alexander L.; Kozionov, Alexey P.; Mokhov, Ilya I.; Lang, Bernhard; Meijer, Robert J.; Krzhizhanovskaya, Valeria V.; Sloot, Peter M. A.

    2014-01-01

    Detection of early warning signals for the imminent failure of large and complex engineered structures is a daunting challenge with many open research questions. In this paper we report on novel ways to perform Structural Health Monitoring (SHM) of flood protection systems (levees, earthen dikes and concrete dams) using sensor data. We present a robust data-driven anomaly detection method that combines time-frequency feature extraction, using wavelet analysis and phase shift, with one-sided classification techniques to identify the onset of failure anomalies in real-time sensor measurements. The methodology has been successfully tested at three operational levees. We detected a dam leakage in the retaining dam (Germany) and “strange” behaviour of sensors installed in a Boston levee (UK) and a Rhine levee (Germany). PMID:24625740
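
    A rough sketch of the time-frequency plus one-sided-classification recipe: wavelet energies per decomposition level serve as features, and a one-class SVM trained only on normal windows flags anomalies. The wavelet, window length, classifier settings and synthetic signals are assumptions, not the authors' pipeline (which combines wavelet and phase-shift features).

      import numpy as np
      import pywt
      from sklearn.svm import OneClassSVM

      def features(window, wavelet="db4", level=4):
          """Log energy of each wavelet decomposition band."""
          coeffs = pywt.wavedec(window, wavelet, level=level)
          return [float(np.log1p(np.sum(c ** 2))) for c in coeffs]

      rng = np.random.default_rng(0)
      t = np.arange(256)
      normal = [features(np.sin(0.1 * t) + 0.1 * rng.standard_normal(256))
                for _ in range(200)]
      clf = OneClassSVM(nu=0.05, gamma="scale").fit(normal)

      # test window with an added transient, the kind of change a leak or crack might cause
      test = np.sin(0.1 * t) + 0.1 * rng.standard_normal(256)
      test[100:120] += 2.0
      print("anomaly" if clf.predict([features(test)])[0] == -1 else "normal")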

  18. Multidrug-resistant tuberculosis treatment failure detection depends on monitoring interval and microbiological method

    PubMed Central

    White, Richard A.; Lu, Chunling; Rodriguez, Carly A.; Bayona, Jaime; Becerra, Mercedes C.; Burgos, Marcos; Centis, Rosella; Cohen, Theodore; Cox, Helen; D'Ambrosio, Lia; Danilovitz, Manfred; Falzon, Dennis; Gelmanova, Irina Y.; Gler, Maria T.; Grinsdale, Jennifer A.; Holtz, Timothy H.; Keshavjee, Salmaan; Leimane, Vaira; Menzies, Dick; Milstein, Meredith B.; Mishustin, Sergey P.; Pagano, Marcello; Quelapio, Maria I.; Shean, Karen; Shin, Sonya S.; Tolman, Arielle W.; van der Walt, Martha L.; Van Deun, Armand; Viiklepp, Piret

    2016-01-01

    Debate persists about monitoring method (culture or smear) and interval (monthly or less frequently) during treatment for multidrug-resistant tuberculosis (MDR-TB). We analysed existing data and estimated the effect of monitoring strategies on timing of failure detection. We identified studies reporting microbiological response to MDR-TB treatment and solicited individual patient data from authors. Frailty survival models were used to estimate pooled relative risk of failure detection in the last 12 months of treatment; hazard of failure using monthly culture was the reference. Data were obtained for 5410 patients across 12 observational studies. During the last 12 months of treatment, failure detection occurred in a median of 3 months by monthly culture; failure detection was delayed by 2, 7, and 9 months relying on bimonthly culture, monthly smear and bimonthly smear, respectively. Risk (95% CI) of failure detection delay resulting from monthly smear relative to culture is 0.38 (0.34–0.42) for all patients and 0.33 (0.25–0.42) for HIV-co-infected patients. Failure detection is delayed by reducing the sensitivity and frequency of the monitoring method. Monthly monitoring of sputum cultures from patients receiving MDR-TB treatment is recommended. Expanded laboratory capacity is needed for high-quality culture, and for smear microscopy and rapid molecular tests. PMID:27587552

  19. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version system; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  20. A Genuine TEAM Player

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.

  1. A case study in nonconformance and performance trend analysis

    NASA Technical Reports Server (NTRS)

    Maloy, Joseph E.; Newton, Coy P.

    1990-01-01

    As part of NASA's effort to develop an agency-wide approach to trend analysis, a pilot nonconformance and performance trending analysis study was conducted on the Space Shuttle auxiliary power unit (APU). The purpose of the study was to (1) demonstrate that nonconformance analysis can be used to identify repeating failures of a specific item (and the associated failure modes and causes) and (2) determine whether performance parameters could be analyzed and monitored to provide an indication of component or system degradation prior to failure. The nonconformance analysis of the APU did identify repeating component failures, which possibly could be reduced if key performance parameters were monitored and analyzed. The performance-trending analysis verified that the characteristics of hardware parameters can be effective in detecting degradation of hardware performance prior to failure.

  2. Metaiodobenzylguanidine (131I) scintigraphy detects impaired myocardial sympathetic neuronal transport function of canine mechanical-overload heart failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabinovitch, M.A.; Rose, C.P.; Rouleau, J.L.

    1987-12-01

    In heart failure secondary to chronic mechanical overload, cardiac sympathetic neurons demonstrate depressed catecholamine synthetic and transport function. To assess the potential of sympathetic neuronal imaging for detection of depressed transport function, serial scintigrams were acquired after the intravenous administration of metaiodobenzylguanidine (131I) to 13 normal dogs, 3 autotransplanted (denervated) dogs, 5 dogs with left ventricular failure, and 5 dogs with compensated left ventricular hypertrophy due to a surgical arteriovenous shunt. Nine dogs were killed at 14 hours postinjection for determination of metaiodobenzylguanidine (131I) and endogenous norepinephrine content in left atrium, left ventricle, liver, and spleen. By 4 hours postinjection, autotransplanted dogs had a 39% reduction in mean left ventricular tracer accumulation, reflecting an absent intraneuronal tracer pool. Failure dogs demonstrated an accelerated early mean left ventricular tracer efflux rate (26.0%/hour versus 13.7%/hour in normals), reflecting a disproportionately increased extraneuronal tracer pool. They also showed reduced late left ventricular and left atrial concentrations of tracer, consistent with a reduced intraneuronal tracer pool. By contrast, compensated hypertrophy dogs demonstrated a normal early mean left ventricular tracer efflux rate (16.4%/hour) and essentially normal late left ventricular and left atrial concentrations of tracer. Metaiodobenzylguanidine (131I) scintigraphic findings reflect the integrity of the cardiac sympathetic neuronal transport system in canine mechanical-overload heart failure. Metaiodobenzylguanidine (123I) scintigraphy should be explored as a means of early detection of mechanical-overload heart failure in patients.

  3. Development and testing of an algorithm to detect implantable cardioverter-defibrillator lead failure.

    PubMed

    Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A

    2006-02-01

    Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was <0.35 mV-mV, then the detected rhythm was considered noise due to a lead failure. The first ICD-detected episode of lead failure and inappropriate detection from 24 ICD patients with a pace/sense lead failure and all ventricular arrhythmias from 56 ICD patients without a lead failure were selected. The stored data were analyzed to determine the sensitivity and specificity of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than the 135 ventricular tachycardia (40.8 +/- 43.0 mV-mV, P <.0001) and 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.
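
    A sketch of the baseline measure described above, using a synthetic far-field channel and an assumed 256 Hz sampling rate; the 188 ms window, 0.35 mV-mV threshold and last-12-beats rule follow the abstract, while the waveforms and the use of the absolute value of the sum are illustrative choices.

      import numpy as np

      FS = 256                                   # assumed sampling rate, Hz
      HALF = int(round(0.188 * FS / 2))          # half of the 188 ms window, in samples

      def baseline_measure(egm, sense_idx):
          w = egm[sense_idx - HALF: sense_idx + HALF]
          return abs(w.sum()) * w.std()          # sum (mV) x standard deviation (mV)

      def is_lead_failure(egm, senses, threshold=0.35):
          """Lead-failure noise if the smallest measure over the last 12 senses is below threshold."""
          return min(baseline_measure(egm, i) for i in senses[-12:]) < threshold

      rng = np.random.default_rng(0)
      n = 6 * FS
      senses = np.arange(FS // 2, n - FS // 2, FS // 3)    # sensed events at roughly 180 bpm

      vt = 0.05 * rng.standard_normal(n)                   # toy VT far-field channel:
      for s in senses:                                     # a large deflection at every sense
          vt[s - 5: s + 5] += 5.0

      noise_only = 0.02 * rng.standard_normal(n)           # toy lead-failure channel: near-isoelectric

      print("VT episode   ->", is_lead_failure(vt, senses))          # expect False
      print("lead failure ->", is_lead_failure(noise_only, senses))  # expect True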

  4. Demonstrating the Safety and Reliability of a New System or Spacecraft: Incorporating Analyses and Reviews of the Design and Processing in Determining the Number of Tests to be Conducted

    NASA Technical Reports Server (NTRS)

    Vesely, William E.; Colon, Alfredo E.

    2010-01-01

    Design Safety/Reliability is associated with the probability of no failure-causing faults existing in a design. Confidence in the non-existence of failure-causing faults is increased by performing tests with no failure. Reliability-Growth testing requirements are based on initial assurance and fault detection probability. Using binomial tables generally gives too many required tests compared to reliability-growth requirements. Reliability-Growth testing requirements are based on reliability principles and factors and should be used.
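
    For reference, a minimal sketch of the zero-failure binomial calculation the abstract argues against relying on alone: demonstrating reliability R at confidence C with no failures requires the smallest n such that R**n <= 1 - C. The example values are illustrative.

      import math

      def zero_failure_tests(reliability, confidence):
          """Number of failure-free tests needed to demonstrate `reliability` at `confidence`."""
          return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

      print(zero_failure_tests(0.99, 0.90))    # 230 tests
      print(zero_failure_tests(0.999, 0.50))   # 693 tests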

  5. Framework for a space shuttle main engine health monitoring system

    NASA Technical Reports Server (NTRS)

    Hawman, Michael W.; Galinaitis, William S.; Tulpule, Sharayu; Mattedi, Anita K.; Kamenetz, Jeffrey

    1990-01-01

    A framework developed for a health management system (HMS) which is directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable low cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.

  6. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  7. Development of three-axis inkjet printer for gear sensors

    NASA Astrophysics Data System (ADS)

    Iba, Daisuke; Rodriguez Lopez, Ricardo; Kamimoto, Takahiro; Nakamura, Morimasa; Miura, Nanako; Iizuka, Takashi; Masuda, Arata; Moriwaki, Ichiro; Sone, Akira

    2016-04-01

    The long-term objective of our research is to develop sensor systems for detection of gear failure signs. As a very first step, this paper proposes a new method to create sensors printed directly on gears by a printer and conductive ink, and shows the printing system configuration and the procedure of sensor development. The printer system under development is a laser sintering system consisting of a laser and CNC machinery. The laser is able to synthesize micro conductive patterns and is introduced to the CNC machinery as a tool. To synthesize sensors on gears, we first design the micro-circuit pattern on a gear using 3D-CAD and then create a program (G-code) for the CNC machinery by CAM. This paper shows initial experiments with the laser sintering process carried out to obtain the optimal parameters for the laser settings. The new method proposed here may provide a new manufacturing process for mechanical parts that have an additional functionality to detect failure, and possible improvements include creating more economical and sustainable systems.

  8. 40 CFR 60.482-3a - Standards: Compressors.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of VOC in the Synthetic Organic Chemicals Manufacturing Industry for Which Construction... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped...

  9. 40 CFR 60.482-3 - Standards: Compressors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of VOC in the Synthetic Organic Chemicals Manufacturing Industry for which Construction... be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) shall be checked daily or shall be equipped with an...

  10. 40 CFR 60.482-3 - Standards: Compressors.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of VOC in the Synthetic Organic Chemicals Manufacturing Industry for which Construction... be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) shall be checked daily or shall be equipped with an...

  11. 40 CFR 60.482-3a - Standards: Compressors.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of VOC in the Synthetic Organic Chemicals Manufacturing Industry for Which Construction... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped...

  12. 40 CFR 60.482-3 - Standards: Compressors.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of VOC in the Synthetic Organic Chemicals Manufacturing Industry for which Construction... be equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) shall be checked daily or shall be equipped with an...

  13. 40 CFR 60.482-2a - Standards: Pumps in light liquid service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... routed to a process or fuel gas system or connected by a closed vent system to a control device that... sensor that will detect failure of the seal system, the barrier fluid system, or both. (4)(i) Each pump... indications of liquids dripping as a leak. (5)(i) Each sensor as described in paragraph (d)(3) is checked...

  14. Systemic tobramycin concentrations during selective decontamination of the digestive tract in intensive care unit patients on continuous venovenous hemofiltration.

    PubMed

    Mol, Meriel; van Kan, Hendrikus J M; Schultz, Marcus J; de Jonge, Evert

    2008-05-01

    To study whether selective decontamination of the digestive tract (SDD) results in detectable serum tobramycin concentrations in intensive care unit (ICU) patients with acute renal failure treated with continuous venovenous hemofiltration (CVVH). Prospective, observational, single-center study in a mixed medical-surgical ICU. Adult ICU patients receiving SDD for at least 3 days and being treated with CVVH because of acute renal failure. Tobramycin serum concentrations were measured at the 3rd day after start of CVVH and every 3 days thereafter. Detectable serum concentrations of tobramycin were found in 12 (63%) of 19 patients and in 15 (58%) of the 26 samples. With a toxic tobramycin concentration defined as more than 2.0 mg/l, we found one patient with a toxic concentration of 3.0 mg/l. In three other patients tobramycin concentrations of >or=1.0 mg/l were found. In patients with acute renal failure treated with CVVH, administration of SDD with tobramycin can lead to detectable and potentially toxic serum tobramycin concentrations.

  15. Accidental Beam Losses and Protection in the LHC

    NASA Astrophysics Data System (ADS)

    Schmidt, R.; Working Group On Machine Protection

    2005-06-01

    At top energy (proton momentum 7 TeV/c) with nominal beam parameters, each of the two LHC proton beams has a stored energy of 350 MJ threatening to damage accelerator equipment in case of accidental beam loss. It is essential that the beams are properly extracted onto the dump blocks in case of failure since these are the only elements that can withstand full beam impact. Although the energy stored in the beams at injection (450 GeV/c) is about 15 times smaller compared to top energy, the beams must still be properly extracted in case of large accidental beam losses. Failures must be detected at a sufficiently early stage and initiate a beam dump. Quenches and power converter failures will be detected by monitoring the correct functioning of the hardware systems. In addition, safe operation throughout the cycle requires the use of beam loss monitors, collimators and absorbers. Ideas of detection of fast beam current decay, monitoring of fast beam position changes and monitoring of fast magnet current changes are discussed, to provide the required redundancy for machine protection.

  16. Paralex: An Environment for Parallel Programming in Distributed Systems

    DTIC Science & Technology

    1991-12-07

    distributed systems is comparable to assembly language programming for traditional sequential systems - the user must resort to low-level primitives ...to accomplish data encoding/decoding, communication, remote execution, synchronization, failure detection and recovery. It is our belief that... synchronization. Finally, composing parallel programs by interconnecting sequential computations allows automatic support for heterogeneity and fault tolerance

  17. Determination of failure limits for sterilizable solid rocket motor

    NASA Technical Reports Server (NTRS)

    Lambert, W. L.; Mastrolia, E. J.; Mcconnell, J. D.

    1974-01-01

    A structural evaluation to establish probable failure limits and a series of environmental tests involving temperature cycling, sustained acceleration, and vibration were conducted on an 18-inch diameter solid rocket motor. Despite the fact that thermal, acceleration and vibration loads representing a severe overtest of conventional environmental requirements were imposed on the sterilizable motor, no structural failure of the grain or flexible support system was detected. The following significant conclusions are considered justified: (1) the flexible grain retention system, which permitted heat sterilization at 275 F on the test motor, can readily be adopted to meet the environmental requirements of an operational motor design; and (2) if further substantiation of structural integrity is desired, the motor used is considered acceptable for static firing.

  18. Achieving fast and stable failure detection in WDM Networks

    NASA Astrophysics Data System (ADS)

    Gao, Donghui; Zhou, Zhiyu; Zhang, Hanyi

    2005-02-01

    In dynamic networks, the failure detection time takes a major part of the convergence time, which is an important network performance index. To detect a node or link failure in the network, traditional protocols, like the Hello protocol in OSPF or RSVP, exchange keep-alive messages between neighboring nodes to keep track of the link/node state. With default settings, however, the minimum detection time is on the order of tens of seconds, which cannot meet the demands of fast network convergence and failure recovery. When the related parameters are configured to reduce the detection time, notable instability problems appear. In this paper, we analyze the problem and design a new failure detection algorithm to reduce the network overhead of detection signaling. Through our experiments we found it effective to enhance stability by implicitly acknowledging other signaling messages as keep-alive messages. We implemented our proposal and the previous approaches on an ASON test-bed. The experimental results show that our algorithm performs better than previous schemes, with about an order-of-magnitude reduction in both false failure alarms and queuing delay for other messages, especially under light traffic load.
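
    A toy sketch of the implicit-acknowledgement idea: any signalling message from a neighbour refreshes its liveness timer, so explicit hellos matter only on otherwise idle links and spurious failure declarations become rarer. The class, intervals and event trace are illustrative, not the paper's protocol.

      import time

      class NeighborMonitor:
          def __init__(self, dead_interval=3.0):
              self.dead_interval = dead_interval
              self.last_heard = {}

          def on_message(self, neighbor, kind="hello"):
              # hellos and ordinary signalling messages both count as keep-alives
              self.last_heard[neighbor] = time.monotonic()

          def failed_neighbors(self):
              now = time.monotonic()
              return [n for n, t in self.last_heard.items()
                      if now - t > self.dead_interval]

      mon = NeighborMonitor(dead_interval=0.2)
      mon.on_message("node-A", kind="hello")
      mon.on_message("node-B", kind="path-setup")      # implicit keep-alive
      time.sleep(0.3)
      mon.on_message("node-B", kind="resv-refresh")    # node-B is clearly still alive
      print(mon.failed_neighbors())                    # ['node-A']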

  19. Towards Comprehensive Variation Models for Designing Vehicle Monitoring Systems

    NASA Technical Reports Server (NTRS)

    McAdams, Daniel A.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing vehicle vibration monitoring systems for aerospace devices, it is common to use well-established models of vibration features to determine whether failures or defects exist. Most of the algorithms used for failure detection rely on these models to detect significant changes in a flight environment. In actual practice, however, most vehicle vibration monitoring systems are corrupted by high rates of false alarms and missed detections. This crucial roadblock makes their implementation in real vehicles (e.g., helicopter transmissions and aircraft engines) difficult, making their operation costly and unreliable. Research conducted at the NASA Ames Research Center has determined that a major reason for the high rates of false alarms and missed detections is the numerous sources of statistical variations that are not taken into account in the modeling assumptions. In this paper, we address one such source of variations, namely, those caused during the design and manufacturing of rotating machinery components that make up aerospace systems. We present a novel way of modeling the vibration response by including design variations via probabilistic methods. Using such models, we develop a methodology to account for design and manufacturing variations, and explore the changes in the vibration response to determine its stochastic nature. We explore the potential of the methodology using a nonlinear cam-follower model, where the spring stiffness values are assumed to follow a normal distribution. The results demonstrate initial feasibility of the method, showing great promise in developing a general methodology for designing more accurate aerospace vehicle monitoring systems.
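
    As a rough illustration of the probabilistic approach described above, the sketch below runs a Monte Carlo study in which a spring stiffness is sampled from a normal distribution and the spread of a linearised natural frequency (a simple vibration feature) is collected. The mass, nominal stiffness, and tolerance values are hypothetical and are not taken from the paper.

        import numpy as np

        # Monte Carlo sketch: propagate a normally distributed spring stiffness through a
        # single-degree-of-freedom surrogate of a cam-follower system and inspect the
        # resulting spread of the natural frequency.
        rng = np.random.default_rng(0)

        m = 0.5                      # follower mass [kg] (hypothetical)
        k_nominal = 2.0e4            # nominal spring stiffness [N/m] (hypothetical)
        k_sigma = 0.05 * k_nominal   # manufacturing variation, 5% standard deviation (assumed)

        k_samples = rng.normal(k_nominal, k_sigma, size=10_000)
        f_n = np.sqrt(k_samples / m) / (2.0 * np.pi)   # natural frequency [Hz] per sample

        print(f"natural frequency: mean={f_n.mean():.1f} Hz, std={f_n.std():.1f} Hz")
        # A monitoring threshold set without accounting for this spread would flag healthy
        # units as faulty (false alarms) or miss shifted-but-healthy responses (missed detections).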

  20. Protection of the CERN Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Schmidt, R.; Assmann, R.; Carlier, E.; Dehning, B.; Denz, R.; Goddard, B.; Holzer, E. B.; Kain, V.; Puccio, B.; Todd, B.; Uythoven, J.; Wenninger, J.; Zerlauth, M.

    2006-11-01

    The Large Hadron Collider (LHC) at CERN will collide two counter-rotating proton beams, each with an energy of 7 TeV. The energy stored in the superconducting magnet system will exceed 10 GJ, and each beam has a stored energy of 362 MJ which could cause major damage to accelerator equipment in the case of uncontrolled beam loss. Safe operation of the LHC will therefore rely on a complex system for equipment protection. The systems for protection of the superconducting magnets in case of quench must be fully operational before powering the magnets. For safe injection of the 450 GeV beam into the LHC, beam absorbers must be in their correct positions and specific procedures must be applied. Requirements for safe operation throughout the cycle necessitate early detection of failures within the equipment, and active monitoring of the beam with fast and reliable beam instrumentation, mainly beam loss monitors (BLM). When operating with circulating beams, the time constant for beam loss after a failure extends from apms to a few minutes—failures must be detected sufficiently early and transmitted to the beam interlock system that triggers a beam dump. It is essential that the beams are properly extracted on to the dump blocks at the end of a fill and in case of emergency, since the beam dump blocks are the only elements of the LHC that can withstand the impact of the full beam.

  1. Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster

    PubMed Central

    Fan, Hangyu; Wang, Huandong; Li, Yong

    2018-01-01

    Decentralized clustering of modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which prevent the entire system from breaking down because of a single point of failure. Recently, toolkits such as Akka have been commonly used to build this kind of cluster easily. However, clusters of this kind, which use Gossip as their membership management protocol and rely on a link-failure detection mechanism to detect link failures, cannot deal with the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by data from the application layer, to solve these two problems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well. PMID:29360792
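
    The formulation above (estimate pairwise link quality, then find a maximum clique of mutually well-connected nodes) can be illustrated with the hedged sketch below. The loss-rate threshold, the toy loss estimates, and the use of networkx clique enumeration are illustrative choices, not the paper's implementation.

        import networkx as nx

        # Hypothetical pairwise packet-loss estimates between cluster nodes (fraction lost).
        loss = {("a", "b"): 0.01, ("a", "c"): 0.02, ("b", "c"): 0.01,
                ("a", "d"): 0.40, ("b", "d"): 0.35, ("c", "d"): 0.50}

        THRESHOLD = 0.10   # links with loss above this are considered unhealthy (assumed value)

        # Build the connectivity graph containing only healthy links.
        G = nx.Graph()
        G.add_nodes_from({n for pair in loss for n in pair})
        G.add_edges_from(pair for pair, p in loss.items() if p <= THRESHOLD)

        # Take the largest clique as the healthy core of the cluster (NP-complete in general,
        # trivial at this scale); node "d", which drops packets, falls outside it.
        healthy_core = max(nx.find_cliques(G), key=len)
        print(sorted(healthy_core))   # ['a', 'b', 'c']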

  2. FDIR Strategy Validation with the B Method

    NASA Astrophysics Data System (ADS)

    Sabatier, D.; Dellandrea, B.; Chemouil, D.

    2008-08-01

    In a formation flying satellite system, the FDIR strategy (Failure Detection, Isolation and Recovery) is paramount. When a failure occurs, satellites should be able to take appropriate reconfiguration actions to obtain the best possible outcome given the failure, ranging from avoiding satellite-to-satellite collision to continuing the mission without disturbance if possible. To achieve this goal, each satellite in the formation implements an FDIR strategy that governs how it detects failures (from tests or by deduction) and how it reacts (reconfiguration using redundant equipment, avoidance manoeuvres, etc.). The goal is to protect the satellites first and the mission as much as possible. In a project initiated by CNES, ClearSy is experimenting with the B Method to validate the FDIR strategies, developed by Thales Alenia Space, for the inter-satellite positioning and communication devices that will be used in the SIMBOL-X (2-satellite configuration) and PEGASE (3-satellite configuration) missions and potentially in other missions afterward. These radio-frequency metrology sensor devices provide satellite positioning and inter-satellite communication in formation flying. This article presents the results of this experiment.

  3. New early warning system for gravity-driven ruptures based on codetection of acoustic signal

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.

    2016-12-01

    Gravity-driven rupture phenomena in natural media - e.g. landslides, rockfalls, snow or ice avalanches - represent an important class of natural hazards in mountainous regions. To protect the population against such events, a timely evacuation often constitutes the only effective way to secure the potentially endangered area. However, reliable prediction of the imminence of such failure events remains challenging because of the nonlinear and complex nature of geological material failure, hampered by inherent heterogeneity, unknown initial mechanical state, and complex load application (rainfall, temperature, etc.). Here, a simple method for real-time early warning that considers both the heterogeneity of natural media and the characteristics of acoustic emission attenuation is proposed. This new method capitalizes on the codetection of elastic waves emanating from microcracks by multiple, spatially separated sensors. Event codetection is considered a surrogate for large event size, with more frequent codetected events (i.e., events detected concurrently on more than one sensor) marking the imminence of catastrophic failure. A simple numerical model based on a Fiber Bundle Model, considering signal attenuation and hypothetical sensor arrays, confirms the early-warning potential of the codetection principle. Results suggest that although the statistical properties of attenuated signal amplitudes could be misleading, monitoring the emergence of large events announcing impending failure is possible even with attenuated signals, depending on sensor network geometry and detection threshold. Preliminary application of the proposed method to acoustic emissions during failure of snow samples has confirmed the potential use of codetection as an indicator of imminent failure at laboratory scale. The applicability of such a simple and inexpensive early warning system is now being investigated at a larger scale (hillslope). First results of such a pilot field experiment are presented and analysed.
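
    A minimal sketch of the codetection idea: acoustic events time-stamped at several sensors are counted as codetected when they fall inside a common coincidence window, and the rate of codetected events serves as the warning indicator. The window length and the event lists below are invented for illustration.

        # Count codetected acoustic events: an event is "codetected" when at least two
        # spatially separated sensors trigger within the same coincidence window.
        def codetections(trigger_times, window=0.05):
            """trigger_times: dict sensor_id -> sorted list of trigger times [s]."""
            all_events = sorted((t, s) for s, times in trigger_times.items() for t in times)
            count, i = 0, 0
            while i < len(all_events):
                t0, s0 = all_events[i]
                j, sensors = i + 1, {s0}
                while j < len(all_events) and all_events[j][0] - t0 <= window:
                    sensors.add(all_events[j][1])
                    j += 1
                if len(sensors) >= 2:      # seen by more than one sensor -> likely a large event
                    count += 1
                i = j
            return count

        # Hypothetical trigger times from three sensors; a rising codetection rate over
        # successive analysis intervals would mark the imminence of failure.
        times = {"s1": [0.10, 0.52, 1.31], "s2": [0.11, 1.32], "s3": [1.33, 2.10]}
        print(codetections(times))   # 2 codetected events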

  4. Long Term Safety Area Tracking (LT-SAT) with online failure detection and recovery for robotic minimally invasive surgery.

    PubMed

    Penza, Veronica; Du, Xiaofei; Stoyanov, Danail; Forgione, Antonello; Mattos, Leonardo S; De Momi, Elena

    2018-04-01

    Despite the benefits introduced by robotic systems in abdominal Minimally Invasive Surgery (MIS), major complications can still affect the outcome of the procedure, such as intra-operative bleeding. One of the causes is attributed to accidental damage to arteries or veins by the surgical tools, and some of the possible risk factors are related to the lack of sub-surface visibility. Assistive tools guiding the surgical gestures to prevent these kinds of injuries would represent a relevant step towards safer clinical procedures. However, it is still challenging to develop computer vision systems able to fulfill the main requirements: (i) long-term robustness, (ii) adaptation to environment/object variation and (iii) real-time processing. The purpose of this paper is to develop computer vision algorithms to robustly track soft tissue areas (Safety Area, SA), defined intra-operatively by the surgeon based on the real-time endoscopic images, or registered from a pre-operative surgical plan. We propose a framework combining an optical flow algorithm with a tracking-by-detection approach in order to be robust against failures caused by: (i) partial occlusion, (ii) total occlusion, (iii) SA out of the field of view, (iv) deformation, (v) illumination changes, (vi) abrupt camera motion, (vii) blur and (viii) smoke. A Bayesian inference-based approach is used to detect failure of the tracker, based on online context information. A Model Update Strategy (MUpS) is also proposed to improve SA re-detection after failures, taking into account the changes in appearance of the SA model due to contact with instruments or image noise. The performance of the algorithm was assessed on two datasets, representing ex-vivo organs and in-vivo surgical scenarios. Results show that the proposed framework, enhanced with MUpS, is capable of maintaining high tracking performance for extended periods of time ( ≃ 4 min - containing the aforementioned events) with high precision (0.7) and recall (0.8) values, and with a recovery time after a failure of between 1 and 8 frames in the worst case.

  5. Haul truck tire dynamics due to tire condition

    NASA Astrophysics Data System (ADS)

    Vaghar Anzabi, R.; Nobes, D. S.; Lipsett, M. G.

    2012-05-01

    Pneumatic tires are costly components on the large off-road haul trucks used in surface mining operations. Tires are prone to damage during operation, and these events can lead to injuries to personnel, loss of equipment, and reduced productivity. Damage rates have significant variability, due to operating conditions and a range of tire fault modes. Currently, monitoring of tire condition is done by physical inspection, and the mean time between inspections is often longer than the mean time between incipient failure and functional failure of the tire. Options for new condition monitoring methods include off-board thermal imaging and camera-based optical methods for detecting abnormal deformation and surface features, as well as on-board sensors to detect tire faults during vehicle operation. Physics-based modeling of tire dynamics can provide a good understanding of tire behavior, and give insight into observability requirements for improved monitoring systems. This paper describes a model to simulate the dynamics of haul truck tires when a fault is present, to determine the effects of the physical parameter changes that relate to faults. To simulate the dynamics, a lumped-mass 'quarter-vehicle' model has been used to determine the response of the system to a road profile when a failure changes the original properties of the tire. The result is a model of tire vertical displacement that can be used to detect a fault, which will be tested in the field under time-varying conditions.
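
    The lumped-mass quarter-vehicle idea can be sketched as follows: a two-degree-of-freedom model (sprung and unsprung mass) is driven by a road profile, and a tire fault is represented as a reduction in tire stiffness. All parameter values are hypothetical and serve only to show how a fault changes the simulated vertical response; they are not taken from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Quarter-vehicle model: sprung mass ms on suspension (ks, cs) above unsprung
        # mass mu on tire stiffness kt; a tire fault is modelled as a drop in kt.
        ms, mu = 4500.0, 600.0          # sprung / unsprung mass [kg] (hypothetical values)
        ks, cs = 2.0e6, 4.0e4           # suspension stiffness [N/m] and damping [N s/m] (assumed)

        def road(t):                    # simple sinusoidal road profile [m]
            return 0.02 * np.sin(2.0 * np.pi * 1.5 * t)

        def rhs(t, y, kt):
            zs, vs, zu, vu = y          # sprung/unsprung displacement and velocity
            f_susp = ks * (zu - zs) + cs * (vu - vs)
            f_tire = kt * (road(t) - zu)
            return [vs, f_susp / ms, vu, (f_tire - f_susp) / mu]

        for label, kt in [("healthy tire", 3.0e6), ("faulted tire", 1.5e6)]:
            sol = solve_ivp(rhs, (0.0, 5.0), [0, 0, 0, 0], args=(kt,), max_step=1e-3)
            print(label, "peak sprung-mass displacement:",
                  f"{np.max(np.abs(sol.y[0])):.4f} m")
        # The change in the simulated vertical response between the two runs is the kind of
        # signature a monitoring system would look for.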

  6. Systems and methods for detecting a failure event in a field programmable gate array

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2009-01-01

    An embodiment generally relates to a method of self-detecting an error in a field programmable gate array (FPGA). The method includes writing a signature value into a signature memory in the FPGA and determining a conclusion of a configuration refresh operation in the FPGA. The method also includes reading an outcome value from the signature memory.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state are also extracted. The features can be used to detect abnormal signals. This algorithm is developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.
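
    A hedged sketch of this kind of feature extraction: the envelope of one phase is estimated with a Hilbert transform and the start time is taken as the first sample where the envelope crosses a threshold. The sampling rate, threshold, and synthetic signal are assumptions, not details of the original algorithm.

        import numpy as np
        from scipy.signal import hilbert

        fs = 10_000                                   # sampling rate [Hz] (assumed)
        t = np.arange(0, 0.5, 1.0 / fs)

        # Synthetic single-phase current: quiet, then a 60 Hz burst starting at 0.1 s.
        x = np.where(t >= 0.1, np.sin(2 * np.pi * 60 * t), 0.0) + 0.01 * np.random.randn(t.size)

        envelope = np.abs(hilbert(x))                 # amplitude envelope via Hilbert transform
        threshold = 0.2                               # start-time detection threshold (assumed)

        start_idx = np.argmax(envelope > threshold)   # first sample above threshold
        start_time = t[start_idx]
        steady_state_mag = envelope[int(0.4 * fs):].mean()   # mean envelope late in the record

        print(f"start time ~ {start_time:.3f} s, steady-state magnitude ~ {steady_state_mag:.2f}")
        # Per-phase features such as (start_time, steady_state_mag) can then be compared
        # against nominal values to flag abnormal signals.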

  8. A Sensor Failure Simulator for Control System Reliability Studies

    NASA Technical Reports Server (NTRS)

    Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.

    1986-01-01

    A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM-PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real-time. A user defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.

  9. A sensor failure simulator for control system reliability studies

    NASA Astrophysics Data System (ADS)

    Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.

    A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM-PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real-time. A user defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.

  10. Architecture Specification for PAVE PILLAR Avionics

    DTIC Science & Technology

    1987-01-01

    PAVE PILLAR system is 99% fault detection. The percent fault detection is determined by the following computation: the number of verified failures de ...reconfiguration or reparameterization required to support manual operations rests with the Mission Supervisor. ... The Operational Flight Program (OFP) will be developed in accordance with the requirements of the Ada (ANSI/MIL-STD

  11. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
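
    A simplified sketch of the mechanism claimed above: identical chaotic (logistic) map trajectories are computed on several nodes from the same seed, and a divergence between the trajectories indicates that some computing component has corrupted the calculation. The map parameters, injected fault, and tolerance are illustrative, not taken from the patent.

        # Replicate the same chaotic (logistic) map trajectory on several "nodes" and compare:
        # chaotic maps amplify tiny errors exponentially, so even a single corrupted
        # floating-point operation quickly produces a large, detectable divergence.
        def logistic_trajectory(x0, r=3.99, steps=200, fault_at=None):
            x, traj = x0, []
            for i in range(steps):
                x = r * x * (1.0 - x)
                if fault_at is not None and i == fault_at:
                    x += 1e-12          # injected fault: a bit-flip-sized perturbation (illustrative)
                traj.append(x)
            return traj

        reference = logistic_trajectory(0.123456789)
        healthy   = logistic_trajectory(0.123456789)
        faulty    = logistic_trajectory(0.123456789, fault_at=50)

        tol = 1e-6                       # divergence tolerance (assumed)
        def diverged(a, b):
            return next((i for i, (u, v) in enumerate(zip(a, b)) if abs(u - v) > tol), None)

        print("healthy node diverges at step:", diverged(reference, healthy))   # None
        print("faulty node diverges at step:", diverged(reference, faulty))     # shortly after 50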

  12. Failure detection system design methodology. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.

    1980-01-01

    The design of a failure detection and identification system consists of designing a robust residual generation process and a high performance decision making process. The design of each of these two processes is examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
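
    The residual-generation idea based on analytical redundancy can be illustrated with the small numerical sketch below: for redundant measurements y = Cx + noise, any vector v with v^T C = 0 defines a parity relation r = v^T y that stays near zero when the sensors are healthy and grows when one fails. The measurement matrix, noise level, and bias size are invented for illustration; the decision stage is only hinted at in a comment.

        import numpy as np
        from scipy.linalg import null_space

        rng = np.random.default_rng(1)

        # Three redundant sensors measuring a single scalar quantity x: y = C x + noise.
        C = np.array([[1.0], [1.0], [1.0]])

        # Parity (residual) directions: rows of V span the left null space of C, so V @ y
        # is insensitive to the true value of x and reflects only noise and sensor faults.
        V = null_space(C.T).T                 # shape (2, 3)

        x_true = 5.0
        for label, bias in [("healthy", np.zeros(3)), ("sensor 2 failed", np.array([0, 2.0, 0]))]:
            y = (C @ np.array([x_true])) + 0.01 * rng.standard_normal(3) + bias
            r = V @ y                          # parity residual vector
            print(f"{label}: |r| = {np.linalg.norm(r):.3f}")
        # A decision rule (e.g., a sequential threshold test) would then declare a failure
        # when |r| persistently exceeds a level consistent with noise alone.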

  13. A Fuzzy Reasoning Design for Fault Detection and Diagnosis of a Computer-Controlled System

    PubMed Central

    Ting, Y.; Lu, W.B.; Chen, C.H.; Wang, G.K.

    2008-01-01

    A Fuzzy Reasoning and Verification Petri Nets (FRVPNs) model is established for an error detection and diagnosis mechanism (EDDM) applied to a complex fault-tolerant PC-controlled system. The inference accuracy can be improved through the hierarchical design of a two-level fuzzy rule decision tree (FRDT) and a Petri nets (PNs) technique to transform the fuzzy rules into the FRVPNs model. Several simulation examples of the assumed failure events were carried out using the FRVPNs and the Mamdani fuzzy method with MATLAB tools. The reasoning performance of the developed FRVPNs was verified by comparing the inference outcome to that of the Mamdani method. Both methods reach the same conclusions. Thus, the present study demonstrates that the proposed FRVPNs model is able to achieve the purpose of reasoning and, furthermore, of determining the failure event of the monitored application program. PMID:19255619

  14. Phased Array Probe Optimization for the Inspection of Titanium Billets

    NASA Astrophysics Data System (ADS)

    Rasselkorde, E.; Cooper, I.; Wallace, P.; Lupien, V.

    2010-02-01

    The manufacturing process of titanium billets can produce multiple sub-surface defects that are particularly difficult to detect during the early stages of production. Failure to detect these defects can lead to subsequent in-service failure. A new and novel automated quality control system is being developed for the inspection of titanium billets destined for use in aerospace applications. The sensors will be deployed by an automated system to minimise the use of manual inspections, which should improve the quality and reliability of these critical inspections early on in the manufacturing process. This paper presents the first part of the work, which is the design and the simulation of the phased array ultrasonic inspection of the billets. A series of phased array transducers were designed to optimise the ultrasonic inspection of a ten inch diameter billet made from Titanium 6Al-4V. A comparison was performed between different probes including a 2D annular sectorial array.

  15. Software-Implemented Fault Tolerance in Communications Systems

    NASA Technical Reports Server (NTRS)

    Gantenbein, Rex E.

    1994-01-01

    Software-implemented fault tolerance (SIFT) is used in many computer-based command, control, and communications (C(3)) systems to provide the nearly continuous availability that they require. In the communications subsystem of Space Station Alpha, SIFT algorithms are used to detect and recover from failures in the data and command link between the Station and its ground support. The paper presents a review of these algorithms and discusses how such techniques can be applied to similar systems found in applications such as manufacturing control, military communications, and programmable devices such as pacemakers. With support from the Tracking and Communication Division of NASA's Johnson Space Center, researchers at the University of Wyoming are developing a testbed for evaluating the effectiveness of these algorithms prior to their deployment. This testbed will be capable of simulating a variety of C(3) system failures and recording the response of the Space Station SIFT algorithms to these failures. The design of this testbed and the applicability of the approach in other environments is described.

  16. Insulation Coordination and Failure Mitigation Concerns for Robust DC Electrical Power Systems (Preprint)

    DTIC Science & Technology

    2014-05-01

    vulnerable to failure is air. This could be a discharge through an air medium or along an air/surface interface. Achieving robustness in dc power...sputtering” arcs) are discharges that are most commonly located in series with the intended load; the electrical impedance of the load limits the...particularly those used at voltages > 1000 V, is detection and measurement of partial-discharge (PD) activity. The presence of PD in a component typically

  17. Autonomous power management and distribution

    NASA Technical Reports Server (NTRS)

    Dolce, Jim; Kish, Jim

    1990-01-01

    The goal of the Autonomous Power System program is to develop and apply intelligent problem solving and control to the Space Station Freedom's electric power testbed being developed at NASA's Lewis Research Center. Objectives are to establish artificial intelligence technology paths, craft knowledge-based tools and products for power systems, and integrate knowledge-based and conventional controllers. This program represents a joint effort between the Space Station and Office of Aeronautics and Space Technology to develop and demonstrate space electric power automation technology capable of: (1) detection and classification of system operating status, (2) diagnosis of failure causes, and (3) cooperative problem solving for power scheduling and failure recovery. Program details, status, and plans will be presented.

  18. Old-and With Severe Heart Failure: Telemonitoring by Using Digital Pen Technology in Specialized Homecare: System Description, Implementation, and Early Results.

    PubMed

    Lind, Leili; Carlgren, Gunnar; Karlsson, Daniel

    2016-08-01

    Telehealth programs for heart failure have been studied using a variety of techniques. Because a majority of the elderly are currently nonusers of computers and the Internet, we developed a home telehealth system based on digital pen technology. Fourteen patients (mean age, 84 years [median, 83 years]) with severe heart failure participated in a 13-month pilot study in specialized homecare. Participants communicated patient-reported outcome measures daily using the digital pen and health diary forms, submitting a total of 3520 reports. The reports generated a total of 632 notifications when they indicated worsening health. Healthcare professionals reviewed reports frequently, more than 4700 times throughout the study, and acted on the information provided. Patients answered questionnaires and were observed in their home environment when using the system. Results showed that the technology was accepted by participants: patients experienced improved contact with clinicians, they felt more compliant with healthcare professionals' advice, and they felt more secure and more involved in their own care. Via the system, the healthcare professionals detected heart failure-related deteriorations at an earlier stage, and as a consequence, none of the patients were admitted into hospital care during the study.

  19. Prognostics for Microgrid Components

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav

    2012-01-01

    Prognostics is the science of predicting future performance and potential failures based on targeted condition monitoring. Moving away from the traditional reliability-centric view, prognostics aims at detecting and quantifying the time to impending failures. This advance warning provides the opportunity to take actions that can preserve uptime, reduce the cost of damage, or extend the life of the component. The talk will focus on the concepts and basics of prognostics from the viewpoint of condition-based systems health management. Differences from other techniques used in systems health management and philosophies of prognostics used in other domains will be shown. Examples relevant to microgrid systems and subsystems will be used to illustrate various types of prediction scenarios and the resources it takes to set up a desired prognostic system. Specifically, the implementation results for power storage and power semiconductor components will demonstrate specific solution approaches of prognostics. The role of constituent elements of prognostics, such as the model, prediction algorithms, failure threshold, run-to-failure data, requirements and specifications, and post-prognostic reasoning will be explained. A discussion on performance evaluation and performance metrics will conclude the technical discussion, followed by general comments on open research problems and challenges in prognostics.

  20. Integrated Neural Flight and Propulsion Control System

    NASA Technical Reports Server (NTRS)

    Kaneshige, John; Gundy-Burlet, Karen; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes an integrated neural flight and propulsion control system, which uses a neural network based approach for applying alternate sources of control power in the presence of damage or failures. Under normal operating conditions, the system utilizes conventional flight control surfaces. Neural networks are used to provide consistent handling qualities across flight conditions and for different aircraft configurations. Under damage or failure conditions, the system may utilize unconventional flight control surface allocations, along with integrated propulsion control, when additional control power is necessary for achieving desired flight control performance. In this case, neural networks are used to adapt to changes in aircraft dynamics and control allocation schemes. Of significant importance here is the fact that this system can operate without emergency or backup flight control mode operations. An additional advantage is that this system can utilize, but does not require, fault detection and isolation information or explicit parameter identification. Piloted simulation studies were performed on a commercial transport aircraft simulator. Subjects included both NASA test pilots and commercial airline crews. Results demonstrate the potential for improving handling qualities and significantly increasing survivability rates under various simulated failure conditions.

  1. Spiral-Bevel-Gear Damage Detected Using Decision Fusion Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Handschuh, Robert F.

    2003-01-01

    Helicopter transmission integrity is critical to helicopter safety because helicopters depend on the power train for propulsion, lift, and flight maneuvering. To detect impending transmission failures, the ideal diagnostic tools used in the health-monitoring system would provide real-time health monitoring of the transmission, demonstrate a high level of reliable detection to minimize false alarms, and provide end users with clear information on the health of the system without requiring them to interpret large amounts of sensor data. A diagnostic tool for detecting damage to spiral bevel gears was developed. (Spiral bevel gears are used in helicopter transmissions to transfer power between nonparallel intersecting shafts.) Data fusion was used to integrate two different monitoring technologies, oil debris analysis and vibration, into a health-monitoring system for detecting surface fatigue pitting damage on the gears.

  2. Fault Detection and Diagnosis of Railway Point Machines by Sound Analysis

    PubMed Central

    Lee, Jonguk; Choi, Heesu; Park, Daihee; Chung, Yongwha; Kim, Hee-Young; Yoon, Sukhan

    2016-01-01

    Railway point devices act as actuators that provide different routes to trains by driving switchblades from the current position to the opposite one. Point failure can significantly affect railway operations, with potentially disastrous consequences. Therefore, early detection of anomalies is critical for monitoring and managing the condition of rail infrastructure. We present a data mining solution that utilizes audio data to efficiently detect and diagnose faults in railway condition monitoring systems. The system enables extracting mel-frequency cepstrum coefficients (MFCCs) from audio data with reduced feature dimensions using attribute subset selection, and employs support vector machines (SVMs) for early detection and classification of anomalies. Experimental results show that the system enables cost-effective detection and diagnosis of faults using a cheap microphone, with accuracy exceeding 94.1% whether used alone or in combination with other known methods. PMID:27092509
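
    A rough sketch of the pipeline described (MFCC features from audio, then an SVM classifier). The use of librosa and scikit-learn, the number of coefficients, and the file names are assumptions for illustration, not the authors' exact configuration.

        import numpy as np
        import librosa
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        def mfcc_features(path, n_mfcc=13):
            # Load a recording of one point-machine operation and summarise its MFCCs.
            y, sr = librosa.load(path, sr=None)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])   # reduced feature vector

        # Hypothetical dataset: (wav file, label) pairs, label 0 = normal, 1 = faulty.
        dataset = [("normal_01.wav", 0), ("normal_02.wav", 0),
                   ("fault_01.wav", 1), ("fault_02.wav", 1)]
        X = np.array([mfcc_features(path) for path, _ in dataset])
        y = np.array([label for _, label in dataset])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))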

  3. Autonomous diagnostics and prognostics of signal and data distribution systems

    NASA Astrophysics Data System (ADS)

    Blemel, Kenneth G.

    2001-07-01

    Wiring is the nervous system of any complex system and is attached to or services nearly every subsystem. Damage to optical wiring systems can cause serious interruptions in communication, command and control systems. Electrical wiring faults and failures due to opens, shorts, and arcing can result in adverse effects on the systems serviced by the wiring. Abnormalities in a system usually can be detected by monitoring some wiring parameter such as vibration, data activity or power consumption. This paper introduces the mapping of wiring to critical functions during system engineering to automatically define the Failure Modes Effects and Criticality Analysis. This mapping can be used to define the sensory processes needed to perform diagnostics during system engineering. This paper also explains the use of Operational Modes and Criticality Effects Analysis in the development of Sentient Wiring Systems as a means for diagnostics, prognostics and health management of wiring in aerospace and transportation systems.

  4. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  5. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  6. Train integrity detection risk analysis based on PRISM

    NASA Astrophysics Data System (ADS)

    Wen, Yuan

    2018-04-01

    The GNSS-based Train Integrity Monitoring System (TIMS) is an effective and low-cost scheme for train integrity detection. However, as an external auxiliary system of CTCS, GNSS may be influenced by external environments, such as the uncertainty of wireless communication channels, which may lead to failures of communication and positioning. In order to guarantee the reliability and safety of train operation, a risk analysis method for train integrity detection based on PRISM is proposed in this article. First, we analyze the risk factors (in the GNSS communication process and the on-board communication process) and model them. Then, we evaluate the performance of the model in PRISM based on field data. Finally, we discuss how these risk factors influence the train integrity detection process.

  7. 40 CFR 265.1053 - Standards: Compressors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  8. 40 CFR 264.1053 - Standards: Compressors.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  9. 40 CFR 264.1053 - Standards: Compressors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  10. 40 CFR 265.1053 - Standards: Compressors.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  11. 40 CFR 264.1053 - Standards: Compressors.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  12. 40 CFR 264.1053 - Standards: Compressors.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  13. 40 CFR 265.1053 - Standards: Compressors.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  14. 40 CFR 265.1053 - Standards: Compressors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  15. 40 CFR 264.1053 - Standards: Compressors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  16. 40 CFR 265.1053 - Standards: Compressors.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... equipped with a sensor that will detect failure of the seal system, barrier fluid system, or both. (e)(1) Each sensor as required in paragraph (d) of this section shall be checked daily or shall be equipped... compressor is located within the boundary of an unmanned plant site, in which case the sensor must be checked...

  17. Failure to detect seasonal changes in the song system nuclei of the black-capped chickadee (Poecile atricapillus).

    PubMed

    Smulders, T V; Lisi, M D; Tricomi, E; Otter, K A; Chruszcz, B; Ratcliffe, L M; DeVoogd, T J

    2006-08-01

    Most temperate songbird species sing seasonally, and the brain areas involved in producing song (the song system) vary in size alongside the changes in behavior. Black-capped chickadees (Poecile atricapillus) also sing seasonally, and we find that there are changes in the stereotypy and the length of the fee-bee song from the nonbreeding to the breeding season. Yet despite these changes, we fail to find any evidence of seasonal changes in the song system. The song system of males is larger than that of females, as is typical in songbirds, but the ratio between the sexes is small compared to other species. We suggest three hypotheses to explain our failure to find seasonal variation in the chickadee song system.

  18. JPRS Report Science & Technology Europe.

    DTIC Science & Technology

    1992-10-22

    Potatoes for More Sugar [Frankfurt/Main FRANKFURTER ALLGEMEINE, 12 Aug 92] 26 COMPUTERS French Devise Operating System for Parallel, Failure...Tolerant and Real-Time Systems [Munich COMPUTER WOCHE, 5 Jun 92] 27 Germany Markets External Mass Memory for IBM-Compatible Parallel Interfaces...Infrared Detection System [Thierry Lucas; Paris L’USINE NOUVELLE TECHNOLOGIES, 16 Jul 92] 28 Streamlined ACE Fighter Airplane Approved [Paris AFP

  19. Towards Compensation Correctness in Interactive Systems

    NASA Astrophysics Data System (ADS)

    Vaz, Cátia; Ferreira, Carla

    One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long-running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long-running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compositions of compensable processes, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, which can detect and recover from failures, as correct compositions of compensable processes.

  20. Aliasing Signal Separation of Superimposed Abrasive Debris Based on Degenerate Unmixing Estimation Technique

    PubMed Central

    Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei

    2018-01-01

    Leakage, caused by wear and tear between friction pairs of components, is the most important failure mode in aircraft hydraulic systems. The accurate detection of abrasive debris can reveal the wear condition and predict a system’s lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables a more accurate diagnosis and prognosis of the aviation hydraulic system’s ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on separating the superimposed abrasive debris signals of an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating and characterizing the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system with the wear trend and size estimates of the wear particles. A well-designed experiment was conducted, and the results show that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection. PMID:29543733

  1. Reconfigurable Control Design for the Full X-33 Flight Envelope

    NASA Technical Reports Server (NTRS)

    Cotting, M. Christopher; Burken, John J.

    2001-01-01

    A reconfigurable control law for the full X-33 flight envelope has been designed to accommodate a failed control surface and redistribute the control effort among the remaining working surfaces to retain satisfactory stability and performance. An offline nonlinear constrained optimization approach has been used for the X-33 reconfigurable control design method. Using a nonlinear, six-degree-of-freedom simulation, three example failures are evaluated: ascent with a left body flap jammed at maximum deflection; entry with a right inboard elevon jammed at maximum deflection; and landing with a left rudder jammed at maximum deflection. Failure detection and identification are accomplished in the actuator controller. Failure response comparisons between the nominal control mixer and the reconfigurable control subsystem (mixer) show the benefits of reconfiguration. Single aerosurface jamming failures are considered. The cases evaluated are representative of the study conducted to prove the adequate and safe performance of the reconfigurable control mixer throughout the full flight envelope. The X-33 flight control system incorporates reconfigurable flight control in the existing baseline system.

  2. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

    DOEpatents

    Gross, K.C.

    1994-07-26

    Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. Failures are catalogued and characterized after the event so that samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as "background" gases, further reducing the number of trial node combinations. Lastly, a "fuzzy" set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements. 14 figs.

  3. Investigation of Spiral Bevel Gear Condition Indicator Validation Via AC-29-2C Using Damage Progression Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2014-01-01

    This report documents the results of spiral bevel gear rig tests performed under a NASA Space Act Agreement with the Federal Aviation Administration (FAA) to support validation and demonstration of rotorcraft Health and Usage Monitoring Systems (HUMS) for maintenance credits via FAA Advisory Circular (AC) 29-2C, Section MG-15, Airworthiness Approval of Rotorcraft (HUMS) (Ref. 1). The overarching goal of this work was to determine a method to validate condition indicators in the lab that better represent their response to faults in the field. Using existing in-service helicopter HUMS flight data from faulted spiral bevel gears as a "Case Study," to better understand the differences between both systems, and the availability of the NASA Glenn Spiral Bevel Gear Fatigue Rig, a plan was put in place to design, fabricate and test comparable gear sets with comparable failure modes within the constraints of the test rig. The research objectives of the rig tests were to evaluate the capability of detecting gear surface pitting fatigue and other generated failure modes on spiral bevel gear teeth using gear condition indicators currently used in fielded HUMS. Nineteen final design gear sets were tested. Tables were generated for each test, summarizing the failure modes observed on the gear teeth for each test during each inspection interval and color coded based on damage mode per inspection photos. Gear condition indicators (CI) Figure of Merit 4 (FM4), Root Mean Square (RMS), +/- 1 Sideband Index (SI1) and +/- 3 Sideband Index (SI3) were plotted along with rig operational parameters. Statistical tables of the means and standard deviations were calculated within inspection intervals for each CI. As testing progressed, it became clear that certain condition indicators were more sensitive to a specific component and failure mode. These tests were clustered together for further analysis. Maintenance actions during testing were also documented. Correlation coefficients were calculated between each CI, component, damage state and torque. Results found that test rig and gear design, type of fault and data acquisition can affect CI performance. Results also found that FM4, SI1 and SI3 can be used to detect macro pitting on two or more gear or pinion teeth as long as it is detected prior to progressing to other components or transitioning to another failure mode. The sensitivity of RMS to system and operational conditions limits its reliability for systems that are not maintained at steady state. Failure modes that occurred due to scuffing or fretting were challenging to detect with current gear diagnostic tools, since the damage is distributed across all the gear and pinion teeth, smearing the impacting signatures typically used to differentiate between a healthy and damaged tooth contact. This is one of three final reports published on the results of this project. In the second report, damage modes experienced in the field will be mapped to the failure modes created in the test rig. The helicopter CI data will then be re-processed with the same analysis techniques applied to spiral bevel rig test data. In the third report, results from the rig and helicopter data analysis will be correlated. Observations, findings and lessons learned using sub-scale rig failure progression tests to validate helicopter gear condition indicators will be presented.

  4. Incipient failure detection of space shuttle main engine turbopump bearings using vibration envelope detection

    NASA Technical Reports Server (NTRS)

    Hopson, Charles B.

    1987-01-01

    The results of an analysis performed on seven successive Space Shuttle Main Engine (SSME) static test firings, utilizing envelope detection of external accelerometer data are discussed. The results clearly show the great potential for using envelope detection techniques in SSME incipient failure detection.
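
    Classic envelope detection for bearing signatures (band-pass around an excited resonance, take the envelope, then inspect the envelope spectrum) can be sketched as below. The band edges, sampling rate, and synthetic defect frequency are illustrative assumptions rather than SSME-specific values.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 50_000                                       # accelerometer sampling rate [Hz] (assumed)
        t = np.arange(0, 1.0, 1.0 / fs)

        # Synthetic bearing-fault signal: a 120 Hz impact train exciting a 6 kHz resonance, plus noise.
        impacts = (np.sin(2 * np.pi * 120 * t) > 0.999).astype(float)
        ringdown = np.exp(-t[:500] * 2000) * np.sin(2 * np.pi * 6000 * t[:500])
        x = np.convolve(impacts, ringdown, mode="same") + 0.05 * np.random.randn(t.size)

        # 1) Band-pass around the structural resonance excited by the impacts.
        b, a = butter(4, [4000, 8000], btype="bandpass", fs=fs)
        xb = filtfilt(b, a, x)

        # 2) Envelope via Hilbert transform, then its spectrum.
        env = np.abs(hilbert(xb))
        spectrum = np.abs(np.fft.rfft(env - env.mean()))
        freqs = np.fft.rfftfreq(env.size, 1.0 / fs)

        # Expect a peak at (or at a harmonic of) the ~120 Hz defect repetition rate.
        print("dominant envelope frequency:", freqs[np.argmax(spectrum)], "Hz")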

  5. Device for detecting imminent failure of high-dielectric stress capacitors. [Patent application

    DOEpatents

    McDuff, G.G.

    1980-11-05

    A device is described for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations of digital data supplied by detection circuitry for comparison of pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry or any circuitry where capacitors or capacitor banks are utilized.
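
    The comparison logic described (digitised pulse width and magnitude checked against preselected ranges) is simple enough to sketch directly; the range limits and the pulse records below are hypothetical, not values from the patent.

        # Flag imminent capacitor failure when measured pulse width or magnitude drifts
        # outside preselected ranges (all values below are hypothetical).
        WIDTH_RANGE = (9.5e-6, 10.5e-6)      # acceptable pulse width [s]
        MAGNITUDE_RANGE = (4.8e3, 5.2e3)     # acceptable pulse magnitude [V]

        def imminent_failure(pulse_width, pulse_magnitude):
            width_ok = WIDTH_RANGE[0] <= pulse_width <= WIDTH_RANGE[1]
            magnitude_ok = MAGNITUDE_RANGE[0] <= pulse_magnitude <= MAGNITUDE_RANGE[1]
            return not (width_ok and magnitude_ok)

        # Recent digitised pulses from the detection circuitry (width [s], magnitude [V]).
        pulses = [(10.0e-6, 5.00e3), (10.1e-6, 4.95e3), (11.2e-6, 4.70e3)]
        for width, magnitude in pulses:
            status = "imminent failure" if imminent_failure(width, magnitude) else "ok"
            print(width, magnitude, "->", status)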

  6. Device for detecting imminent failure of high-dielectric stress capacitors

    DOEpatents

    McDuff, George G.

    1982-01-01

    A device for detecting imminent failure of a high-dielectric stress capacitor utilizing circuitry for detecting pulse width variations and pulse magnitude variations. Inexpensive microprocessor circuitry is utilized to make numerical calculations of digital data supplied by detection circuitry for comparison of pulse width data and magnitude data to determine if preselected ranges have been exceeded, thereby indicating imminent failure of a capacitor. Detection circuitry may be incorporated in transmission lines, pulse power circuitry, including laser pulse circuitry, or any circuitry where capacitors or capacitor banks are utilized.

  7. On the use of temperature for online condition monitoring of geared systems - A review

    NASA Astrophysics Data System (ADS)

    Touret, T.; Changenet, C.; Ville, F.; Lalmi, M.; Becquerelle, S.

    2018-02-01

    Gear unit condition monitoring is a key factor for mechanical system reliability management. When subjected to failure, gears and bearings may generate excessive vibration, debris and heat. Vibratory, acoustic and debris analyses are proven approaches to condition monitoring. An alternative to those methods is to use temperature as a condition indicator to detect gearbox failure. This review focuses on condition monitoring studies which use this thermal approach. Depending on the failure type and the measurement method, a distinction is made between contact (e.g. thermocouple) and non-contact (e.g. thermography) temperature sensing. Capabilities and limitations of this approach are discussed. It is shown that the use of temperature for condition monitoring has clear potential as an alternative to vibratory or acoustic health monitoring.

  8. An industrial information integration approach to in-orbit spacecraft

    NASA Astrophysics Data System (ADS)

    Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng

    2017-01-01

    To operate an in-orbit spacecraft, the spacecraft status has to be monitored autonomously by collecting and analysing real-time data and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft state detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on the semantic web, agents, scenarios and an ontologies model. In modelling, the subjects of the astronautics fields are classified, corresponding agents and scenarios are defined, and they are connected by the semantic web to analyse data and detect failures. We introduce the modelling methodologies and the resulting framework of the status detection information system in this paper. We discuss system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the system development of other large-scale and complex information systems.

  9. Gas centrifuge enrichment plants inspection frequency and remote monitoring issues for advanced safeguards implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyer, Brian David; Erpenbeck, Heather H; Miller, Karen A

    2010-09-13

    Current safeguards approaches used by the IAEA at gas centrifuge enrichment plants (GCEPs) need enhancement in order to verify declared low enriched uranium (LEU) production, detect undeclared LEU production and detect high enriched uranium (HEU) production with adequate probability using non-destructive assay (NDA) techniques. At present, inspectors use attended systems, i.e., systems requiring the presence of an inspector for operation, during inspections to verify the mass and ²³⁵U enrichment of declared cylinders of uranium hexafluoride that are used in the process of enrichment at GCEPs. This paper contains an analysis of how possible improvements in unattended and attended NDA systems, including process monitoring and possible on-site destructive analysis (DA) of samples, could reduce the uncertainty of the inspector's measurements, providing more effective and efficient IAEA GCEP safeguards. We have also studied a few advanced safeguards systems that could be assembled for unattended operation and the level of performance needed from these systems to provide more effective safeguards. The analysis also considers how short-notice random inspections, unannounced inspections (UIs), and the concept of information-driven inspections can affect the probability of detecting the diversion of nuclear material when coupled to new GCEP safeguards regimes augmented with unattended systems. We also explore the effects of system failures and operator tampering on meeting safeguards goals for quantity and timeliness, and the measures needed to recover from such failures and anomalies.

  10. System for detecting operating errors in a variable valve timing engine using pressure sensors

    DOEpatents

    Wiles, Matthew A.; Marriot, Craig D

    2013-07-02

    A method and control module includes a pressure sensor data comparison module that compares measured pressure volume signal segments to ideal pressure volume segments. A valve actuation hardware remedy module performs a hardware remedy in response to comparing the measured pressure volume signal segments to the ideal pressure volume segments when a valve actuation hardware failure is detected.
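
    A hedged sketch of the comparison step described: measured pressure-volume segments are compared point-wise against ideal segments, and a hardware remedy is requested when the deviation exceeds a tolerance. The tolerance and the segment data are invented for illustration and are not from the patent.

        import numpy as np

        TOLERANCE = 0.05   # maximum allowed relative deviation between measured and ideal segments (assumed)

        def valve_hardware_failure(measured_segment, ideal_segment, tol=TOLERANCE):
            measured = np.asarray(measured_segment, dtype=float)
            ideal = np.asarray(ideal_segment, dtype=float)
            deviation = np.max(np.abs(measured - ideal) / np.maximum(np.abs(ideal), 1e-9))
            return deviation > tol

        # Hypothetical pressure-volume signal segments (e.g., cylinder pressure samples over a crank window).
        ideal    = [101.0, 240.0, 610.0, 980.0, 620.0, 250.0]
        measured = [100.0, 238.0, 450.0, 700.0, 455.0, 240.0]   # late/partial valve event lowers the peak

        if valve_hardware_failure(measured, ideal):
            print("valve actuation hardware failure detected -> apply hardware remedy")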

  11. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
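
    Dempster's rule of combination, used above to merge evidence from parallel diagnostic models, can be stated concretely with the small sketch below; the frame of discernment and the basic probability assignments are invented examples, not values from the paper.

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two basic probability assignments defined over frozenset focal elements."""
            combined, conflict = {}, 0.0
            for (a, w1), (b, w2) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2                      # mass assigned to contradictory evidence
            return {k: v / (1.0 - conflict) for k, v in combined.items()}   # normalise away conflict

        # Frame of discernment: possible failed components (hypothetical).
        ACT, SEN = frozenset({"actuator"}), frozenset({"sensor"})
        BOTH = frozenset({"actuator", "sensor"})

        m_model1 = {ACT: 0.6, BOTH: 0.4}                 # evidence from diagnostic model 1
        m_model2 = {ACT: 0.5, SEN: 0.2, BOTH: 0.3}       # evidence from diagnostic model 2

        print(dempster_combine(m_model1, m_model2))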

  12. Remote monitoring to Improve long-term prognosis in heart failure patients with implantable cardioverter-defibrillators.

    PubMed

    Ono, Maki; Varma, Niraj

    2017-05-01

    Strong evidence exists for the utility of remote monitoring in cardiac implantable electronic devices for early detection of arrhythmias and evaluation of system performance. The application of remote monitoring to the management of chronic diseases such as heart failure has been an active area of research. Areas covered: This review aims to cover the latest evidence on remote monitoring of implantable cardiac defibrillators in terms of heart failure prognosis. This article also reviews the current technology relating to the method and discusses key factors to be addressed in order to better use the approach. PubMed and internet searches were conducted to acquire the most recent data and technology information. Expert commentary: Multiparameter monitoring with automatic transmission is useful for heart failure management. Improved adherence to remote monitoring and an optimal algorithm for transmitted alerts and their management are warranted in the management of heart failure.

  13. Heart-rate variability depression in porcine peritonitis-induced sepsis without organ failure.

    PubMed

    Jarkovska, Dagmar; Valesova, Lenka; Chvojka, Jiri; Benes, Jan; Danihel, Vojtech; Sviglerova, Jitka; Nalos, Lukas; Matejovic, Martin; Stengl, Milan

    2017-05-01

    Depression of heart-rate variability (HRV) in conditions of systemic inflammation has been shown in both patients and experimental animal models and HRV has been suggested as an early indicator of sepsis. The sensitivity of HRV-derived parameters to the severity of sepsis, however, remains unclear. In this study we modified the clinically relevant porcine model of peritonitis-induced sepsis in order to avoid the development of organ failure and to test the sensitivity of HRV to such non-severe conditions. In 11 anesthetized, mechanically ventilated and instrumented domestic pigs of both sexes, sepsis was induced by fecal peritonitis. The dose of feces was adjusted and antibiotic therapy was administered to avoid multiorgan failure. Experimental subjects were screened for 40 h from the induction of sepsis. In all septic animals, sepsis with hyperdynamic circulation and increased plasma levels of inflammatory mediators developed within 12 h from the induction of peritonitis. The sepsis did not progress to multiorgan failure and there was no spontaneous death during the experiment despite a modest requirement for vasopressor therapy in most animals (9/11). A pronounced reduction of HRV and elevation of heart rate developed quickly (within 5 h, time constant of 1.97 ± 0.80 h for HRV parameter TINN) upon the induction of sepsis and were maintained throughout the experiment. The frequency domain analysis revealed a decrease in the high-frequency component. The reduction of HRV parameters and elevation of heart rate preceded sepsis-associated hemodynamic changes by several hours (time constant of 11.28 ± 2.07 h for systemic vascular resistance decline). A pronounced and fast reduction of HRV occurred in the setting of a moderate experimental porcine sepsis without organ failure. Inhibition of parasympathetic cardiac signaling probably represents the main mechanism of HRV reduction in sepsis. The sensitivity of HRV to systemic inflammation may allow early detection of a moderate sepsis without organ failure. Impact statement A pronounced and fast reduction of heart-rate variability occurred in the setting of a moderate experimental porcine sepsis without organ failure. Dominant reduction of heart-rate variability was found in the high-frequency band indicating inhibition of parasympathetic cardiac signaling as the main mechanism of heart-rate variability reduction. The sensitivity of heart-rate variability to systemic inflammation may contribute to an early detection of moderate sepsis without organ failure.
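
    For readers unfamiliar with HRV analysis, the sketch below computes two generic indices from a series of RR intervals: the time-domain SDNN and the high-frequency (0.15-0.4 Hz) spectral power obtained with Welch's method. These are stand-ins for the study's parameters (TINN, for instance, is a different, histogram-based index), and the synthetic RR series are purely illustrative.

        # Illustrative HRV indices from a series of RR intervals (in seconds):
        # the time-domain SDNN and the high-frequency (0.15-0.4 Hz) spectral power
        # from Welch's method. TINN, used in the study, is a different
        # (histogram-based) time-domain index.
        import numpy as np
        from scipy.interpolate import interp1d
        from scipy.signal import welch

        def hrv_indices(rr, fs_resample=4.0):
            rr = np.asarray(rr, dtype=float)
            sdnn = rr.std(ddof=1)                      # overall variability
            t = np.cumsum(rr)                          # beat times
            # Resample the irregular RR series onto a uniform grid for the spectrum.
            grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
            rr_uniform = interp1d(t, rr)(grid)
            f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs_resample, nperseg=256)
            band = (f >= 0.15) & (f <= 0.4)
            hf_power = float(np.sum(pxx[band]) * (f[1] - f[0]))
            return sdnn, hf_power

        rng = np.random.default_rng(0)
        rr_healthy = 0.8 + 0.05 * rng.standard_normal(600)   # variable rhythm
        rr_septic = 0.6 + 0.005 * rng.standard_normal(600)   # fast rate, depressed HRV
        print("healthy (SDNN, HF power):", hrv_indices(rr_healthy))
        print("septic  (SDNN, HF power):", hrv_indices(rr_septic))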

  14. Flight experience with a fail-operational digital fly-by-wire control system

    NASA Technical Reports Server (NTRS)

    Brown, S. R.; Szalai, K. J.

    1977-01-01

    The NASA Dryden Flight Research Center is flight testing a triply redundant digital fly-by-wire (DFBW) control system installed in an F-8 aircraft. The full-time, full-authority system performs three-axis flight control computations, including stability and command augmentation, autopilot functions, failure detection and isolation, and self-test functions. Advanced control law experiments include an active flap mode for ride smoothing and maneuver drag reduction. This paper discusses research being conducted on computer synchronization, fault detection, fault isolation, and recovery from transient faults. The F-8 DFBW system has demonstrated immunity from nuisance fault declarations while quickly identifying truly faulty components.

  15. Health management and controls for earth to orbit propulsion systems

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.

    1992-01-01

    Fault detection and isolation for advanced rocket engine controllers are discussed focusing on advanced sensing systems and software which significantly improve component failure detection for engine safety and health management. Aerojet's Space Transportation Main Engine controller for the National Launch System is the state of the art in fault tolerant engine avionics. Health management systems provide high levels of automated fault coverage and significantly improve vehicle delivered reliability and lower preflight operations costs. Key technologies, including the sensor data validation algorithms and flight capable spectrometers, have been demonstrated in ground applications and are found to be suitable for bridging programs into flight applications.

  16. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    NASA Technical Reports Server (NTRS)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and likelihood of false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
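
    A highly simplified sketch of the redundancy-management idea described here is given below: channels that disagree with the consensus by more than a threshold are disqualified, and the mid-value of the remaining healthy channels is down-selected. The threshold, channel count and persistence logic are invented for the example and are not the SLS flight algorithms or values.

        # Illustrative redundancy management for co-located angular-rate channels:
        # disqualify channels that disagree with the median of the healthy set,
        # then down-select the mid-value of the remaining channels. The threshold
        # and channel count are invented, not SLS flight values.
        import numpy as np

        def sdq_step(rates, healthy, threshold=0.5):
            """rates: measurements (deg/s); healthy: boolean mask, updated in place."""
            ref = np.median(rates[healthy])
            for i in np.flatnonzero(healthy):
                if abs(rates[i] - ref) > threshold:
                    healthy[i] = False                 # disqualify the faulted channel
            return np.median(rates[healthy]), healthy  # mid-value down-selection

        rates = np.array([1.02, 0.98, 1.01, 3.50])   # channel 3: hard-over failure
        healthy = np.ones(4, dtype=bool)
        selected, healthy = sdq_step(rates, healthy)
        print(selected, healthy)                     # 1.01 [ True  True  True False]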

  17. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deteriorating mechanisms that degrade energy pipeline integrity, as pipelines transfer corrosive fluids or gases and interact with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have the ability to consider defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well and that a strong correlation between defect depth and length exists. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (called a sub-system), where each sub-system is treated as a series system of the detected and newly generated defects within that sub-system. A sensitivity analysis is also performed to determine the parameters in the growth models to which the reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for prediction of the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspection, repair, and failure. A repair is conducted when the failure probability from any of the described failure modes exceeds a pre-defined probability threshold after an inspection. Moreover, this study also investigates the impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant compared to the inspection and failure costs.
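
    To make the power-law growth idea concrete, the sketch below runs a crude Monte Carlo projection of defect depth d(t) = a(t - t0)^b and estimates the probability of exceeding a depth-based limit state. The parameter distributions, wall thickness and limit are invented; the study instead calibrates the model by Bayesian updating of ILI data and evaluates three distinct failure modes.

        # Simplified Monte Carlo sketch: project external-corrosion defect depth
        # with a power-law growth model d(t) = a * (t - t0)^b and estimate the
        # probability that depth exceeds 80% of wall thickness (a small-leak limit
        # state). Parameter distributions are invented, not the ILI-calibrated
        # values of the study.
        import numpy as np

        rng = np.random.default_rng(42)
        n_samples = 100_000
        wall_thickness = 10.0        # mm
        t0, t = 0.0, 25.0            # defect initiation and evaluation time (years)

        a = rng.lognormal(mean=np.log(0.4), sigma=0.3, size=n_samples)  # mm / yr^b
        b = rng.normal(loc=0.9, scale=0.1, size=n_samples)              # growth exponent

        depth = a * (t - t0) ** b
        p_small_leak = np.mean(depth > 0.8 * wall_thickness)
        print(f"P(small leak by year {t:.0f}) = {p_small_leak:.4f}")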

  18. Incipient Crack Detection in Composite Wind Turbine Blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Stuart G.; Choi, Mijin; Jeong, Hyomi

    2012-08-28

    This paper presents analysis results for incipient crack detection in a 9-meter CX-100 wind turbine blade that underwent fatigue loading to failure. The blade was manufactured to standard specifications, and it underwent harmonic excitation at its first resonance using a hydraulically-actuated excitation system until reaching catastrophic failure. This work investigates the ability of an ultrasonic guided wave approach to detect incipient damage prior to the surfacing of a visible, catastrophic crack. The blade was instrumented with piezoelectric transducers, which were used in an active, pitch-catch mode with guided waves over a range of excitation frequencies. The performance in detecting incipient crack formation in the fiberglass skin of the blade is assessed over the range of frequencies in order to determine the point at which the incipient crack became detectable. Higher excitation frequencies provide consistent results for paths along the rotor blade's carbon fiber spar cap, but performance falls off with increasing excitation frequencies for paths off of the spar cap.

  19. Dynamic modelling and estimation of the error due to asynchronism in a redundant asynchronous multiprocessor system

    NASA Technical Reports Server (NTRS)

    Huynh, Loc C.; Duval, R. W.

    1986-01-01

    The use of a Redundant Asynchronous Multiprocessor System to achieve ultrareliable Fault Tolerant Control Systems shows great promise. The development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs caused by differences in their clock intervals and uses this model with on-line parameter identification to identify the differences in the clock intervals. The ability of this methodology to accurately track errors due to asynchronicity makes it possible to generate an error signal with the effect of asynchronicity removed; this signal may be used to detect and isolate actual system failures.
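
    A toy version of the idea, sketched below under invented numbers, models the output difference between two redundant processors as a skew growing linearly with elapsed steps, estimates that skew with recursive least squares, and raises an alarm only when the residual with the skew removed becomes large.

        # Illustrative sketch: estimate the clock-interval mismatch (skew) between
        # two redundant processors with recursive least squares, then monitor the
        # output difference with the predicted skew removed so that an actual
        # failure stands out. All numbers are invented.
        import numpy as np

        rng = np.random.default_rng(1)
        n_steps, true_skew = 500, 2e-4          # skew: difference growth per step
        diff = true_skew * np.arange(n_steps) + 1e-4 * rng.standard_normal(n_steps)
        diff[400:] += 0.05                      # injected hard failure at step 400

        theta, P = 0.0, 1e6                     # RLS skew estimate and its variance
        for k in range(n_steps):
            phi = float(k)                      # regressor: elapsed steps
            residual = diff[k] - phi * theta    # difference with skew removed
            if abs(residual) > 0.01:            # failure alarm threshold
                print(f"failure suspected at step {k}")
                break
            gain = P * phi / (1.0 + phi * P * phi)
            theta += gain * residual
            P *= 1.0 - gain * phi
        print(f"estimated skew = {theta:.2e} (true value {true_skew:.2e})")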

  20. 46 CFR 161.002-10 - Automatic fire detecting system control unit.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... that part of the supply circuit on the load side of the battery transfer switch and fuses. On a system supplied by a branch circuit the “normal source” shall be construed to mean the load side of any... fire alarm shall be electrically supervised. (d) Power failure alarms—(1) Loss of potential. The loss...

  1. Inferring Gear Damage from Oil-Debris and Vibration Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula

    2006-01-01

    A system for real-time detection of surface-fatigue-pitting damage to gears for use in a helicopter transmission is based on fuzzy logic, which is used to fuse data from sensors that measure oil-borne debris, referred to as "oil debris" in the article, and vibration signatures. A system to detect helicopter-transmission gear damage is beneficial because the power train of a helicopter is essential for propulsion, lift, and maneuvering; hence, the integrity of the transmission is critical to helicopter safety. To enable detection of an impending transmission failure, an ideal diagnostic system should provide real-time monitoring of the "health" of the transmission, be capable of a high level of reliable detection (with minimization of false alarms), and provide human users with clear information on the health of the system without making it necessary for them to interpret large amounts of sensor data.
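
    As a hedged illustration of the fusion concept (not the article's actual fuzzy rule base), the sketch below maps an oil-debris mass and a vibration feature to membership in a "damaged" class and blends them into a single damage index; all breakpoints and the alert threshold are invented.

        # Illustrative fuzzy-logic fusion of an oil-debris measurement and a
        # vibration feature into a single gear-damage indicator. Membership
        # breakpoints and the alert threshold are invented for the example.
        import numpy as np

        def ramp_membership(x, low, high):
            """Degree of membership in 'damaged', rising linearly from low to high."""
            return float(np.clip((x - low) / (high - low), 0.0, 1.0))

        def gear_damage_index(debris_mg, vib_feature):
            mu_debris = ramp_membership(debris_mg, low=20.0, high=120.0)  # debris mass
            mu_vib = ramp_membership(vib_feature, low=1.5, high=4.0)      # vibration metric
            # Fuzzy fusion: require support from both channels (AND = min),
            # boosted when either channel is strongly confident (OR = max).
            return 0.7 * min(mu_debris, mu_vib) + 0.3 * max(mu_debris, mu_vib)

        for debris, vib in [(10.0, 1.6), (60.0, 2.8), (150.0, 3.9)]:
            idx = gear_damage_index(debris, vib)
            print(f"debris={debris:6.1f} mg  vib={vib:.2f}  damage index={idx:.2f}"
                  + ("  -> ALERT" if idx > 0.6 else ""))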

  2. The Effect of SnCl2/AmF Pretreatment on Short- and Long-Term Bond Strength to Eroded Dentin

    PubMed Central

    Zumstein, Katrin; Peutzfeldt, Anne; Lussi, Adrian

    2018-01-01

    This study investigated the effect of SnCl2/AmF pretreatment on short- and long-term bond strength of resin composite to eroded dentin mediated by two self-etch, MDP-containing adhesive systems. 184 dentin specimens were produced from extracted human molars. Half the specimens (n = 92) were artificially eroded, and half were left untreated. For both substrates, half the specimens were pretreated with SnCl2/AmF, and half were left untreated. The specimens were treated with Clearfil SE Bond or Scotchbond Universal prior to application of resin composite. Microtensile bond strength (μTBS) was measured after 24 h or 1 year. Failure mode was detected and EDX was performed. μTBS results were statistically analyzed (α = 0.05). μTBS was significantly influenced by the dentin substrate (eroded < noneroded dentin) and storage time (24 h > 1 year; p < 0.0001) but not by pretreatment with SnCl2/AmF or adhesive system. The predominant failure mode was adhesive failure at the dentin-adhesive interface. The content of Sn was generally below detection limit. Pretreatment with SnCl2/AmF did not influence short- and long-term bond strength to eroded dentin. Bond strength was reduced after storage for one year, was lower to eroded dentin than to noneroded dentin, and was similar for the two adhesive systems.

  3. Fault tolerant system with imperfect coverage, reboot and server vacation

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Meena, Rakesh Kumar

    2017-06-01

    This study is concerned with the performance modeling of a fault-tolerant system consisting of operating units supported by a combination of warm and cold spares. The on-line as well as the warm standby units are subject to failures and are sent for repair to a repair facility with a single repairman, which is itself prone to failure. If the failed unit is not detected, the system enters an unsafe state from which it is cleared by a reboot and recovery action. The server is allowed to go on vacation if there is no failed unit present in the system. A Markov model is developed to obtain the transient probabilities associated with the system states. The Runge-Kutta method is used to evaluate the system state probabilities and queueing measures. To explore the sensitivity and cost associated with the system, a numerical simulation is conducted.
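
    A minimal sketch of the transient analysis described here follows: the Chapman-Kolmogorov equations dp/dt = pQ for a small three-state machine-repair model are integrated with a Runge-Kutta solver. The generator matrix, rates and coverage factor are illustrative, not those of the paper's model.

        # Minimal transient analysis of a small fault-tolerant-system Markov model:
        # states are (operating, unsafe / awaiting reboot, under repair). The
        # generator matrix Q and its rates are invented; dp/dt = p Q is integrated
        # with an explicit Runge-Kutta scheme (scipy's RK45).
        import numpy as np
        from scipy.integrate import solve_ivp

        lam, c = 0.01, 0.9          # failure rate, detection coverage
        beta, mu = 0.5, 0.2         # reboot/recovery rate, repair rate

        Q = np.array([
            [-lam, lam * (1 - c), lam * c],   # operating
            [beta, -beta,         0.0    ],   # unsafe, cleared by reboot/recovery
            [mu,   0.0,           -mu    ],   # under repair
        ])

        def kolmogorov(t, p):
            return p @ Q

        p0 = np.array([1.0, 0.0, 0.0])
        sol = solve_ivp(kolmogorov, (0.0, 100.0), p0, method="RK45",
                        t_eval=np.linspace(0.0, 100.0, 5))
        for t, p in zip(sol.t, sol.y.T):
            print(f"t={t:5.1f}  P(operating)={p[0]:.4f}  "
                  f"P(unsafe)={p[1]:.4f}  P(repair)={p[2]:.4f}")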

  4. Real-Time Diagnosis of Faults Using a Bank of Kalman Filters

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2006-01-01

    A new robust method of automated real-time diagnosis of faults in an aircraft engine or a similar complex system involves the use of a bank of Kalman filters. In order to be highly reliable, a diagnostic system must be designed to account for the numerous failure conditions that an aircraft engine may encounter in operation. The method achieves this objective through the utilization of multiple Kalman filters, each of which is uniquely designed based on a specific failure hypothesis. A fault-detection-and-isolation (FDI) system, developed based on this method, is able to isolate faults in sensors and actuators while detecting component faults (abrupt degradation in engine component performance). By affording a capability for real-time identification of minor faults before they grow into major ones, the method promises to enhance safety and reduce operating costs. The robustness of this method is further enhanced by incorporating information regarding the aging condition of an engine. In general, real-time fault diagnostic methods use the nominal performance of a "healthy" new engine as a reference condition in the diagnostic process. Such an approach does not account for gradual changes in performance associated with aging of an otherwise healthy engine. By incorporating information on gradual, aging-related changes, the new method makes it possible to retain at least some of the sensitivity and accuracy needed to detect incipient faults while preventing false alarms that could result from erroneous interpretation of symptoms of aging as symptoms of failures. The figure schematically depicts an FDI system according to the new method. The FDI system is integrated with an engine, from which it accepts two sets of input signals: sensor readings and actuator commands. Two main parts of the FDI system are a bank of Kalman filters and a subsystem that implements FDI decision rules. Each Kalman filter is designed to detect a specific sensor or actuator fault. When a sensor or actuator fault occurs, large estimation errors are generated by all filters except the one using the correct hypothesis. By monitoring the residual output of each filter, the specific fault that has occurred can be detected and isolated on the basis of the decision rules. A set of parameters that indicate the performance of the engine components is estimated by the "correct" Kalman filter for use in detecting component faults. To reduce the loss of diagnostic accuracy and sensitivity in the face of aging, the FDI system accepts information from a steady-state-condition-monitoring system. This information is used to update the Kalman filters and a data bank of trim values representative of the current aging condition.
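
    The sketch below illustrates the hypothesis-bank principle on a deliberately tiny scalar example: each Kalman filter assumes a different sensor-bias hypothesis, and the hypothesis whose filter produces the smallest mean squared residual is selected. The model, noise levels and bias values are invented and bear no relation to the aircraft-engine design described above.

        # Compact sketch of a Kalman-filter bank for fault isolation on a scalar
        # system: x_{k+1} = x_k + w, y_k = x_k + bias + v. Each filter assumes a
        # different sensor-bias hypothesis; the hypothesis whose filter yields the
        # smallest mean squared residual is selected. All numbers are invented.
        import numpy as np

        rng = np.random.default_rng(7)
        n, q, r = 200, 1e-4, 0.01
        true_bias = 1.0                              # actual sensor fault in the data
        x = np.cumsum(np.sqrt(q) * rng.standard_normal(n))
        y = x + true_bias + np.sqrt(r) * rng.standard_normal(n)

        hypotheses = {"no fault": 0.0, "bias +1.0": 1.0, "bias -1.0": -1.0}
        scores = {}
        for name, bias in hypotheses.items():
            xhat, P, residuals = 0.0, 1e-3, []       # small initial uncertainty
            for yk in y:
                P += q                               # predict (random-walk state)
                innov = (yk - bias) - xhat           # residual under this hypothesis
                K = P / (P + r)
                xhat += K * innov                    # update
                P *= 1.0 - K
                residuals.append(innov)
            scores[name] = float(np.mean(np.square(residuals)))

        print(scores)
        print("selected hypothesis:", min(scores, key=scores.get))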

  5. An Integrated Fault Tolerant Robotic Controller System for High Reliability and Safety

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Tso, Kam S.; Hecht, Myron

    1994-01-01

    This paper describes the concepts and features of a fault-tolerant intelligent robotic control system being developed for applications that require high dependability (reliability, availability, and safety). The system consists of two major elements: a fault-tolerant controller and an operator workstation. The fault-tolerant controller uses a strategy which allows for detection and recovery of hardware, operating system, and application software failures. The fault-tolerant controller can be used by itself in a wide variety of applications in industry, process control, and communications. The controller in combination with the operator workstation can be applied to robotic applications such as spaceborne extravehicular activities, hazardous materials handling, inspection and maintenance of high value items (e.g., space vehicles, reactor internals, or aircraft), medicine, and other tasks where a robot system failure poses a significant risk to life or property.

  6. Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings, and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.

  7. A tri-fold hybrid classification approach for diagnostics with unexampled faulty states

    NASA Astrophysics Data System (ADS)

    Tamilselvan, Prasanna; Wang, Pingfeng

    2015-01-01

    System health diagnostics provides diversified benefits such as improved safety, improved reliability and reduced costs for the operation and maintenance of engineered systems. Successful health diagnostics requires knowledge of system failures. However, with increasing system complexity, it is extraordinarily difficult to have a system so well tested that all potential faulty states have been realized and studied at the product testing stage. Thus, real-time health diagnostics requires automatic detection of unexampled system faulty states based upon sensory data to avoid sudden catastrophic system failures. This paper presents a tri-fold hybrid classification (THC) approach for structural health diagnosis with unexampled health states (UHS), which comprises preliminary UHS identification using a new thresholded Mahalanobis distance (TMD) classifier, UHS diagnostics using a two-class support vector machine (SVM) classifier, and exampled health states diagnostics using a multi-class SVM classifier. The proposed THC approach, which takes advantage of both TMD and SVM-based classification techniques, is able to identify and isolate the unexampled faulty states by interactively detecting the deviation of sensory data from the exampled health states and forming new ones autonomously. The proposed THC approach is further extended to a generic framework for health diagnostics problems with unexampled faulty states and demonstrated with health diagnostics case studies for power transformers and rolling bearings.
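
    The sketch below illustrates only the first stage of the approach as described: a thresholded Mahalanobis distance that flags samples falling outside all exampled health states; such samples would then be passed to the SVM stages. The data, dimensionality and threshold quantile are invented for the example.

        # Illustrative first stage of the approach: a thresholded Mahalanobis
        # distance flags samples that match none of the exampled health states and
        # are therefore candidates for an unexampled faulty state. The later SVM
        # stages are not reproduced; data and threshold quantile are invented.
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(3)
        healthy = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 0.5]], size=500)
        mean = healthy.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

        def mahalanobis_sq(x):
            d = x - mean
            return float(d @ cov_inv @ d)

        threshold = chi2.ppf(0.999, df=2)    # 99.9% quantile for 2-D Gaussian data

        for label, sample in [("nominal sample", np.array([0.4, -0.2])),
                              ("unexampled fault", np.array([4.0, 3.0]))]:
            d2 = mahalanobis_sq(sample)
            verdict = "UNEXAMPLED STATE" if d2 > threshold else "known state"
            print(f"{label}: d^2 = {d2:.1f} -> {verdict}")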

  8. Partial Discharge Monitoring in Power Transformers Using Low-Cost Piezoelectric Sensors

    PubMed Central

    Castro, Bruno; Clerice, Guilherme; Ramos, Caio; Andreoli, André; Baptista, Fabricio; Campos, Fernando; Ulson, José

    2016-01-01

    Power transformers are crucial in an electric power system. Failures in transformers can affect the quality of and cause interruptions in the power supply. Partial discharges are a phenomenon that can cause failures in transformers if not properly monitored. Typically, the monitoring requires high-cost corrective maintenance or even interruptions of the power system. Therefore, the development of online non-invasive monitoring systems to detect partial discharges in power transformers has great relevance, since it can reduce significant maintenance costs. Although commercial acoustic emission sensors have been used to monitor partial discharges in power transformers, they still represent a significant cost. In order to overcome this drawback, this paper presents a study of the feasibility of low-cost piezoelectric sensors to identify partial discharges in the mineral insulating oil of power transformers. The analysis of the feasibility of the proposed low-cost sensor is performed by comparison with a commercial acoustic emission sensor commonly used to detect partial discharges. The comparison between the responses of the two sensors in the time and frequency domains was carried out, and the experimental results indicate that the proposed piezoelectric sensors have great potential for the detection of acoustic waves generated by partial discharges in insulation oil, contributing to the popularization of this noninvasive technique. PMID:27517931

  9. Partial Discharge Monitoring in Power Transformers Using Low-Cost Piezoelectric Sensors.

    PubMed

    Castro, Bruno; Clerice, Guilherme; Ramos, Caio; Andreoli, André; Baptista, Fabricio; Campos, Fernando; Ulson, José

    2016-08-10

    Power transformers are crucial in an electric power system. Failures in transformers can affect the quality of and cause interruptions in the power supply. Partial discharges are a phenomenon that can cause failures in transformers if not properly monitored. Typically, the monitoring requires high-cost corrective maintenance or even interruptions of the power system. Therefore, the development of online non-invasive monitoring systems to detect partial discharges in power transformers has great relevance, since it can reduce significant maintenance costs. Although commercial acoustic emission sensors have been used to monitor partial discharges in power transformers, they still represent a significant cost. In order to overcome this drawback, this paper presents a study of the feasibility of low-cost piezoelectric sensors to identify partial discharges in the mineral insulating oil of power transformers. The analysis of the feasibility of the proposed low-cost sensor is performed by comparison with a commercial acoustic emission sensor commonly used to detect partial discharges. The comparison between the responses of the two sensors in the time and frequency domains was carried out, and the experimental results indicate that the proposed piezoelectric sensors have great potential for the detection of acoustic waves generated by partial discharges in insulation oil, contributing to the popularization of this noninvasive technique.

  10. Remote Structural Health Monitoring and Advanced Prognostics of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Brown; Bernard Laskowski

    Substantial investment in wind energy generation represents a significant capital commitment. In order to maximize the life-cycle of wind turbines and their associated rotors, gears, and structural towers, a capability to detect and predict (prognostics) the onset of mechanical faults at a sufficiently early stage for maintenance actions to be planned would significantly reduce both maintenance and operational costs. Advancement towards this goal has been made through the development of anomaly detection, fault detection and fault diagnosis routines to identify selected fault modes of a wind turbine based on available sensor data preceding an unscheduled emergency shutdown. The anomaly detection approach employs spectral techniques to find an approximation of the data using a combination of attributes that capture the bulk of the variability in the data. Fault detection and diagnosis (FDD) is performed using a neural network-based classifier trained from baseline and fault data recorded during known failure conditions. The approach has been evaluated for known baseline conditions and three selected failure modes: a pitch rate failure, a low oil pressure failure and a gearbox gear-tooth failure. Experimental results demonstrate that the approach can distinguish between these failure modes and normal baseline behavior within a specified statistical accuracy.

  11. Fault detection and analysis in nuclear research facility using artificial intelligence methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghazali, Abu Bakar, E-mail: Abakar@uniten.edu.my; Ibrahim, Maslina Mohd

    In this article, online detection of transducer and actuator conditions is discussed. The case study concerns the reading of the area radiation monitor (ARM) installed at the chimney of the PUSPATI TRIGA nuclear reactor building, located at Bangi, Malaysia. There are at least five categories of abnormal ARM reading that could occur during transducer failure: the reading becomes very high, becomes very low or zero, or shows high fluctuation and noise; moreover, the reading may be significantly higher or significantly lower than the normal reading. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are good methods for modeling this plant's dynamics. Equipment failure is assessed from the ARM reading by comparing it with the ARM value estimated by the ANN/ANFIS model. The failure categories, expressed in either a 'yes' or 'no' state, are obtained from a comparison between the actual online data and the estimated output from the ANN/ANFIS model. It is found that this system design can correctly report the condition of the ARM equipment in a simulated environment and can later be implemented for online monitoring. This approach can also be extended to other transducers, such as the temperature profile of the reactor core, and to other critical actuator conditions, such as the valves and pumps in the reactor facility, provided that the failure symptoms are clearly defined.

  12. Disease management: remote monitoring in heart failure patients with implantable defibrillators, resynchronization devices, and haemodynamic monitors.

    PubMed

    Abraham, William T

    2013-06-01

    Heart failure represents a major public health concern, associated with high rates of morbidity and mortality. A particular focus of contemporary heart failure management is reduction of hospital admission and readmission rates. While optimal medical therapy favourably impacts the natural history of the disease, devices such as cardiac resynchronization therapy devices and implantable cardioverter defibrillators have added incremental value in improving heart failure outcomes. These devices also enable remote patient monitoring via device-based diagnostics. Device-based measurement of physiological parameters, such as intrathoracic impedance and heart rate variability, provide a means to assess risk of worsening heart failure and the possibility of future hospitalization. Beyond this capability, implantable haemodynamic monitors have the potential to direct day-to-day management of heart failure patients to significantly reduce hospitalization rates. The use of a pulmonary artery pressure measurement system has been shown to significantly reduce the risk of heart failure hospitalization in a large randomized controlled study, the CardioMEMS Heart Sensor Allows Monitoring of Pressure to Improve Outcomes in NYHA Class III Heart Failure Patients (CHAMPION) trial. Observations from a pilot study also support the potential use of a left atrial pressure monitoring system and physician-directed patient self-management paradigm; these observations are under further investigation in the ongoing LAPTOP-HF trial. All these devices depend upon high-intensity remote monitoring for successful detection of parameter deviations and for directing and following therapy.

  13. Intelligent transient transitions detection of LRE test bed

    NASA Astrophysics Data System (ADS)

    Zhu, Fengyu; Shen, Zhengguang; Wang, Qi

    2013-01-01

    A health monitoring system is an implementation of monitoring strategies for complex systems that avoids catastrophic failure, extends life and leads to improved asset management. A health monitoring system generally encompasses intelligence at many levels and in many sub-systems, including sensors, actuators, devices, etc. In this paper, a smart sensor is studied which is used to detect transient transitions on a liquid-propellant rocket engine test bed. To cope with drastically changing operating conditions, wavelet decomposition is used for real-time analysis. In contrast to the traditional Fourier transform method, the major advantage of adding wavelet analysis is the ability to detect transient transitions, as well as to obtain the frequency content, using a much smaller data set. Historically, transient transitions were only detected by offline analysis of the data. The methods proposed in this paper provide an opportunity to detect transient transitions automatically, as well as many additional data anomalies, and provide improved data-correction and sensor health diagnostic abilities. The developed algorithms have been tested on actual rocket test data.
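
    A hedged sketch of the transient-detection idea follows, assuming the PyWavelets package is available: a single-level discrete wavelet transform is applied to sliding blocks of a synthetic signal, and a block is flagged when its detail-coefficient energy jumps. The sampling rate, wavelet, block length and threshold are invented and are unrelated to the test-bed implementation.

        # Illustrative transient detector (PyWavelets assumed available): slide a
        # short window over the signal, take a single-level DWT of each block, and
        # flag the first block whose detail-coefficient energy exceeds a threshold.
        # Sampling rate, wavelet, block length and threshold are invented here.
        import numpy as np
        import pywt

        rng = np.random.default_rng(5)
        fs = 200
        t = np.arange(0.0, 2.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(t.size)
        signal[t >= 1.28] += 1.5 * np.sin(2 * np.pi * 80 * t[t >= 1.28])  # regime change

        block = 64
        for start in range(0, signal.size - block, block):
            _, detail = pywt.dwt(signal[start:start + block], "db4")
            energy = float(np.sum(detail ** 2))
            if energy > 5.0:                       # illustrative threshold
                print(f"transient detected near t = {start / fs:.2f} s "
                      f"(detail energy {energy:.1f})")
                break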

  14. Nondestructive Structural Damage Detection in Flexible Space Structures Using Vibration Characterization

    NASA Technical Reports Server (NTRS)

    Ricles, James M.

    1991-01-01

    Spacecraft are susceptible to structural damage over their operating life from impact, environmental loads, and fatigue. Structural damage that is not detected and not corrected may potentially cause more damage and eventually catastrophic structural failure. NASA's current fleet of reusable spacecraft, namely the Space Shuttle, has been flown on several missions. In addition, configurations of future NASA space structures, e.g. Space Station Freedom, are larger and more complex than current structures, making them more susceptible to damage as well as being more difficult to inspect. Consequently, a reliable structural damage detection capability is essential to maintain the flight safety of these structures. Visual inspections alone cannot locate impending material failure (fatigue cracks, yielding); they can only reveal post-failure conditions. An alternative approach is to develop an inspection and monitoring system based on vibration characterization that assesses the integrity of structural and mechanical components. A methodology for detecting structural damage is presented. This methodology is based on utilizing modal test data in conjunction with a correlated analytical model of the structure to: (1) identify the structural dynamic characteristics (resonant frequencies and mode shapes) from measurements of ambient motions and/or force excitation; (2) calculate modal residual force vectors to identify the location of structural damage; and (3) conduct a weighted sensitivity analysis in order to assess the extent of mass and stiffness variations, where structural damage is characterized by stiffness reductions. The approach differs from other existing approaches in that varying system mass and stiffness, mass center locations, the perturbation of both the natural frequencies and mode shapes, and statistical confidence factors for structural parameters and experimental instrumentation are all accounted for directly.
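
    To illustrate step (2), the modal residual force calculation, the sketch below builds a small spring-mass chain, takes "measured" modes from a damaged variant of the model, and computes R_i = (K - w_i^2 M) phi_i with the undamaged analytical matrices; large entries localize the damage. The model and the 30% stiffness reduction are invented for the example.

        # Illustrative residual-force-vector damage localization on a 4-DOF
        # spring-mass chain: "measured" modes come from a damaged variant (one
        # spring stiffness reduced by 30%), while K and M are the undamaged
        # analytical matrices. Large entries of R_i = (K - w_i^2 M) phi_i point to
        # the DOFs adjacent to the damaged spring. The model is invented here.
        import numpy as np
        from scipy.linalg import eigh

        def chain_stiffness(k):
            """Stiffness matrix of a fixed-free spring-mass chain."""
            n = len(k)
            K = np.zeros((n, n))
            for i in range(n):
                K[i, i] += k[i]
                if i + 1 < n:
                    K[i, i] += k[i + 1]
                    K[i, i + 1] = K[i + 1, i] = -k[i + 1]
            return K

        M = np.eye(4)
        k_nominal = np.full(4, 1000.0)
        k_damaged = k_nominal.copy()
        k_damaged[2] *= 0.7                        # 30% stiffness loss in spring 3

        K = chain_stiffness(k_nominal)             # analytical (undamaged) model
        w2, phi = eigh(chain_stiffness(k_damaged), M)   # "measured" modes

        R = K @ phi - M @ phi * w2                 # residual force vectors (per mode)
        damage_indicator = np.abs(R).sum(axis=1)   # accumulate over modes, per DOF
        print(np.round(damage_indicator, 1))       # peaks at DOFs 1 and 2 (0-based)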

  15. An autonomous recovery mechanism against optical distribution network failures in EPON

    NASA Astrophysics Data System (ADS)

    Liem, Andrew Tanny; Hwang, I.-Shyan; Nikoukar, AliAkbar

    2014-10-01

    Ethernet Passive Optical Network (EPON) is chosen for servicing diverse applications with higher bandwidth and Quality-of-Service (QoS), starting from Fiber-To-The-Home (FTTH), FTTB (business/building) and FTTO (office). Typically, a single Optical Line Terminal (OLT) can provide services to both residential and business customers on the same OLT port; thus, any failure in the system will cause a great loss for both network operators and customers. Network operators are looking for low-cost, high-service-availability mechanisms that focus on the failures that occur within the drop fiber section, because the majority of faults are in this particular section. Therefore, in this paper, we propose an autonomous recovery mechanism that provides protection and recovery against Drop Distribution Fiber (DDF) link faults or transceiver failure at the ONU(s) in EPON systems. In the proposed mechanism, the ONU can automatically detect any signal anomalies in the physical layer or transceiver failure, switch the working line to the protection line, and send a critical event alarm to the OLT via its neighbor. Each ONU has a protection line, which is connected to the nearest neighbor ONU; therefore, when a failure occurs, the ONU can still transmit and receive data via the neighbor ONU. Lastly, a Fault Dynamic Bandwidth Allocation scheme for the recovery mechanism is presented. Simulation results show that our proposed autonomous recovery mechanism is able to maintain the overall QoS performance in terms of mean packet delay, system throughput, packet loss and EF jitter.

  16. Algorithmic network monitoring for a modern water utility: a case study in Jerusalem.

    PubMed

    Armon, A; Gutner, S; Rosenberg, A; Scolnicov, H

    2011-01-01

    We report on the design, deployment, and use of TaKaDu, a real-time algorithmic Water Infrastructure Monitoring solution, with a strong focus on water loss reduction and control. TaKaDu is provided as a commercial service to several customers worldwide. It has been in use at HaGihon, the Jerusalem utility, since mid 2009. Water utilities collect considerable real-time data from their networks, e.g. by means of a SCADA system and sensors measuring flow, pressure, and other data. We discuss how an algorithmic statistical solution analyses this wealth of raw data, flexibly using many types of input and picking out and reporting significant events and failures in the network. Of particular interest to most water utilities is the early detection capability for invisible leaks, also a means for preventing large visible bursts. The system also detects sensor and SCADA failures, various water quality issues, DMA boundary breaches, unrecorded or unintended network changes (like a valve or pump state change), and other events, including types unforeseen during system design. We discuss results from use at HaGihon, showing clear operational value.

  17. EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2016-05-01

    A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, which was named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism of multivariate signal denoising and, in combination with the Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method convenient to cope with data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions in the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained through conducting an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to effectively extract it, thereby representing a reliable bearing fault detection and diagnosis strategy.
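
    The sketch below conveys the multiscale-ICA monitoring idea in a heavily simplified form: a plain band-pass filter bank stands in for EEMD (so the example needs only NumPy, SciPy and scikit-learn), FastICA is fitted on healthy multichannel data, and an I²-type statistic on the recovered sources flags departures from the healthy baseline. Every numerical choice is illustrative, and the substitution of the filter bank for EEMD is a deliberate simplification, not the authors' method.

        # Greatly simplified sketch of multiscale ICA monitoring: a band-pass
        # filter bank stands in for EEMD (a deliberate simplification), FastICA is
        # fitted on healthy multichannel vibration data, and an I^2-type statistic
        # on the recovered sources flags departures from the healthy baseline.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(11)
        fs, n = 1000, 4000
        t = np.arange(n) / fs

        def bandpass_bank(x, bands=((5, 50), (50, 150), (150, 400))):
            """Decompose each channel into a few frequency bands (EEMD stand-in)."""
            feats = []
            for lo, hi in bands:
                sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
                feats.append(sosfiltfilt(sos, x, axis=0))
            return np.hstack(feats)

        def make_data(fault=False):
            base = np.column_stack([np.sin(2 * np.pi * 29 * t), np.sin(2 * np.pi * 61 * t)])
            x = base + 0.2 * rng.standard_normal((n, 2))
            if fault:
                x += 0.8 * np.sin(2 * np.pi * 237 * t)[:, None]   # defect-like tone
            return x

        healthy = bandpass_bank(make_data(fault=False))
        ica = FastICA(n_components=4, random_state=0, max_iter=1000).fit(healthy)
        s_healthy = ica.transform(healthy)
        limit = np.percentile(np.sum(s_healthy ** 2, axis=1), 99)   # empirical I^2 limit

        i2 = np.sum(ica.transform(bandpass_bank(make_data(fault=True))) ** 2, axis=1)
        print(f"faulty samples above the healthy 99% limit: {np.mean(i2 > limit):.2f}")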

  18. Making intelligent systems team players. A guide to developing intelligent monitoring systems

    NASA Technical Reports Server (NTRS)

    Land, Sherry A.; Malin, Jane T.; Thronesberry, Carroll; Schreckenghost, Debra L.

    1995-01-01

    This reference guide for developers of intelligent monitoring systems is based on lessons learned by developers of the DEcision Support SYstem (DESSY), an expert system that monitors Space Shuttle telemetry data in real time. DESSY makes inferences about commands, state transitions, and simple failures. It performs failure detection rather than in-depth failure diagnostics. A listing of rules from DESSY and cue cards from DESSY subsystems are included to give the development community a better understanding of the selected model system. The G-2 programming tool used in developing DESSY provides an object-oriented, rule-based environment, but many of the principles in use here can be applied to any type of monitoring intelligent system. The step-by-step instructions and examples given for each stage of development are in G-2, but can be used with other development tools. This guide first defines the authors' concept of real-time monitoring systems, then tells prospective developers how to determine system requirements, how to build the system through a combined design/development process, and how to solve problems involved in working with real-time data. It explains the relationships among operational prototyping, software evolution, and the user interface. It also explains methods of testing, verification, and validation. It includes suggestions for preparing reference documentation and training users.

  19. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides the means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enables the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and the definition of system failure scenarios.

  20. Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy.

    PubMed

    Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E

    2015-06-01

    Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA of the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten, and a risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that has been active for two and a half years. Differences between FMEA-anticipated failure modes and existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors. Using FMEA in combination with incident learning may render an improved overview of risks within a process.
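
    The RPN bookkeeping used in the study reduces to simple arithmetic, sketched below; the failure modes listed are generic placeholders rather than the 63 modes identified in the SBRT analysis.

        # Minimal sketch of the FMEA arithmetic: each failure mode is scored 1-10
        # for severity (S), occurrence (O) and detectability (D); the risk priority
        # number is RPN = S * O * D, and modes are ranked by it. The listed modes
        # are generic placeholders, not those of the SBRT study.
        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            description: str
            severity: int       # 1 (negligible) .. 10 (catastrophic)
            occurrence: int     # 1 (rare)       .. 10 (frequent)
            detectability: int  # 1 (always caught) .. 10 (undetectable)

            @property
            def rpn(self) -> int:
                return self.severity * self.occurrence * self.detectability

        modes = [
            FailureMode("wrong CT dataset imported", 8, 2, 6),
            FailureMode("incorrect margin expansion", 7, 4, 4),
            FailureMode("plan exported to wrong machine", 9, 1, 3),
        ]

        for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"RPN={fm.rpn:3d}  {fm.description}")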

  1. Comprehensive analysis of cochlear implant failure: usefulness of clinical symptom-based algorithm combined with in situ integrity testing.

    PubMed

    Yamazaki, Hiroshi; O'Leary, Stephen; Moran, Michelle; Briggs, Robert

    2014-04-01

    Accurate diagnosis of cochlear implant failures is important for management; however, appropriate strategies to assess possible device failures are not always clear. The purpose of this study is to understand correlation between causes of device failure and the presenting clinical symptoms as well as results of in situ integrity testing and to propose effective strategies for diagnosis of device failure. Retrospective case review. Cochlear implant center at a tertiary referral hospital. Twenty-seven cases with suspected device failure of Cochlear Nucleus systems (excluding CI512 failures) on the basis of deterioration in auditory perception from January 2000 to September 2012 in the Melbourne cochlear implant clinic. Clinical presentations and types of abnormalities on in situ integrity testing were compared with modes of device failure detected by returned device analysis. Sudden deterioration in auditory perception was always observed in cases with "critical damage": either fracture of the integrated circuit or most or all of the electrode wires. Subacute or gradually progressive deterioration in auditory perception was significantly associated with a more limited number of broken electrode wires. Cochlear implant mediated auditory and nonauditory symptoms were significantly associated with an insulation problem. An algorithm based on the time course of deterioration in auditory perception and cochlear implant-mediated auditory and nonauditory symptoms was developed on the basis of these retrospective analyses, to help predict the mode of device failure. In situ integrity testing, which included close monitoring of device function in routine programming sessions as well as repeating the manufacturer's integrity test battery, was sensitive enough to detect malfunction in all suspected device failures, and each mode of device failure showed a characteristic abnormality on in situ integrity testing. Our clinical manifestation-based algorithm combined with in situ integrity testing may be useful for accurate diagnosis and appropriate management of device failure. Close monitoring of device function in routine programming sessions as well as repeating the manufacturer's integrity test battery is important if the initial in situ integrity testing is inconclusive because objective evidence of failure in the implanted device is essential to recommend explantation/reimplantation.

  2. Assessment Study of the State of the Art in Adaptive Control and its Applications to Aircraft Control

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard

    1998-01-01

    Many papers relevant to reconfigurable flight control have appeared over the past fifteen years. In general these have consisted of theoretical issues, simulation experiments, and in some cases, actual flight tests. Results indicate that reconfiguration of flight controls is certainly feasible for a wide class of failures. However many of the proposed procedures although quite attractive, need further analytical and experimental studies for meaningful validation. Many procedures assume the availability of failure detection and identification logic that will supply adequately fast, the dynamics corresponding to the failed aircraft. This in general implies that the failure detection and fault identification logic must have access to all possible anticipated faults and the corresponding dynamical equations of motion. Unless some sort of explicit on line parameter identification is included, the computational demands could possibly be too excessive. This suggests the need for some form of adaptive control, either by itself as the prime procedure for control reconfiguration or in conjunction with the failure detection logic. If explicit or indirect adaptive control is used, then it is important that the identified models be such that the corresponding computed controls deliver adequate performance to the actual aircraft. Unknown changes in trim should be modelled, and parameter identification needs to be adequately insensitive to noise and at the same time capable of tracking abrupt changes. If however, both failure detection and system parameter identification turn out to be too time consuming in an emergency situation, then the concepts of direct adaptive control should be considered. If direct model reference adaptive control is to be used (on a linear model) with stability assurances, then a positive real or passivity condition needs to be satisfied for all possible configurations. This condition is often satisfied with a feedforward compensator around the plant. This compensator must be robustly designed such that the compensated plant satisfies the required positive real conditions over all expected parameter values. Furthermore, with the feedforward only around the plant, a nonzero (but bounded error) will exist in steady state between the plant and model outputs. This error can be removed by placing the compensator also in the reference model. Design of such a compensator should not be too difficult a problem since for flight control it is generally possible to feedback all the system states.

  3. Procedure and information displays in advanced nuclear control rooms: experimental evaluation of an integrated design.

    PubMed

    Chen, Yue; Gao, Qin; Song, Fei; Li, Zhizhong; Wang, Yufan

    2017-08-01

    In the main control rooms of nuclear power plants, operators frequently have to switch between procedure displays and system information displays. In this study, we proposed an operation-unit-based integrated design, which combines the two displays to facilitate the synthesis of information. We grouped actions that complete a single goal into operation units and showed these operation units on the displays of system states. In addition, we used different levels of visual salience to highlight the current unit and provided a list of execution history records. A laboratory experiment, with 42 students performing a simulated procedure to deal with an unexpectedly high pressuriser level, was conducted to compare this design against an action-based integrated design and the existing separated-displays design. The results indicate that our operation-unit-based integrated design yielded the best performance in terms of time and completion rate and helped more participants to detect unexpected system failures. Practitioner Summary: In current nuclear control rooms, operators frequently have to switch between procedure and system information displays. We developed an integrated design that incorporates procedure information into system displays. A laboratory study showed that the proposed design significantly improved participants' performance and increased the probability of detecting unexpected system failures.

  4. Using Wireless Sensor Networks and Trains as Data Mules to Monitor Slab Track Infrastructures.

    PubMed

    Cañete, Eduardo; Chen, Jaime; Díaz, Manuel; Llopis, Luis; Reyna, Ana; Rubio, Bartolomé

    2015-06-26

    Recently, slab track systems have arisen as a safer and more sustainable option for high speed railway infrastructures, compared to traditional ballasted tracks. Integrating Wireless Sensor Networks within these infrastructures can provide structural health related data that can be used to evaluate their degradation and to not only detect failures but also to predict them. The design of such systems has to deal with a scenario of large areas with inaccessible zones, where neither Internet coverage nor electricity supply is guaranteed. In this paper we propose a monitoring system for slab track systems that measures vibrations and displacements in the track. Collected data is transmitted to passing trains, which are used as data mules to upload the information to a remote control center. On arrival at the station, the data is stored in a database, which is queried by an application in order to detect and predict failures. In this paper, different communication architectures are designed and tested to select the most suitable system meeting such requirements as efficiency, low cost and data accuracy. In addition, to ensure communication between the sensing devices and the train, the communication system must take into account parameters such as train speed, antenna coverage, band and frequency.

  5. Using Wireless Sensor Networks and Trains as Data Mules to Monitor Slab Track Infrastructures

    PubMed Central

    Cañete, Eduardo; Chen, Jaime; Díaz, Manuel; Llopis, Luis; Reyna, Ana; Rubio, Bartolomé

    2015-01-01

    Recently, slab track systems have arisen as a safer and more sustainable option for high speed railway infrastructures, compared to traditional ballasted tracks. Integrating Wireless Sensor Networks within these infrastructures can provide structural health related data that can be used to evaluate their degradation and to not only detect failures but also to predict them. The design of such systems has to deal with a scenario of large areas with inaccessible zones, where neither Internet coverage nor electricity supply is guaranteed. In this paper we propose a monitoring system for slab track systems that measures vibrations and displacements in the track. Collected data is transmitted to passing trains, which are used as data mules to upload the information to a remote control center. On arrival at the station, the data is stored in a database, which is queried by an application in order to detect and predict failures. In this paper, different communication architectures are designed and tested to select the most suitable system meeting such requirements as efficiency, low cost and data accuracy. In addition, to ensure communication between the sensing devices and the train, the communication system must take into account parameters such as train speed, antenna coverage, band and frequency. PMID:26131668

  6. 15 CFR 700.71 - Audits and investigations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... interviews and a systems evaluation to detect problems or failures in the implementation of this regulation... Commerce to interview the person's employees or agents, to inspect books, records, documents, other... them, and to inspect a person's property when such interviews and inspections are necessary or...

  7. Virtual Sensor for Failure Detection, Identification and Recovery in the Transition Phase of a Morphing Aircraft

    PubMed Central

    Heredia, Guillermo; Ollero, Aníbal

    2010-01-01

    The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual sensor based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wing aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise, sampling characteristics, turbulence and wind perturbations. PMID:22294922

  8. Virtual sensor for failure detection, identification and recovery in the transition phase of a morphing aircraft.

    PubMed

    Heredia, Guillermo; Ollero, Aníbal

    2010-01-01

    The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual sensor based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wing aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise, sampling characteristics, turbulence and wind perturbations.

  9. Managing Network Partitions in Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif

    Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. The problem of network partitions and mergers is therefore closely related to fault-tolerance and self-management in large-scale systems, making resilience to network partitions a crucial requirement for building any structured peer-to-peer system. Despite this, the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, since a partition is similar to massive node failures; the crucial aspect of network mergers, however, has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible, as the tradeoff between message complexity and time complexity can be adjusted by a parameter.

  10. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  11. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling detection not only of technical risks but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequency and maintaining the categorical scoring of severity. In an example, the results of traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeited tablets are re-interpreted by this probabilistic modification of FMEA. Using this probabilistic modification of FMEA, the frequency of occurrence of undetected failure mode(s) can be estimated quantitatively, for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
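
    The contrast between the two scoring schemes can be made concrete with a short sketch. Everything below is illustrative: the failure-mode names, frequencies, and probabilities are assumptions rather than values from the paper, and the probabilistic variant simply treats the frequency of an undetected failure mode as its occurrence frequency times the probability of missing it.

```python
# Minimal sketch of traditional vs. probabilistic FMEA scoring.
# Failure-mode names and all numbers are illustrative assumptions.

def traditional_rpn(severity, occurrence, detection):
    """Classic FMEA: all three factors are categorical scores (e.g. 1-10)."""
    return severity * occurrence * detection

def undetected_frequency(freq_occurrence, prob_detection):
    """Probabilistic variant: occurrence and detection are replaced by an
    estimated relative frequency and a detection probability, so the frequency
    of a failure mode slipping through is f_occurrence * (1 - p_detection)."""
    return freq_occurrence * (1.0 - prob_detection)

failure_modes = [
    # (name, severity 1-10, est. occurrences per run, detection probability)
    ("wrong sample placement", 7, 0.02, 0.90),
    ("instrument drift",       5, 0.05, 0.60),
]

print("traditional RPN (S=7, O=3, D=4):", traditional_rpn(7, 3, 4))
for name, sev, f_occ, p_det in failure_modes:
    print(name, "severity:", sev,
          "undetected freq per run:", round(undetected_frequency(f_occ, p_det), 4))

# Frequency of at least one undetected failure over the full procedure,
# assuming independent failure modes:
p_all_ok = 1.0
for _, _, f_occ, p_det in failure_modes:
    p_all_ok *= 1.0 - undetected_frequency(f_occ, p_det)
print("overall undetected-failure frequency:", round(1.0 - p_all_ok, 4))
```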

  12. DC-to-AC inverter ratio failure detector

    NASA Technical Reports Server (NTRS)

    Ebersole, T. J.; Andrews, R. E.

    1975-01-01

    The failure detection technique is based upon input-output ratios, which are independent of inverter loading. Since the inverter has a fixed relationship between V-in/V-out and I-in/I-out, the failure detection criteria are based on this ratio, which is simply the inverter transformer turns ratio, K, equal to primary turns divided by secondary turns.
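
    A minimal sketch of this ratio check follows; the turns ratio K, the tolerance, and the sample readings are illustrative assumptions, not values from the record.

```python
# Hedged sketch of ratio-based failure detection: an ideal transformer fixes
# V_in/V_out at the turns ratio K, so a sustained deviation of the measured
# ratio from K flags a failure regardless of loading. K, the tolerance, and
# the sample readings below are illustrative assumptions.

K = 0.5            # assumed turns ratio (primary turns / secondary turns)
TOLERANCE = 0.10   # fractional deviation allowed before declaring a failure

def ratio_failure(v_in, v_out, k=K, tol=TOLERANCE):
    """Return True when the measured voltage ratio departs from the turns ratio."""
    if v_out == 0.0:              # a dead output is itself a failure
        return True
    return abs(v_in / v_out - k) / k > tol

print(ratio_failure(28.0, 56.0))  # ratio 0.50 -> healthy (False)
print(ratio_failure(28.0, 40.0))  # ratio 0.70 -> failure (True)
```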

  13. Quantifying Electromigration Processes in Sn-0.7Cu Solder with Lab-Scale X-Ray Computed Micro-Tomography

    NASA Astrophysics Data System (ADS)

    Mertens, James Charles Edwin

    For decades, microelectronics manufacturing has been concerned with failures related to electromigration phenomena in conductors experiencing high current densities. The influence of interconnect microstructure on device failures related to electromigration in BGA and flip chip solder interconnects has become a significant interest with reduced individual solder interconnect volumes. A survey indicates that x-ray computed micro-tomography (µXCT) is an emerging, novel means for characterizing the microstructure's role in governing electromigration failures. This work details the design and construction of a lab-scale µXCT system to characterize electromigration in the Sn-0.7Cu lead-free solder system by leveraging in situ imaging. In order to enhance the attenuation contrast observed in multi-phase material systems, a modeling approach has been developed to predict settings for the controllable imaging parameters which yield relatively high detection rates over the range of x-ray energies for which maximum attenuation contrast is expected in the polychromatic x-ray imaging system. In order to develop this predictive tool, a model has been constructed for the Bremsstrahlung spectrum of an x-ray tube, calculations have been made for the detector's efficiency over the relevant range of x-ray energies, and the product of the emitted and detected spectra has been used to calculate the effective x-ray imaging spectrum. An approach has also been established for filtering 'zinger' noise in x-ray radiographs, which has proven problematic at the high x-ray energies used for solder imaging. The performance of this filter has been compared with a known existing method and the results indicate a significant increase in the accuracy of zinger-filtered radiographs. The obtained results indicate the conception of a powerful means for the study of failure-causing processes in solder systems used as interconnects in microelectronic packaging devices. These results include the volumetric quantification of parameters which are indicative of both the electromigration tolerance of solders and the dominant mechanisms for atomic migration in response to current stressing. This work aims to further the community's understanding of failure-causing electromigration processes in industrially relevant material systems for microelectronic interconnect applications and to advance the capability of available characterization techniques for their interrogation.

  14. Sensor Data Qualification System (SDQS) Implementation Study

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Melcher, Kevin; Fulton, Christopher; Maul, William

    2009-01-01

    The Sensor Data Qualification System (SDQS) is being developed to provide a sensor fault detection capability for NASA's next-generation launch vehicles. In addition to traditional data qualification techniques (such as limit checks, rate-of-change checks and hardware redundancy checks), SDQS can provide augmented capability through additional techniques that exploit analytical redundancy relationships to enable faster and more sensitive sensor fault detection. This paper documents the results of a study that was conducted to determine the best approach for implementing a SDQS network configuration that spans multiple subsystems, similar to those that may be implemented on future vehicles. The best approach is defined as one that minimizes computational resource requirements without impacting the detection of sensor failures.
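
    The traditional checks named above (limit, rate-of-change, and redundancy checks) are easy to sketch; the thresholds, sample interval, and injected bias fault below are illustrative assumptions, not SDQS parameters.

```python
import numpy as np

# Sketch of three traditional data-qualification checks; all thresholds and
# the injected bias fault are illustrative assumptions, not SDQS values.

def limit_check(x, lo, hi):
    return (x < lo) | (x > hi)

def rate_check(x, dt, max_rate):
    rate = np.abs(np.diff(x)) / dt
    return np.concatenate([[False], rate > max_rate])

def redundancy_check(x_a, x_b, max_disagreement):
    return np.abs(x_a - x_b) > max_disagreement

t = np.arange(0.0, 5.0, 0.1)          # 50 samples at 10 Hz
sensor_a = 100.0 + 2.0 * t            # healthy channel
sensor_b = sensor_a.copy()
sensor_b[30:] += 25.0                 # inject a bias failure on channel B

flags = (limit_check(sensor_b, 0.0, 200.0)
         | rate_check(sensor_b, 0.1, 50.0)
         | redundancy_check(sensor_a, sensor_b, 10.0))
print("first flagged sample:", int(np.argmax(flags)))   # expected: 30
```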

  15. A definitional framework for the human/biometric sensor interaction model

    NASA Astrophysics Data System (ADS)

    Elliott, Stephen J.; Kukula, Eric P.

    2010-04-01

    Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper provides a definitional framework for the Human Biometric-Sensor Interaction (HBSI) model. This paper proposes six new definitions based around two classifications of presentations, erroneous and correct. The new terms are: defective interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and successfully acquired samples (SAS). As with all definitions, the new terms require a modification to the general biometric model developed by Mansfield and Wayman [1].

  16. Focus on Mechanical Failures: Mechanisms and Detection. Proceedings of the Meeting (45th) of the Mechanical Failures Prevention Group Held in Annapolis, Maryland on April 9 - 11, 1999

    DTIC Science & Technology

    1991-04-04

    solution to this immediate problem and, as the technology developed, opened doors to applied tribology for advanced maintenance through Mechanical Systems...Integrity Management. The development of other technologies as well enhanced Spectron's capability, but it was the major advances in electronics and...strain gages will also be studied. The results of this program will provide a basis for future work in the area of advanced sensor technology. CONCLUSIONS

  17. Event Detection in Aerospace Systems using Centralized Sensor Networks: A Comparative Study of Several Methodologies

    NASA Technical Reports Server (NTRS)

    Mehr, Ali Farhang; Sauvageon, Julien; Agogino, Alice M.; Tumer, Irem Y.

    2006-01-01

    Recent advances in micro electromechanical systems technology, digital electronics, and wireless communications have enabled development of low-cost, low-power, multifunctional miniature smart sensors. These sensors can be deployed throughout a region in an aerospace vehicle to build a network for measurement, detection and surveillance applications. Event detection using such centralized sensor networks is often regarded as one of the most promising health management technologies in aerospace applications where timely detection of local anomalies has a great impact on the safety of the mission. In this paper, we propose to conduct a qualitative comparison of several local event detection algorithms for centralized redundant sensor networks. The algorithms are compared with respect to their ability to locate and evaluate an event in the presence of noise and sensor failures for various node geometries and densities.

  18. Transmission Bearing Damage Detection Using Decision Fusion Analysis

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Lewicki, David G.; Decker, Harry J.

    2004-01-01

    A diagnostic tool was developed for detecting fatigue damage to rolling element bearings in an OH-58 main rotor transmission. Two different monitoring technologies, oil debris analysis and vibration, were integrated using data fusion into a health monitoring system for detecting bearing surface fatigue pitting damage. This integrated system showed improved detection and decision-making capabilities as compared to using individual monitoring technologies. This diagnostic tool was evaluated by collecting vibration and oil debris data from tests performed in the NASA Glenn 500 hp Helicopter Transmission Test Stand. Data were collected during experiments performed in this test rig when two unanticipated bearing failures occurred. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spiral bevel gear duplex ball bearings and spiral bevel pinion triplex ball bearings in a main rotor transmission.

  19. Robust fault detection of turbofan engines subject to adaptive controllers via a Total Measurable Fault Information Residual (ToMFIR) technique.

    PubMed

    Chen, Wen; Chowdhury, Fahmida N; Djuric, Ana; Yeh, Chih-Ping

    2014-09-01

    This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that the adaptive controllers can suppress the faulty effects such that the actual system outputs remain at the pre-specified values, making it difficult to detect faults/failures. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique with the aid of system transformation is adopted to detect faults in turbofan engines with adaptive controllers. This design is a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are also summarized. The detailed design process of the ToMFIRs is presented and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Using Wireless Sensor Networks in Improvised Explosive Device Detection

    DTIC Science & Technology

    2007-12-01

    data collection (permitting self-healing when a node failure occurs); energy efficiency (necessary to maintain...Runner" robotic platform (see Figure 1). It is reported that this system can detect a wide range of IEDs, even those concealed in vehicles. However...be as simple as running over a rubber hose to produce enough air pressure to activate a switch. Some IEDs have been remotely detonated with radio

  1. A Voyager attitude control perspective on fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. D.; Litty, E. C.

    1981-01-01

    In current spacecraft design, a trend can be observed to achieve greater fault tolerance through the application of on-board software dedicated to detecting and isolating failures. Whether fault tolerance through software can meet the desired objectives depends on very careful consideration and control of the system in which the software is embedded. The investigation considered here has the objective of providing some of the insight needed for the required analysis of the system. A description is given of the techniques which were developed in this connection during the development of the Voyager spacecraft. The Voyager Galileo Attitude and Articulation Control Subsystem (AACS) fault tolerant design is discussed to emphasize basic lessons learned from this experience. The central driver of hardware redundancy implementation on Voyager was known as the 'single point failure criterion'.

  2. Real-time automated failure identification in the Control Center Complex (CCC)

    NASA Technical Reports Server (NTRS)

    Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James

    1993-01-01

    A system which will provide real-time failure management support to the Space Station Freedom program is described. The system's use of a simplified form of model based reasoning qualifies it as an advanced automation system. However, it differs from most such systems in that it was designed from the outset to meet two sets of requirements. First, it must provide a useful increment to the fault management capabilities of the Johnson Space Center (JSC) Control Center Complex (CCC) Fault Detection Management system. Second, it must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification and validation, etc. The need to meet both requirement sets presents a much greater design challenge than would have been the case had functionality been the sole design consideration. The choice of technology is overviewed, including aspects of that choice and the process for migrating the system into the control center.

  3. Statistical fault diagnosis of wind turbine drivetrain applied to a 5MW floating wind turbine

    NASA Astrophysics Data System (ADS)

    Ghane, Mahdi; Nejad, Amir R.; Blanke, Mogens; Gao, Zhen; Moan, Torgeir

    2016-09-01

    Deployment of large scale wind turbine parks, in particular offshore, requires well organized operation and maintenance strategies to make them as competitive as the classical electric power stations. It is important to ensure systems are safe, profitable, and cost-effective. In this regard, the ability to detect, isolate, estimate, and prognose faults plays an important role. One of the critical wind turbine components is the gearbox. Failures in the gearbox are costly both due to the cost of the gearbox itself and due to high repair downtime. In order to detect faults as early as possible and prevent them from developing into failures, statistical change detection is used in this paper. The Cumulative Sum Method (CUSUM) is employed to detect possible defects in the downwind main bearing. A high fidelity gearbox model on a 5-MW spar-type wind turbine is used to generate data for fault-free and faulty conditions of the bearing at the rated wind speed and the associated wave condition. Acceleration measurements are utilized to find residuals used to indirectly detect damage in the bearing. Residuals are found to be non-Gaussian, following a t-distribution with multivariable characteristic parameters. The results in this paper show how the diagnostic scheme can detect change with desired false alarm and detection probabilities.
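
    A minimal one-sided CUSUM sketch in the spirit of this scheme is shown below. It assumes a Gaussian residual with a simple mean shift rather than the t-distributed residuals reported in the paper, and the drift parameter k and threshold h are illustrative.

```python
import numpy as np

# One-sided CUSUM for an upward shift in the mean of a residual signal.
# The Gaussian residual model, drift k, and threshold h are assumptions.

def cusum_alarm(residuals, k=0.5, h=8.0):
    """Return the index at which the CUSUM statistic first exceeds h, else None."""
    g = 0.0
    for i, r in enumerate(residuals):
        g = max(0.0, g + r - k)       # accumulate evidence above the drift k
        if g > h:
            return i
    return None

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 500)    # fault-free residuals
faulty = rng.normal(1.0, 1.0, 200)     # mean shift emulating bearing damage
print("alarm index:", cusum_alarm(np.concatenate([healthy, faulty])))
```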

  4. 22nd Annual Logistics Conference and Exhibition

    DTIC Science & Technology

    2006-04-20

    Prognostics & Health Management at GE (Dr. Piero P. Bonissone, Industrial AI Lab, GE Global Research): anomaly detection from event-log data, failure mode histograms, diagnostics/prognostics, and failure monitoring and assessment for tactical C4ISR Sense & Respond.

  5. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results

    NASA Technical Reports Server (NTRS)

    Glass, B. J. (Editor)

    1992-01-01

    The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  6. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 1. Overview

    NASA Technical Reports Server (NTRS)

    Glass, B. J. (Editor)

    1992-01-01

    The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS test bed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  7. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results

    NASA Astrophysics Data System (ADS)

    Glass, B. J.

    1992-10-01

    The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  8. Visual Processing of Object Velocity and Acceleration

    DTIC Science & Technology

    1994-02-04

    A failure of motion deblurring in the human visual system. Investigative Ophthalmology and Visual Sciences (Suppl), 34, 1230. Watamaniuk, S.N.J. and...McKee, S.P. Why is a trajectory more detectable in noise than correlated signal dots? Investigative Ophthalmology and Visual Sciences (Suppl), 34, 1364

  9. Rule-Based Relaxation of Reference Identification Failures. Technical Report No. 396.

    ERIC Educational Resources Information Center

    Goodman, Bradley A.

    In a step toward creating a robust natural language understanding system which detects and avoids miscommunication, this artificial intelligence research report provides a taxonomy of miscommunication problems that arise in expert-apprentice dialogues (including misunderstandings, wrong communication, and bad analogies), and proposes a flexible…

  10. Beyond human error taxonomies in assessment of risk in sociotechnical systems: a new paradigm with the EAST 'broken-links' approach.

    PubMed

    Stanton, Neville A; Harvey, Catherine

    2017-02-01

    Risk assessments in Sociotechnical Systems (STS) tend to be based on error taxonomies, yet the term 'human error' does not sit easily with STS theories and concepts. A new break-link approach was proposed as an alternative risk assessment paradigm to reveal the effect of information communication failures between agents and tasks on the entire STS. A case study of the training of a Royal Navy crew detecting a low flying Hawk (simulating a sea-skimming missile) is presented using EAST to model the Hawk-Frigate STS in terms of social, information and task networks. By breaking 19 social links and 12 task links, 137 potential risks were identified. Discoveries included revealing the effect of risk moving around the system; reducing the risks to the Hawk increased the risks to the Frigate. Future research should examine the effects of compounded information communication failures on STS performance. Practitioner Summary: The paper presents a step-by-step walk-through of EAST to show how it can be used for risk assessment in sociotechnical systems. The 'broken-links' method takes a systemic, rather than taxonomic, approach to identify information communication failures in social and task networks.

  11. Back Propagation Artificial Neural Network and Its Application in Fault Detection of Condenser Failure in Thermo Plant

    NASA Astrophysics Data System (ADS)

    Ismail, Firas B.; Thiruchelvam, Vinesh

    2013-06-01

    The steam condenser is one of the most important pieces of equipment in steam power plants. If the steam condenser trips it may lead to a whole unit shutdown, which is economically burdensome. Early monitoring of condenser trips is crucial to maintain normal and safe operational conditions. In the present work, an artificial intelligence monitoring system specialized in condenser outages has been proposed and coded within the MATLAB environment. The training and validation of the system has been performed using real operational measurements captured from the control system of a selected steam power plant. An integrated plant data preparation scheme for condenser outages with related operational variables has been proposed. The condenser outages under consideration were detected by the developed system before the plant control system.

  12. WE-H-BRC-09: Simulated Errors in Mock Radiotherapy Plans to Quantify the Effectiveness of the Physics Plan Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopan, O; Kalet, A; Smith, W

    2016-06-15

    Purpose: A standard tool for ensuring the quality of radiation therapy treatments is the initial physics plan review. However, little is known about its performance in practice. The goal of this study is to measure the effectiveness of physics plan review by introducing simulated errors into “mock” treatment plans and measuring the performance of plan review by physicists. Methods: We generated six mock treatment plans containing multiple errors. These errors were based on incident learning system data both within the department and internationally (SAFRON). These errors were scored for severity and frequency. Those with the highest scores were included in the simulations (13 errors total). Observer bias was minimized using a multiple co-correlated distractor approach. Eight physicists reviewed these plans for errors, with each physicist reviewing, on average, 3/6 plans. The confidence interval for the proportion of errors detected was computed using the Wilson score interval. Results: Simulated errors were detected in 65% of reviews [51–75%] (95% confidence interval [CI] in brackets). The following error scenarios had the highest detection rates: incorrect isocenter in DRRs/CBCT (91% [73–98%]) and a planned dose different from the prescribed dose (100% [61–100%]). Errors with low detection rates involved incorrect field parameters in the record and verify system (38% [18–61%]) and incorrect isocenter localization in the planning system (29% [8–64%]). Though pre-treatment QA failure was reliably identified (100%), less than 20% of participants reported the error that caused the failure. Conclusion: This is one of the first quantitative studies of error detection. Although physics plan review is a key safety measure and can identify some errors with high fidelity, other errors are more challenging to detect. This data will guide future work on standardization and automation. Creating new checks or improving existing ones (i.e., via automation) will help in detecting those errors with low detection rates.
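
    The Wilson score interval used for these detection proportions can be sketched as follows; the counts in the example are illustrative, not the study's raw data.

```python
from math import sqrt

# Wilson score interval for a binomial proportion (e.g. errors detected / reviews).
# The example counts are illustrative only.

def wilson_interval(successes, trials, z=1.96):
    p_hat = successes / trials
    denom = 1.0 + z**2 / trials
    centre = (p_hat + z**2 / (2 * trials)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

lo, hi = wilson_interval(26, 40)   # e.g. 26 simulated errors detected in 40 reviews
print(f"detected fraction: {26/40:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```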

  13. An expert system to perform on-line controller restructuring for abrupt model changes

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    1990-01-01

    Work in progress on an expert system used to reconfigure and tune airframe/engine control systems on-line in real time in response to battle damage or structural failures is presented. The closed loop system is monitored constantly for changes in structure and performance, the detection of which prompts the expert system to choose and apply a particular control restructuring algorithm based on the type and severity of the damage. Each algorithm is designed to handle specific types of failures and each is applicable only in certain situations. The expert system uses information about the system model to identify the failure and to select the technique best suited to compensate for it. A depth-first search is used to find a solution. Once a new controller is designed and implemented it must be tuned to recover the original closed-loop handling qualities and responsiveness from the degraded system. Ideally, the pilot should not be able to tell the difference between the original and redesigned systems. The key is that the system must have inherent redundancy so that degraded or missing capabilities can be restored by creative use of alternate functionalities. With enough redundancy in the control system, minor battle damage affecting individual control surfaces or actuators, compressor efficiency, etc., can be compensated for such that the closed-loop performance is not noticeably altered. The work is applied to a Black Hawk/T700 system.

  14. Towards eradication of inappropriate therapies for ICD lead failure by combining comprehensive remote monitoring and lead noise alerts.

    PubMed

    Ploux, Sylvain; Swerdlow, Charles D; Strik, Marc; Welte, Nicolas; Klotz, Nicolas; Ritter, Philippe; Haïssaguerre, Michel; Bordachar, Pierre

    2018-06-02

    Recognition of implantable cardioverter defibrillator (ICD) lead malfunction before occurrence of life threatening complications is crucial. We aimed to assess the effectiveness of remote monitoring, with or without a lead noise alert, for early detection of ICD lead failure. From October 2013 to April 2017, a median of 1,224 (578-1,958) ICD patients were remotely monitored with comprehensive analysis of all transmitted materials. ICD lead failure and subsequent device interventions were prospectively collected in patients with (RMLN) and without (RM) a lead noise alert (Abbott Secure Sense™ or Medtronic Lead Integrity Alert™) in their remote monitoring system. During a follow-up of 4,457 patient years, 64 lead failures were diagnosed. Sixty-one (95%) of the diagnoses were made before any clinical complication occurred. Inappropriate shocks were delivered in only one patient of each group (3%), with an annual rate of 0.04%. All high voltage conductor failures were identified remotely by a dedicated impedance alert in 10 patients. Pace-sense component failures were correctly identified by a dedicated alert in 77% (17 of 22) of the RMLN group versus 25% (8 of 32) of the RM group (P = 0.002). The absence of a lead noise alert was associated with a 16-fold increase in the likelihood of initiating either a shock or ATP (OR: 16.0, 95% CI 1.8-143.3; P = 0.01). ICD remote monitoring with systematic review of all transmitted data is associated with a very low rate of inappropriate shocks related to lead failure. Dedicated noise alerts further reduce inappropriate detection of ventricular arrhythmias. © 2018 Wiley Periodicals, Inc.

  15. Launch Vehicle Abort Analysis for Failures Leading to Loss of Control

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Hill, Ashley D.; Beard, Bernard B.

    2013-01-01

    Launch vehicle ascent is a time of high risk for an onboard crew. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if the detection and action happens quickly enough. This paper focuses on abort determination based on data already available from the Guidance, Navigation, and Control system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. The two primary areas of focus are the derivation of abort triggers to ensure that abort occurs as quickly as possible when needed, but that false aborts are avoided, and evaluation of success in aborting off the failing launch vehicle.

  16. Module failure isolation circuit for paralleled inverters. [preventing system failure during power conditioning for spacecraft applications

    NASA Technical Reports Server (NTRS)

    Nagano, S. (Inventor)

    1979-01-01

    A module failure isolation circuit is described which senses and averages the collector current of each paralleled inverter power transistor and compares the collector current of each power transistor with the average collector current of all power transistors to determine when the sensed collector current of a power transistor in any one inverter falls below a predetermined ratio of the average collector current. The module associated with any transistor that fails to maintain a current level above the predetermined ratio of the average collector current is then shut off. A separate circuit detects when there is no load, or a light load, to inhibit operation of the isolation circuit during no load or light load conditions.
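
    The averaging-and-comparison logic can be sketched in a few lines; the minimum ratio, the light-load inhibit threshold, and the sample currents below are assumed values for illustration, not parameters of the patented circuit.

```python
# Sketch of the described isolation logic: compare each sensed collector
# current with the average across all paralleled modules and flag any module
# whose current falls below a predetermined ratio of that average, with a
# light-load inhibit. All numbers are illustrative assumptions.

MIN_RATIO = 0.6           # assumed predetermined ratio
LIGHT_LOAD_AMPS = 0.5     # assumed average current below which checks are inhibited

def modules_to_shut_off(collector_currents, min_ratio=MIN_RATIO):
    avg = sum(collector_currents) / len(collector_currents)
    if avg < LIGHT_LOAD_AMPS:         # no-load / light-load inhibit
        return []
    return [i for i, c in enumerate(collector_currents) if c < min_ratio * avg]

print(modules_to_shut_off([4.1, 3.9, 4.0, 1.2]))   # module 3 flagged
print(modules_to_shut_off([0.1, 0.1, 0.1, 0.1]))   # light load -> no action
```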

  17. High-Speed Observer: Automated Streak Detection in SSME Plumes

    NASA Technical Reports Server (NTRS)

    Rieckoff, T. J.; Covan, M.; OFarrell, J. M.

    2001-01-01

    A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.

  18. Fault detection and accommodation testing on an F100 engine in an F-15 airplane. [digital engine control system

    NASA Technical Reports Server (NTRS)

    Myers, L. P.; Baer-Riedhart, J. L.; Maxwell, M. D.

    1985-01-01

    The fault detection and accommodation (FDA) methods that can be used for digital engine control systems were evaluated in a flight test program on the F-15 fighter's F100 engine electronic controls, in which selected faults were induced and the resulting digital engine control responses evaluated. In general, flight test results were found to compare well with both ground tests and predictions. It is noted that the inducement of dual-pressure failures was not feasible, since the FDA logic was not designed to accommodate them.

  19. Automatic crack detection method for loaded coal in vibration failure process

    PubMed Central

    Li, Chengwu

    2017-01-01

    In the coal mining process, the destabilization of loaded coal mass is a prerequisite for coal and rock dynamic disaster, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal by a vibration failure process is proposed based on the characteristics of the surface cracks of coal and support vector machine (SVM). A large number of cracked images are obtained by establishing a vibration-induced failure test system and industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the crack; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively identify cracks on the surface of the coal and rock mass automatically. PMID:28973032

  20. Automatic crack detection method for loaded coal in vibration failure process.

    PubMed

    Li, Chengwu; Ai, Dihao

    2017-01-01

    In the coal mining process, the destabilization of loaded coal mass is a prerequisite for coal and rock dynamic disaster, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal by a vibration failure process is proposed based on the characteristics of the surface cracks of coal and support vector machine (SVM). A large number of cracked images are obtained by establishing a vibration-induced failure test system and industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the crack; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively identify cracks on the surface of the coal and rock mass automatically.
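
    A rough sketch of such a pipeline is given below, assuming cracks appear darker than the surrounding coal surface. It uses only two placeholder region features instead of the eight in the paper, all thresholds are illustrative, and OpenCV plus scikit-learn stand in for whatever tooling the authors used.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

# Simplified crack-detection sketch: histogram equalization, hysteresis
# thresholding on darkness, per-region features, and an SVM classifier.
# Features, thresholds, and training data are illustrative assumptions.

def candidate_regions(gray):
    eq = cv2.equalizeHist(gray)                  # enhance contrast
    low, high = 40, 90                           # assumed hysteresis levels
    strong = (eq < low).astype(np.uint8)         # very dark pixels
    weak = (eq < high).astype(np.uint8)          # moderately dark pixels
    n, labels = cv2.connectedComponents(weak)
    keep = np.zeros_like(weak)
    for lbl in range(1, n):                      # keep weak regions touching strong pixels
        mask = labels == lbl
        if strong[mask].any():
            keep[mask] = 1
    return keep

def region_features(mask):
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    feats = []
    for i in range(1, n):
        area = stats[i, cv2.CC_STAT_AREA]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        elongation = max(w, h) / max(1, min(w, h))   # cracks are long and thin
        feats.append([area, elongation])
    return np.array(feats, dtype=float)

# Placeholder training set of manually labelled regions (1 = crack, 0 = non-crack).
X_train = np.array([[400, 12.0], [350, 9.0], [600, 1.5], [80, 1.2]])
y_train = np.array([1, 1, 0, 0])
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print("training accuracy:", clf.score(X_train, y_train))

# Inference on a hypothetical image:
#   gray = cv2.imread("coal_surface.png", cv2.IMREAD_GRAYSCALE)
#   print(clf.predict(region_features(candidate_regions(gray))))
```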

  1. Crack detection and fatigue related delamination in FRP composites applied to concrete

    NASA Astrophysics Data System (ADS)

    Brown, Jeff; Baker, Rebecca; Kallemeyn, Lisa; Zendler, Andrew

    2008-03-01

    Reinforced concrete beams are designed to allow minor concrete cracking in the tension zone. The severity of cracking in a beam element is a good indicator of how well a structure is performing and whether or not repairs are needed to prevent structural failure. FRP composites are commonly used to increase the flexural and shear capacity of RC beam elements, but one potential disadvantage of this method is that strengthened surfaces are no longer visible and cracks or delaminations that result from excessive loading or fatigue may go undetected. This research investigated thermal imaging techniques for detecting load induced cracking in the concrete substrate and delamination of FRP strengthening systems applied to reinforced concrete (RC). One small-scale RC beam (5 in. x 6 in. x 60 in.) was strengthened with FRP and loaded to failure monotonically. An infrared thermography inspection was performed after failure. A second strengthened beam was loaded cyclically for 1,750,000 cycles to investigate how fatigue might affect substrate cracking and delamination growth throughout the service-life of a repaired element. No changes were observed in the FRP bond during/after the cyclic loading. The thermal imaging component of this research included pixel normalization to enhance detectability and characterization of this specific type of damage.

  2. Expert system for online surveillance of nuclear reactor coolant pumps

    DOEpatents

    Gross, Kenny C.; Singer, Ralph M.; Humenik, Keith E.

    1993-01-01

    An expert system for online surveillance of nuclear reactor coolant pumps. This system provides a means for early detection of pump or sensor degradation. Degradation is determined through the use of a statistical analysis technique, sequential probability ratio test, applied to information from several sensors which are responsive to differing physical parameters. The results of sequential testing of the data provide the operator with an early warning of possible sensor or pump failure.
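
    A minimal sequential probability ratio test for a mean shift in a single Gaussian sensor signal can be sketched as follows; the hypothesized means, noise level, and error rates are illustrative assumptions rather than parameters of the patented system.

```python
import math
import random

# SPRT sketch: decide between H0 (normal, mean mu0) and H1 (degraded, mean mu1)
# for a Gaussian signal with known sigma. All parameters are assumptions.

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Return ('degraded' or 'normal', index) when a decision boundary is crossed."""
    upper = math.log((1 - beta) / alpha)      # accept H1 (degradation)
    lower = math.log(beta / (1 - alpha))      # accept H0 (normal)
    llr = 0.0
    for i, x in enumerate(samples):
        # log-likelihood ratio increment for N(mu1, sigma) vs N(mu0, sigma)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "degraded", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(samples) - 1

random.seed(1)
healthy = [random.gauss(0.0, 1.0) for _ in range(200)]
drifted = [random.gauss(0.8, 1.0) for _ in range(200)]
print(sprt(healthy))   # expected to settle on 'normal'
print(sprt(drifted))   # expected to settle on 'degraded'
```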

  3. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
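
    The spares argument can be made concrete with a back-of-the-envelope sketch: if component failures are modeled as a Poisson process, the probability that n spares cover the mission is the Poisson cumulative probability at n. The failure rates and mission length below are illustrative, not values from the paper.

```python
from math import exp, factorial

# How many spares are needed to meet a reliability goal if failures are
# Poisson-distributed? Failure rates and mission length are illustrative.

def poisson_cdf(n, lam):
    return sum(lam**k * exp(-lam) / factorial(k) for k in range(n + 1))

def spares_needed(failures_per_year, mission_years, reliability_goal):
    lam = failures_per_year * mission_years    # expected failures over the mission
    n = 0
    while poisson_cdf(n, lam) < reliability_goal:
        n += 1
    return n

print(spares_needed(0.5, 2.5, 0.99))   # e.g. 0.5 failures/year, 2.5-year mission
print(spares_needed(1.0, 2.5, 0.99))   # an underestimated rate changes the answer
```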

  4. EGFR mutation status of paired cerebrospinal fluid and plasma samples in EGFR mutant non-small cell lung cancer with leptomeningeal metastases.

    PubMed

    Zhao, Jing; Ye, Xin; Xu, Yan; Chen, Minjiang; Zhong, Wei; Sun, Yun; Yang, Zhenfan; Zhu, Guanshan; Gu, Yi; Wang, Mengzhao

    2016-12-01

    Central nervous system (CNS) is the prevalent site for metastases in epidermal growth factor receptor (EGFR) tyrosine kinase inhibitor (TKI)-relapsed NSCLC patients. Understanding the EGFR mutation status in paired cerebrospinal fluid (CSF) and plasma samples after EGFR-TKI treatment failure might be useful for guiding the treatment of intra- and extracranial tumors in those patients. Paired CSF and plasma samples were collected from seven NSCLC patients with CNS metastases after EGFR-TKI failure. EGFR mutations were tested by amplification refractory mutation system (ARMS) and droplet digital PCR (ddPCR) methods. Gefitinib concentrations were evaluated by high-performance liquid chromatography-mass spectrometry (HPLC-MS/MS). EGFR mutations were detected in all seven CSF samples, including three of E19-Del, three of L858R and one of E19-Del&T790M, by both methods. On the other hand, the majority of the matched plasma samples (5/7) were negative for EGFR mutations by both methods. The other two plasma samples were positive for E19-Del&T790M by ddPCR, and one of them had undetectable T790M by ARMS. Gefitinib concentration in CSF was much lower than that in plasma (mean CSF/plasma ratio: 1.8 %). After EGFR-TKI failure, the majority of NSCLC patients with CNS metastases remained positive for EGFR-sensitive mutations in CSF, whereas detection in the matched plasma was much less frequent. The significantly low exposure of gefitinib in CSF might explain the intracranial protection of the EGFR-sensitive-mutation-positive tumor cells.

  5. Detection of Failure in Asynchronous Motor Using Soft Computing Method

    NASA Astrophysics Data System (ADS)

    Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.

    2018-04-01

    This paper investigates stator short-winding failure of the asynchronous motor and its effects on the motor current spectrum. A fuzzy logic approach, i.e., a model-based technique, can help to detect asynchronous motor failures. Fuzzy logic resembles human reasoning in that it enables linguistic inferences to be drawn from vague data. A dynamic model of the asynchronous motor is developed with a fuzzy logic classifier to investigate stator inter-turn failure as well as open-phase failure. A hardware implementation was carried out with LabVIEW for the online monitoring of faults.
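
    A toy sketch of fuzzy classification from a single current-imbalance feature is shown below; the feature, the triangular membership functions, and their breakpoints are illustrative assumptions, not the model developed in the paper.

```python
import numpy as np

# Toy fuzzy classifier: map a phase-current imbalance (%) to linguistic motor
# conditions via triangular membership functions. All breakpoints are assumed.

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def classify(imbalance_pct):
    memberships = {
        "healthy":          tri(imbalance_pct, -1.0, 0.0, 3.0),
        "inter-turn fault": tri(imbalance_pct, 2.0, 6.0, 10.0),
        "open phase":       tri(imbalance_pct, 8.0, 20.0, 40.0),
    }
    return max(memberships, key=memberships.get), memberships

label, degrees = classify(5.0)
print(label, {k: round(v, 2) for k, v in degrees.items()})
```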

  6. SU-F-P-07: Applying Failure Modes and Effects Analysis to Treatment Planning System QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Alaei, P

    2016-06-15

    Purpose: A small-scale implementation of Failure Modes and Effects Analysis (FMEA) for treatment planning system QA by utilizing the methodology of the AAPM TG-100 report. Methods: FMEA requires numerical values for severity (S), occurrence (O) and detectability (D) of each mode of failure. The product of these three values gives a risk priority number (RPN). We have implemented FMEA for the treatment planning system (TPS) QA for two clinics which use Pinnacle and Eclipse TPS. Quantitative monthly QA data dating back 4 years for Pinnacle and 1 year for Eclipse have been used to determine values for severity (deviations from predetermined doses at points or volumes) and occurrence of such deviations. The TPS QA protocol includes a phantom containing solid water and lung- and bone-equivalent heterogeneities. Photon and electron plans have been evaluated in both systems. The dose values at multiple distinct points of interest (POI) within the solid water, lung, and bone-equivalent slabs, as well as mean doses to several volumes of interest (VOI), have been re-calculated monthly using the available algorithms. Results: The computed doses vary slightly month-over-month. There have been more significant deviations following software upgrades, especially if the upgrade involved re-modeling of the beams. TG-100 guidance and the data presented here suggest an occurrence (O) of 2 depending on the frequency of re-commissioning the beams, severity (S) of 3, and detectability (D) of 2, giving an RPN of 12. Conclusion: Computerized treatment planning systems could pose a risk due to dosimetric errors and suboptimal treatment plans. The FMEA analysis presented here suggests that TPS QA should immediately follow software upgrades, but does not need to be performed every month.

  7. A novel strategy for rapid detection of NT-proBNP

    NASA Astrophysics Data System (ADS)

    Cui, Qiyao; Sun, Honghao; Zhu, Hui

    2017-09-01

    In order to establish a simple, rapid, sensitive, and specific quantitative assay for biomarkers of heart failure, biotin-streptavidin technology was combined with a fluorescence immunochromatographic assay in this study to measure biomarker concentrations in serum. The method was applied to detect NT-proBNP, which is valuable for the diagnostic evaluation of heart failure.

  8. A Multiple Sensor Machine Vision System Technology for the Hardwood

    Treesearch

    Richard W. Conners; D.Earl Kline; Philip A. Araman

    1995-01-01

    For the last few years the authors have been extolling the virtues of a multiple sensor approach to hardwood defect detection. Since 1989 the authors have actively been trying to develop such a system. This paper details some of the successes and failures that have been experienced to date. It also discusses what remains to be done and gives time lines for the...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Sisi; Li, Yun; Levitt, Karl N.

    Consensus is a fundamental approach to implementing fault-tolerant services through replication, where there exists a tradeoff between cost and resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures, while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and a high IDS error rate can be tolerated.

  10. Reference and Reference Failures. Technical Report No. 398.

    ERIC Educational Resources Information Center

    Goodman, Bradley A.

    In order to build robust natural language processing systems that can detect and recover from miscommunication, the investigation of how people communicate and how they recover from problems in communication described in this artificial intelligence report focused on reference problems which a listener may have in determining what or whom a…

  11. Fire protection for a Martian colony

    NASA Astrophysics Data System (ADS)

    Beattie, Robert M., Jr.

    The fire prevention failures that occurred in the Apollo 1 and Challenger accidents are reviewed and used to discuss fire protection measures that should be taken in a Martian colony. Fire detection systems, classes of fire, and suppression agents are described. The organization of fire fighting personnel appropriate for Mars is addressed.

  12. Integrative Assessment of Congestion in Heart Failure Throughout the Patient Journey.

    PubMed

    Girerd, Nicolas; Seronde, Marie-France; Coiro, Stefano; Chouihed, Tahar; Bilbault, Pascal; Braun, François; Kenizou, David; Maillier, Bruno; Nazeyrollas, Pierre; Roul, Gérard; Fillieux, Ludivine; Abraham, William T; Januzzi, James; Sebbag, Laurent; Zannad, Faiez; Mebazaa, Alexandre; Rossignol, Patrick

    2018-04-01

    Congestion is one of the main predictors of poor patient outcome in patients with heart failure. However, congestion is difficult to assess, especially when symptoms are mild. Although numerous clinical scores, imaging tools, and biological tests are available to assist physicians in ascertaining and quantifying congestion, not all are appropriate for use in all stages of patient management. In recent years, multidisciplinary management in the community has become increasingly important to prevent heart failure hospitalizations. Electronic alert systems and communication platforms are emerging that could be used to facilitate patient home monitoring that identifies congestion from heart failure decompensation at an earlier stage. This paper describes the role of congestion detection methods at key stages of patient care: pre-admission, admission to the emergency department, in-hospital management, and lastly, discharge and continued monitoring in the community. The multidisciplinary working group, which consisted of cardiologists, emergency physicians, and a nephrologist with both clinical and research backgrounds, reviewed the current literature regarding the various scores, tools, and tests to detect and quantify congestion. This paper describes the role of each tool at key stages of patient care and discusses the advantages of telemedicine as a means of providing true integrated patient care. Copyright © 2018. Published by Elsevier Inc.

  13. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.

  14. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.

  15. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio]

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting, which significantly reduces the algorithm's sensitivity to modeling errors, is presented. The steady-state implementation of the algorithm, based on a single linear model valid for a cruise flight condition, is tested using a nonlinear aircraft simulation. A number of off-nominal, no-failure flight conditions, including maneuvers, nonzero flap deflections, different turbulence levels, and steady winds, were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
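
    A minimal sketch of the underlying generalized likelihood ratio idea is given below for the simplest case of a mean jump in Gaussian residuals with known variance; it does not reproduce the orthogonal-series formulation or the age-weighting modification described above, and the threshold and simulated signal are illustrative only.

      # Minimal GLR test for a step change in the mean of zero-mean Gaussian residuals
      # with known variance; maximizes the likelihood ratio over the unknown onset time.
      import numpy as np

      def glr_mean_jump(residuals, sigma, threshold):
          """Return (alarm, most likely jump onset index)."""
          r = np.asarray(residuals, dtype=float)
          n = len(r)
          best_stat, best_k = 0.0, None
          for k in range(n):                              # hypothesized failure onset time
              s = r[k:].sum()
              stat = s * s / (2.0 * sigma**2 * (n - k))   # GLR statistic for onset k
              if stat > best_stat:
                  best_stat, best_k = stat, k
          return best_stat > threshold, best_k

      rng = np.random.default_rng(0)
      r = rng.normal(0.0, 1.0, 200)
      r[150:] += 1.5                                      # simulated control-surface bias after t=150
      print(glr_mean_jump(r, sigma=1.0, threshold=10.0))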

  16. Launch Vehicle Failure Dynamics and Abort Triggering Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Hill, Ashely D.; Beard, Bernard B.

    2011-01-01

    Launch vehicle ascent is a time of high risk for an on-board crew. There are many types of failures that can kill the crew if the crew is still on board when the failure becomes catastrophic. For some failure scenarios there is plenty of time for the crew to be warned and to depart, whereas in others there is insufficient time for the crew to escape. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if detection and action happen quickly enough. This paper focuses on abort determination based primarily on data already available from the GN&C system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. Derivation of attitude and attitude rate abort triggers, designed so that abort occurs as quickly as possible when needed while avoiding false positives, forms a major portion of the paper. Some of the potential failure modes requiring use of these triggers are described, along with the analysis used to determine the success rate of getting the crew off prior to vehicle demise.
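
    As a hypothetical illustration of an attitude/attitude-rate abort trigger with false-positive protection, the sketch below requires several consecutive limit violations before commanding an abort; the limits, persistence count, and sample data are invented for illustration and are not the values derived in the Ares I analysis.

      # Hypothetical attitude/rate abort trigger with a persistence counter to
      # suppress false positives; thresholds are illustrative only.
      def make_abort_trigger(att_limit_deg=15.0, rate_limit_dps=20.0, persistence=5):
          count = 0
          def update(attitude_error_deg, attitude_rate_dps):
              nonlocal count
              exceeded = (abs(attitude_error_deg) > att_limit_deg or
                          abs(attitude_rate_dps) > rate_limit_dps)
              count = count + 1 if exceeded else 0   # require consecutive violations
              return count >= persistence            # True -> command abort
          return update

      trigger = make_abort_trigger()
      for t, (err, rate) in enumerate([(2, 1), (18, 5), (20, 25), (22, 30), (25, 40), (30, 50)]):
          if trigger(err, rate):
              print(f"abort at sample {t}")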

  17. Failure mode and effects analysis drastically reduced potential risks in clinical trial conduct.

    PubMed

    Lee, Howard; Lee, Heechan; Baik, Jungmi; Kim, Hyunjung; Kim, Rachel

    2017-01-01

    Failure mode and effects analysis (FMEA) is a risk management tool to proactively identify and assess the causes and effects of potential failures in a system, thereby preventing them from happening. The objective of this study was to evaluate the effectiveness of FMEA applied to an academic clinical trial center in a tertiary care setting. A multidisciplinary FMEA focus group at the Seoul National University Hospital Clinical Trials Center selected 6 core clinical trial processes, for which potential failure modes were identified and their risk priority number (RPN) was assessed. Remedial action plans for high-risk failure modes (RPN >160) were devised and a follow-up RPN scoring was conducted a year later. A total of 114 failure modes were identified, with RPN scores ranging from 3 to 378, driven mainly by the severity score. Fourteen failure modes were of high risk, 11 of which were addressed by remedial actions. Rescoring showed a dramatic improvement, attributed to reductions in the occurrence and detection scores of >3 and >2 points, respectively. FMEA is a powerful tool to improve quality in clinical trials. The Seoul National University Hospital Clinical Trials Center is expanding its FMEA capability to other core clinical trial processes.
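
    The RPN arithmetic referred to above is simply severity x occurrence x detection, with modes above the study's threshold of 160 flagged for remedial action; the sketch below illustrates that calculation with hypothetical failure modes and scores.

      # RPN = severity x occurrence x detection; modes above the threshold of 160
      # quoted in the abstract get remedial action plans. Mode names and scores are hypothetical.
      def rpn(severity, occurrence, detection):
          return severity * occurrence * detection

      failure_modes = {
          "consent form version outdated": (9, 6, 4),
          "drug stored outside range":     (7, 3, 2),
      }
      high_risk = {name: rpn(*scores) for name, scores in failure_modes.items()
                   if rpn(*scores) > 160}
      print(high_risk)   # only modes with RPN > 160 are flagged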

  18. Adaptive backstepping fault-tolerant control for flexible spacecraft with unknown bounded disturbances and actuator failures.

    PubMed

    Jiang, Ye; Hu, Qinglei; Ma, Guangfu

    2010-01-01

    In this paper, a robust adaptive fault-tolerant control approach to attitude tracking of flexible spacecraft is proposed for use in situations when there are reaction wheel/actuator failures, persistent bounded disturbances, and unknown inertia parameter uncertainties. The controller is designed based on an adaptive backstepping sliding mode control scheme, and a sufficient condition under which this control law can render the system semi-globally input-to-state stable is also provided, such that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as the set of initial conditions, if the control gains are designed appropriately. Moreover, the control law does not need a fault detection and isolation mechanism even if the failure time instants, patterns, and values of the actuator failures are unknown to the designer, as motivated by a practical spacecraft control application. In addition to detailed derivations of the new controller design and a rigorous sketch of all the associated stability and attitude error convergence proofs, illustrative simulation results of an application to flexible spacecraft show that highly precise attitude control and vibration suppression are successfully achieved under various actuator failure scenarios. 2009. Published by Elsevier Ltd.

  19. Rolex: Resilience-oriented language extensions for extreme-scale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert F.; Hukerikar, Saurabh

    Future exascale high-performance computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. This will pose significant problems for system designers and programmers, who for half a century have enjoyed an execution model that assumed correct behavior by the underlying computing system. The mean time to failure (MTTF) of the system scales inversely with the number of components in the system, and therefore faults and resultant system-level failures will increase as systems scale in terms of the number of processor cores and memory modules used. However, not every detected error need cause a catastrophic failure. Many HPC applications are inherently fault resilient. Yet it is the application programmers who have this knowledge but lack mechanisms to convey it to the system. In this paper, we present new Resilience Oriented Language Extensions (Rolex) which facilitate the incorporation of fault resilience as an intrinsic property of the application code. We describe the syntax and semantics of the language extensions as well as the implementation of the supporting compiler infrastructure and runtime system. Furthermore, our experiments show that an approach that leverages the programmer's insight to reason about the context and significance of faults to the application outcome significantly improves the probability that an application runs to a successful conclusion.

  20. Rolex: Resilience-oriented language extensions for extreme-scale systems

    DOE PAGES

    Lucas, Robert F.; Hukerikar, Saurabh

    2016-05-26

    Future exascale high-performance computing (HPC) systems will be constructed from VLSI devices that will be less reliable than those used today, and faults will become the norm, not the exception. This will pose significant problems for system designers and programmers, who for half a century have enjoyed an execution model that assumed correct behavior by the underlying computing system. The mean time to failure (MTTF) of the system scales inversely with the number of components in the system, and therefore faults and resultant system-level failures will increase as systems scale in terms of the number of processor cores and memory modules used. However, not every detected error need cause a catastrophic failure. Many HPC applications are inherently fault resilient. Yet it is the application programmers who have this knowledge but lack mechanisms to convey it to the system. In this paper, we present new Resilience Oriented Language Extensions (Rolex) which facilitate the incorporation of fault resilience as an intrinsic property of the application code. We describe the syntax and semantics of the language extensions as well as the implementation of the supporting compiler infrastructure and runtime system. Furthermore, our experiments show that an approach that leverages the programmer's insight to reason about the context and significance of faults to the application outcome significantly improves the probability that an application runs to a successful conclusion.

  1. Road Anomalies Detection System Evaluation.

    PubMed

    Silva, Nuno; Shah, Vaibhav; Soares, João; Rodrigues, Helena

    2018-06-21

    Anomalies on road pavement cause discomfort to drivers and passengers, and may cause mechanical failure or even accidents. Governments spend millions of Euros every year on road maintenance, often causing traffic jams and congestion on urban roads on a daily basis. This paper analyses the difference between the deployment of a road anomalies detection and identification system in a “conditioned” and a real world setup, where the system performed worse compared to the “conditioned” setup. It also presents a system performance analysis based on the analysis of the training data sets; on the analysis of the attributes' complexity, through the application of PCA techniques; and on the analysis of the attributes in the context of each anomaly type, using acceleration standard deviation attributes to observe how different anomaly classes are distributed in the Cartesian coordinate system. Overall, in this paper, we describe the main insights on road anomaly detection challenges to support the design and deployment of a new iteration of our system towards a road anomaly detection service that provides information about road conditions to drivers and government entities.
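
    As a rough illustration of the PCA step on acceleration-derived attributes, the sketch below projects per-window standard-deviation features onto two principal components with scikit-learn; the feature layout and data are synthetic stand-ins, not the paper's attribute set.

      # Illustrative PCA on synthetic acceleration standard-deviation attributes
      # (rows = time windows, columns = std-dev of accX, accY, accZ).
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      features = rng.normal(size=(500, 3)) * [0.2, 0.5, 1.0]

      pca = PCA(n_components=2)
      projected = pca.fit_transform(features)
      print(pca.explained_variance_ratio_)  # share of attribute variance captured by 2 components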

  2. Anomaly Detection in Power Quality at Data Centers

    NASA Technical Reports Server (NTRS)

    Grichine, Art; Solano, Wanda M.

    2015-01-01

    The goal during my internship at the National Center for Critical Information Processing and Storage (NCCIPS) is to implement an anomaly detection method through the StruxureWare SCADA Power Monitoring system. The benefit of the anomaly detection mechanism is to provide the capability to detect and anticipate equipment degradation by monitoring power quality prior to equipment failure. First, a study is conducted that examines the existing techniques of power quality management. Based on these findings, and the capabilities of the existing SCADA resources, recommendations are presented for implementing effective anomaly detection. Since voltage, current, and total harmonic distortion demonstrate Gaussian distributions, effective set-points are computed using this model, while maintaining a low false positive count.
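
    A minimal sketch of Gaussian-based set-points is shown below: baseline samples define a mean and standard deviation, and readings outside mean +/- k*sigma are flagged. The choice k = 3 and the example voltage data are assumptions, not settings taken from the StruxureWare deployment.

      # Gaussian set-points: flag samples outside mean +/- k*sigma of a baseline window.
      import numpy as np

      def gaussian_setpoints(baseline, k=3.0):
          mu, sigma = np.mean(baseline), np.std(baseline)
          return mu - k * sigma, mu + k * sigma

      def is_anomalous(sample, low, high):
          return sample < low or sample > high

      baseline_voltage = np.random.default_rng(2).normal(480.0, 2.0, 10_000)  # synthetic baseline
      low, high = gaussian_setpoints(baseline_voltage)
      print(low, high, is_anomalous(493.0, low, high))   # 493 V trips the set-point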

  3. Feasibility of an on-line fission-gas-leak detection system

    NASA Technical Reports Server (NTRS)

    Lustig, P. H.

    1973-01-01

    Calculations were made to determine if a cladding failure could be detected in a 100-kW zirconium hydride reactor primary system by monitoring the highly radioactive NaK coolant for the presence of I-131. The system is to be completely sealed. A leak of 0.01 percent from a single fuel pin was postulated. The 0.364-MeV gamma of I-131 could be monitored on an almost continuous basis, while its presence could be verified by using a longer counting time for the 0.638-MeV gamma. A lithium-drifted germanium detector would eliminate radioactive corrosion product interference that could occur with a sodium iodide scintillation detector.

  4. Speedy routing recovery protocol for large failure tolerance in wireless sensor networks.

    PubMed

    Lee, Joa-Hyoung; Jung, In-Bum

    2010-01-01

    Wireless sensor networks are expected to play an increasingly important role in data collection in hazardous areas. However, the physical fragility of a sensor node makes reliable routing in hazardous areas a challenging problem. Because several sensor nodes in a hazardous area could be damaged simultaneously, the network should be able to recover routing after node failures over large areas. Many routing protocols take single-node failure recovery into account, but it is difficult for these protocols to recover the routing after large-scale failures. In this paper, we propose a routing protocol, referred to as ARF (Adaptive routing protocol for fast Recovery from large-scale Failure), to recover a network quickly after failures over large areas. ARF detects failures by counting the packet losses from parent nodes and, upon failure detection, decreases the routing interval to notify the neighbor nodes of the failure. Our experimental results indicate that ARF could provide recovery from large-area failures quickly, with fewer packets and less energy consumption than previous protocols.
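
    The detection idea can be illustrated with a toy monitor that counts consecutive losses of packets expected from the parent node and, once a loss threshold is reached, shrinks the routing-update interval so neighbors learn of the failure quickly; the threshold and interval values below are hypothetical, not ARF's actual parameters.

      # Toy parent-failure monitor: consecutive packet losses trigger a shorter
      # routing-update interval so the failure propagates quickly. Values are hypothetical.
      class ParentMonitor:
          def __init__(self, loss_threshold=3, normal_interval=60.0, recovery_interval=5.0):
              self.loss_threshold = loss_threshold
              self.normal_interval = normal_interval
              self.recovery_interval = recovery_interval
              self.consecutive_losses = 0

          def on_expected_packet(self, received: bool) -> float:
              """Return the routing-update interval to use after this observation."""
              self.consecutive_losses = 0 if received else self.consecutive_losses + 1
              if self.consecutive_losses >= self.loss_threshold:
                  return self.recovery_interval   # parent presumed failed: advertise fast
              return self.normal_interval

      m = ParentMonitor()
      print([m.on_expected_packet(r) for r in [True, False, False, False, True]])
      # -> [60.0, 60.0, 60.0, 5.0, 60.0]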

  5. Early and simple detection of diastolic dysfunction during weaning from mechanical ventilation

    PubMed Central

    2012-01-01

    Weaning from mechanical ventilation imposes additional work on the cardiovascular system and can provoke or unmask left ventricular diastolic dysfunction with consecutive pulmonary edema, or systolic dysfunction with inadequate increase of cardiac output and unsuccessful weaning. Echocardiography, which is increasingly used for hemodynamic assessment of critically ill patients, allows differentiation between systolic and diastolic failure. For various reasons, transthoracic echocardiographic assessment was limited to patients with good echo visibility and to those with sinus rhythm without excessive tachycardia. In these patients, often selected after unsuccessful weaning, echocardiographic findings were predictive of weaning failure of cardiac origin. In some studies, patients with various degrees of systolic dysfunction were included, making evaluation of the contribution of diastolic dysfunction to weaning failure even more difficult. The recent study by Moschietto and coworkers included unselected patients and used very simple diastolic variables for assessment of diastolic function. They also included patients with atrial fibrillation and repeated echocardiographic examination only 10 minutes after starting a spontaneous breathing trial. The main finding was that weaning failure was not associated with systolic dysfunction but with diastolic dysfunction. By measuring simple and robust parameters for detection of diastolic dysfunction, the study was able to predict weaning failure in patients with sinus rhythm and atrial fibrillation as early as 10 minutes after beginning a spontaneous breathing trial. Further studies are necessary to determine whether appropriate treatment tailored according to the echocardiographic findings will result in successful weaning. PMID:22770365

  6. Early and simple detection of diastolic dysfunction during weaning from mechanical ventilation.

    PubMed

    Voga, Gorazd

    2012-07-06

    Weaning from mechanical ventilation imposes additional work on the cardiovascular system and can provoke or unmask left ventricular diastolic dysfunction with consecutive pulmonary edema, or systolic dysfunction with inadequate increase of cardiac output and unsuccessful weaning. Echocardiography, which is increasingly used for hemodynamic assessment of critically ill patients, allows differentiation between systolic and diastolic failure. For various reasons, transthoracic echocardiographic assessment was limited to patients with good echo visibility and to those with sinus rhythm without excessive tachycardia. In these patients, often selected after unsuccessful weaning, echocardiographic findings were predictive of weaning failure of cardiac origin. In some studies, patients with various degrees of systolic dysfunction were included, making evaluation of the contribution of diastolic dysfunction to weaning failure even more difficult. The recent study by Moschietto and coworkers included unselected patients and used very simple diastolic variables for assessment of diastolic function. They also included patients with atrial fibrillation and repeated echocardiographic examination only 10 minutes after starting a spontaneous breathing trial. The main finding was that weaning failure was not associated with systolic dysfunction but with diastolic dysfunction. By measuring simple and robust parameters for detection of diastolic dysfunction, the study was able to predict weaning failure in patients with sinus rhythm and atrial fibrillation as early as 10 minutes after beginning a spontaneous breathing trial. Further studies are necessary to determine whether appropriate treatment tailored according to the echocardiographic findings will result in successful weaning.

  7. Academic-Community Hospital Comparison of Vulnerabilities in Door-to-Needle Process for Acute Ischemic Stroke.

    PubMed

    Prabhakaran, Shyam; Khorzad, Rebeca; Brown, Alexandra; Nannicelli, Anna P; Khare, Rahul; Holl, Jane L

    2015-10-01

    Although best practices have been developed for achieving door-to-needle (DTN) times ≤60 minutes for stroke thrombolysis, critical DTN process failures persist. We sought to compare these failures in the Emergency Department at an academic medical center and a community hospital. Failure modes, effects, and criticality analysis was used to identify system and process failures. Multidisciplinary teams involved in DTN care participated in moderated sessions at each site. As a result, DTN process maps were created, and potential failures and their causes, frequency, severity, and existing safeguards were identified. For each failure, a risk priority number and criticality score were calculated; failures were then ranked, with the highest scores representing the most critical failures and targets for intervention. We detected a total of 70 failures in 50 process steps and 76 failures in 42 process steps at the community hospital and academic medical center, respectively. At the community hospital, critical failures included (1) delay in registration because of Emergency Department overcrowding, (2) incorrect triage diagnosis among walk-in patients, and (3) delay in obtaining consent for thrombolytic treatment. At the academic medical center, critical failures included (1) incorrect triage diagnosis among walk-in patients, (2) delay in stroke team activation, and (3) delay in obtaining computed tomographic imaging. Although the identification of common critical failures suggests opportunities for a generalizable process redesign, differences in the criticality and nature of failures must be addressed at the individual hospital level to develop robust and sustainable solutions to reduce DTN time. © 2015 American Heart Association, Inc.

  8. A flexible home monitoring platform for patients affected by chronic heart failure directly integrated with the remote Hospital Information System

    NASA Astrophysics Data System (ADS)

    Donati, Massimiliano; Bacchillone, Tony; Saponara, Sergio; Fanucci, Luca

    2011-05-01

    Today, Chronic Heart Failure (CHF) represents one of the leading causes of hospitalization among chronic diseases, especially for elderly citizens, with a considerable impact on patient quality of life, resource congestion, and healthcare costs for the National Sanitary System. The current healthcare model is mostly hospital-based and consists of periodic visits, but unfortunately it does not allow exacerbations to be detected promptly, resulting in a large number of rehospitalizations. Recently, physicians and administrators have identified telemonitoring systems as a strategy able to provide effective and cost-efficient healthcare services for CHF patients, ensuring early diagnosis and treatment when necessary. This work presents a complete and integrated ICT solution to improve the management of chronic heart failure through the remote monitoring of vital signs at the patient's home, connecting in-hospital care of the acute syndrome with out-of-hospital follow-up. The proposed platform represents the patient's interface, acting as a link between biomedical sensors and the data collection point at the Hospital Information System (HIS) in order to handle, in a transparent way, the reception, analysis, and forwarding of the main physiological parameters.

  9. Detecting Solenoid Valve Deterioration in In-Use Electronic Diesel Fuel Injection Control Systems

    PubMed Central

    Tsai, Hsun-Heng; Tseng, Chyuan-Yow

    2010-01-01

    The diesel engine is the main power source for most agricultural vehicles. The control of diesel engine emissions is an important global issue. Fuel injection control systems directly affect fuel efficiency and emissions of diesel engines. Deterioration faults, such as rack deformation, solenoid valve failure, and rack-travel sensor malfunction, can occur in the fuel injection module of electronic diesel control (EDC) systems. Among these faults, solenoid valve failure is the most likely to occur in in-use diesel engines. According to previous studies, this failure results from wear of the plunger and sleeve caused by long-term usage, lubricant degradation, or engine overheating. Due to the difficulty of identifying solenoid valve deterioration, this study focuses on developing a sensor identification algorithm that can clearly classify the usability of the solenoid valve without disassembling the fuel pump of an EDC system in in-use agricultural vehicles. A diagnostic algorithm is proposed, including a feedback controller, a parameter identifier, a linear variable differential transformer (LVDT) sensor, and a neural network classifier. Experimental results show that the proposed algorithm can accurately identify the usability of solenoid valves. PMID:22163597

  10. Detecting solenoid valve deterioration in in-use electronic diesel fuel injection control systems.

    PubMed

    Tsai, Hsun-Heng; Tseng, Chyuan-Yow

    2010-01-01

    The diesel engine is the main power source for most agricultural vehicles. The control of diesel engine emissions is an important global issue. Fuel injection control systems directly affect fuel efficiency and emissions of diesel engines. Deterioration faults, such as rack deformation, solenoid valve failure, and rack-travel sensor malfunction, can occur in the fuel injection module of electronic diesel control (EDC) systems. Among these faults, solenoid valve failure is the most likely to occur in in-use diesel engines. According to previous studies, this failure results from wear of the plunger and sleeve caused by long-term usage, lubricant degradation, or engine overheating. Due to the difficulty of identifying solenoid valve deterioration, this study focuses on developing a sensor identification algorithm that can clearly classify the usability of the solenoid valve without disassembling the fuel pump of an EDC system in in-use agricultural vehicles. A diagnostic algorithm is proposed, including a feedback controller, a parameter identifier, a linear variable differential transformer (LVDT) sensor, and a neural network classifier. Experimental results show that the proposed algorithm can accurately identify the usability of solenoid valves.
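
    As a loose illustration of the final classification step only, the sketch below trains a small neural network to label a valve as usable or deteriorated from identified model parameters; the features, class separation, and scikit-learn classifier are stand-ins, not the paper's LVDT-based identification chain.

      # Hypothetical classification step: label valves as usable (0) or deteriorated (1)
      # from two synthetic identified parameters (e.g., gain and time constant).
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      healthy = rng.normal([1.0, 0.10], 0.02, size=(200, 2))
      worn    = rng.normal([0.8, 0.16], 0.02, size=(200, 2))
      X = np.vstack([healthy, worn])
      y = np.array([0] * 200 + [1] * 200)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
      print(clf.score(X_te, y_te))   # classification accuracy on held-out valves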

  11. Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott

    2008-01-01

    A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.
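
    In the spirit of the approach described above, a toy aggregation of crew risk over failure bins might look like the sketch below, where each bin contributes P(failure) x P(loss of crew | failure) reduced by an assumed abort effectiveness; all bin names and numbers are hypothetical.

      # Toy crew-risk aggregation over failure bins; every value is hypothetical.
      failure_bins = {
          # name: (P(failure), P(loss of crew | failure, no abort), abort effectiveness)
          "propulsion explosion": (1e-3, 0.9, 0.70),
          "loss of control":      (2e-3, 0.8, 0.95),
          "structural breakup":   (5e-4, 1.0, 0.50),
      }

      def crew_risk(bins):
          return sum(p_fail * p_loss * (1.0 - abort_eff)
                     for p_fail, p_loss, abort_eff in bins.values())

      print(f"P(loss of crew) ~ {crew_risk(failure_bins):.2e}")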

  12. Robust failure detection filters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sanmartin, A. M.

    1985-01-01

    The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.

  13. HIV resistance testing and detected drug resistance in Europe.

    PubMed

    Schultze, Anna; Phillips, Andrew N; Paredes, Roger; Battegay, Manuel; Rockstroh, Jürgen K; Machala, Ladislav; Tomazic, Janez; Girard, Pierre M; Januskevica, Inga; Gronborg-Laut, Kamilla; Lundgren, Jens D; Cozzi-Lepri, Alessandro

    2015-07-17

    To describe regional differences and trends in resistance testing among individuals experiencing virological failure and the prevalence of detected resistance among those individuals who had a genotypic resistance test done following virological failure. Multinational cohort study. Individuals in EuroSIDA with virological failure (>1 RNA measurement >500 copies/mL on ART after >6 months on ART) after 1997 were included. Adjusted odds ratios (aORs) for resistance testing following virological failure and aORs for the detection of resistance among those who had a test were calculated using logistic regression with generalized estimating equations. Compared to 74.2% of ART-experienced individuals in 1997, only 5.1% showed evidence of virological failure in 2012. The odds of resistance testing declined after 2004 (global P < 0.001). Resistance was detected in 77.9% of the tests, NRTI resistance being most common (70.3%), followed by NNRTI (51.6%) and protease inhibitor (46.1%) resistance. The odds of detecting resistance were lower in tests done in 1997-1998, 1999-2000 and 2009-2010, compared to those carried out in 2003-2004 (global P < 0.001). Resistance testing was less common in Eastern Europe [aOR 0.72, 95% confidence interval (CI) 0.55-0.94] compared to Southern Europe, whereas the detection of resistance given that a test was done was less common in Northern (aOR 0.29, 95% CI 0.21-0.39) and Central Eastern (aOR 0.47, 95% CI 0.29-0.76) Europe, compared to Southern Europe. Despite a concurrent decline in virological failure and testing, drug resistance was commonly detected. This suggests a selective approach to resistance testing. The regional differences identified indicate that policy aiming to minimize the emergence of resistance is of particular relevance in some European regions, notably in the countries in Eastern Europe.

  14. Application of Failure Mode and Effect Analysis (FMEA), cause and effect analysis, and Pareto diagram in conjunction with HACCP to a corn curl manufacturing plant.

    PubMed

    Varzakas, Theodoros H; Arvanitoyannis, Ioannis S

    2007-01-01

    The Failure Mode and Effect Analysis (FMEA) model has been applied to the risk assessment of corn curl manufacturing. A tentative approach to FMEA application in the snacks industry was attempted in an effort to exclude the presence of GMOs in the final product. This is of crucial importance from both the ethics and the legislation (Regulations EC 1829/2003; EC 1830/2003; Directive EC 18/2001) points of view. Preliminary Hazard Analysis and Fault Tree Analysis were used to analyze and predict the occurring failure modes in a food chain system (corn curls processing plant), based on the functions, characteristics, and/or interactions of the ingredients or the processes upon which the system depends. Critical Control Points have been identified and implemented in the cause and effect diagram (also known as the Ishikawa, tree, or fishbone diagram). Finally, Pareto diagrams were employed to optimize the GMO detection potential of FMEA.
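
    A short Pareto tabulation of the kind mentioned above can be produced by sorting failure causes by frequency and reporting the cumulative share, so the "vital few" causes stand out; the cause names and counts below are hypothetical.

      # Pareto tabulation: rank causes by frequency and show the cumulative share.
      from collections import Counter

      observed_causes = (["supplier GMO contamination"] * 18 + ["label mix-up"] * 7 +
                         ["metal fragments"] * 3 + ["moisture ingress"] * 2)
      counts = Counter(observed_causes)
      total = sum(counts.values())

      cumulative = 0
      for cause, n in counts.most_common():
          cumulative += n
          print(f"{cause:28s} {n:3d}  cumulative {100 * cumulative / total:5.1f}%")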

  15. Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2014-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends, in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences are performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.

  16. Stage Separation Failure: Model Based Diagnostics and Prognostics

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry; Hafiychuk, Vasyl; Kulikov, Igor; Smelyanskiy, Vadim; Patterson-Hine, Ann; Hanson, John; Hill, Ashley

    2010-01-01

    Safety of the next-generation space flight vehicles requires development of an in-flight Failure Detection and Prognostic (FD&P) system. Development of such a system is a challenging task that involves the analysis of many difficult engineering problems across the board. In this paper we report progress in the development of FD&P for the re-contact fault between the upper stage nozzle and the inter-stage caused by a failure of first stage and upper stage separation. High-fidelity models and analytical estimations are applied to analyze the following sequence of events: (i) structural dynamics of the nozzle extension during the impact; (ii) structural stability of the deformed nozzle in the presence of the pressure and temperature loads induced by the hot gas flow during engine start-up; and (iii) the fault-induced thrust changes in the steady burning regime. The diagnostic is based on measurements of the impact torque. The prognostic is based on the analysis of the correlation between the actuator signal and fault-induced changes in the nozzle structural stability and thrust.

  17. Advanced Signal Conditioners for Data-Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Lucena, Angel; Perotti, Jose; Eckhoff, Anthony; Medelius, Pedro

    2004-01-01

    Signal conditioners embodying advanced concepts in analog and digital electronic circuitry and software have been developed for use in data-acquisition systems that are required to be compact and lightweight, to utilize electric energy efficiently, and to operate with high reliability, high accuracy, and high power efficiency, without intervention by human technicians. These signal conditioners were originally intended for use aboard spacecraft. There are also numerous potential terrestrial uses - especially in the fields of aeronautics and medicine, wherein it is necessary to monitor critical functions. Going beyond the usual analog and digital signal-processing functions of prior signal conditioners, the new signal conditioner performs the following additional functions: It continuously diagnoses its own electronic circuitry, so that it can detect failures and repair itself (as described below) within seconds. It continuously calibrates itself on the basis of a highly accurate and stable voltage reference, so that it can continue to generate accurate measurement data, even under extreme environmental conditions. It repairs itself in the sense that it contains a micro-controller that reroutes signals among redundant components as needed to maintain the ability to perform accurate and stable measurements. It detects deterioration of components, predicts future failures, and/or detects imminent failures by means of a real-time analysis in which, among other things, data on its present state are continuously compared with locally stored historical data. It minimizes unnecessary consumption of electric energy. The design architecture divides the signal conditioner into three main sections: an analog signal section, a digital module, and a power-management section. The design of the analog signal section does not follow the traditional approach of ensuring reliability through total redundancy of hardware: Instead, following an approach called spare parts tool box, the reliability of each component is assessed in terms of such considerations as risks of damage, mean times between failures, and the effects of certain failures on the performance of the signal conditioner as a whole system. Then, fewer or more spares are assigned for each affected component, pursuant to the results of this analysis, in order to obtain the required degree of reliability of the signal conditioner as a whole system. The digital module comprises one or more processors and field-programmable gate arrays, the number of each depending on the results of the aforementioned analysis. The digital module provides redundant control, monitoring, and processing of several analog signals. It is designed to minimize unnecessary consumption of electric energy, including, when possible, going into a low-power "sleep" mode that is implemented in firmware. The digital module communicates with external equipment via a personal-computer serial port. The digital module monitors the "health" of the rest of the signal conditioner by processing defined measurements and/or trends. It automatically makes adjustments to respond to channel failures, compensate for effects of temperature, and maintain calibration.

  18. Metal Whiskers: Failure Modes and Mitigation Strategies

    NASA Technical Reports Server (NTRS)

    Brusse, Jay A.; Leidecker, Henning

    2007-01-01

    Metal coatings, especially tin, zinc, and cadmium, are unpredictably susceptible to the formation of electrically conductive, crystalline filaments referred to as metal whiskers. The use of such coatings in and around electrical systems presents a risk of electrical shorting. Examples of metal whisker formation are shown, with emphasis on optical inspection techniques to improve the probability of detection. The failure modes (i.e., electrical shorting behavior) associated with metal whiskers are described. Based on an almost 9-year-long study, the benefits of a polyurethane conformal coat (namely, Arathane 5750) in protecting electrical conductors from whisker-induced short circuit anomalies are discussed.

  19. New methods for the condition monitoring of level crossings

    NASA Astrophysics Data System (ADS)

    García Márquez, Fausto Pedro; Pedregal, Diego J.; Roberts, Clive

    2015-04-01

    Level crossings represent a high risk for railway systems. This paper demonstrates the potential to improve maintenance management through the use of intelligent condition monitoring coupled with reliability centred maintenance (RCM). RCM combines advanced electronics, control, computing, and communication technologies to address the multiple objectives of cost effectiveness, improved quality, reliability, and services. RCM collects digital and analogue signals utilising distributed transducers connected to either point-to-point or digital bus communication links. Assets in many industries use data logging capable of providing post-failure diagnostic support, but to date little use has been made of combined qualitative and quantitative fault detection techniques. The research takes the hydraulic railway level crossing barrier (LCB) system as a case study and develops a generic strategy for failure analysis, data acquisition, and incipient fault detection. For each barrier, the hydraulic characteristics, the motor's current and voltage, the hydraulic pressure, and the barrier's position are acquired. In order to acquire the data efficiently at a central point, without errors, a distributed single-cable Fieldbus is utilised. This allows the connection of all sensors through the project's proprietary communication nodes to a high-speed bus. The condition monitoring system developed in this paper detects faults by comparing what can be considered a 'normal' or 'expected' signal shape with the actual shape observed as new data become available. ARIMA (autoregressive integrated moving average) models were employed for detecting faults. The Jarque-Bera and Ljung-Box statistical tests were used for testing the model.
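
    The residual-based checks named above can be sketched as follows: fit an ARIMA model to a healthy reference signal and flag new data whose residuals fail a normality (Jarque-Bera) or whiteness (Ljung-Box) test. The model order, significance level, and synthetic signal are illustrative choices, and the snippet assumes a recent statsmodels version.

      # Fit ARIMA to a healthy reference signal, then test residuals for normality
      # (Jarque-Bera) and whiteness (Ljung-Box); failures suggest a fault.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.stats.stattools import jarque_bera
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(4)
      healthy = rng.normal(0.0, 1.0, 500).cumsum()      # stand-in for a recorded current trace

      model = ARIMA(healthy, order=(1, 1, 1)).fit()
      resid = model.resid[1:]                           # drop the first differencing artifact

      jb_stat, jb_pvalue, _, _ = jarque_bera(resid)
      lb_pvalue = acorr_ljungbox(resid, lags=[10])["lb_pvalue"].iloc[0]

      fault_suspected = jb_pvalue < 0.01 or lb_pvalue < 0.01
      print(jb_pvalue, lb_pvalue, fault_suspected)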

  20. Glandular Lesions of the Cervix in Clinical Practice: A Cytology, Histology, and Human Papillomavirus Correlation Study From 2 Institutions.

    PubMed

    Miller, Ross A; Mody, Dina R; Tams, Kimberlee C; Thrall, Michael J

    2015-11-01

    The Papanicolaou (Pap) test has indisputably decreased cervical cancer mortality, as rates have declined by up to 80% in the United States since its implementation. However, the Pap test is considered less sensitive for detecting glandular lesions than for detecting those of squamous origin. Some studies have even suggested an increasing incidence of cervical adenocarcinoma, which may be a consequence of a relatively reduced ability to detect glandular lesions with cervical cancer screening techniques. To evaluate the detection rate of glandular lesions with techniques currently used for cervical cancer screening and to provide insight as to which techniques are most efficacious in our study population, we retrospectively reviewed any available cytology, human papillomavirus (HPV), and histologic malignancy data in patients diagnosed with adenocarcinoma in situ and adenocarcinoma from 2 geographically and socioeconomically disparate hospital systems. Identified patients who had had a negative/unsatisfactory Pap test within 5 years of adenocarcinoma in situ or adenocarcinoma tissue diagnosis were considered Pap test screening failures. Patients with negative HPV tests on cytology samples were considered HPV screening failures. One hundred thirty cases were identified (age range, 22-93 years); 39 (30%) had no Pap history in our files. Eight of the 91 remaining cases (8.8%) were screening failures. The detected sensitivity for identifying adenocarcinoma in situ/adenocarcinoma in this study was 91.2% by cytology alone and 92.3% when incorporating HPV testing. The most common cytologic diagnosis was atypical glandular cells (25 cases), and those diagnosed with adenocarcinoma were 7.4 years older than those diagnosed with adenocarcinoma in situ (50.3 versus 42.9 years). Nine of 24 HPV-tested cases (37.5%) were called atypical squamous cells of undetermined significance on cytology. Our results highlight the importance of combined Pap and HPV cotesting. Although the number of cases identified is relatively small, our data suggest that screening for squamous lesions facilitates the recognition of glandular lesions in the cervix. Additionally, increased use of combined Pap and HPV cotesting may decrease detection failure rates with regard to glandular lesions.
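
    The quoted cytology sensitivity follows directly from the counts above: 8 screening failures among the 91 cases with available Pap history leaves 83 detected cases, as the short check below confirms.

      # Worked check of the reported cytology-alone sensitivity.
      cases_with_history = 91
      screening_failures = 8
      sensitivity = (cases_with_history - screening_failures) / cases_with_history
      print(f"{sensitivity:.1%}")   # ~91.2%, matching the figure quoted above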
