Sample records for fault management capabilities

  1. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  2. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.
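
    A minimal sketch of the residual-based idea behind model-based fault detection and isolation, assuming an invented first-order bus model and threshold (not the TRW implementation): a fault is declared whenever measurement and model prediction diverge, so no failure mode needs to be characterized in advance.

    ```python
    # Minimal sketch of residual-based, model-based FDI. The bus model,
    # numbers, and threshold are invented for illustration.
    def predict_voltage(v_prev, load_current, dt, capacitance=10.0):
        """Propagate a simple bus model one step: C * dV/dt = -I_load."""
        return v_prev - (load_current / capacitance) * dt

    def detect_fault(measured, predicted, threshold=0.5):
        """Declare a fault when the model residual exceeds a threshold.

        No failure mode is characterized in advance; any sufficiently large
        disagreement between model and measurement is flagged."""
        residual = abs(measured - predicted)
        return residual > threshold, residual

    # Usage: a sensor stuck at 28 V diverges from the model prediction.
    v_model = predict_voltage(v_prev=28.0, load_current=10.0, dt=1.0)
    faulted, r = detect_fault(measured=28.0, predicted=v_model)
    print(f"residual={r:.2f} V, fault detected: {faulted}")
    ```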

  3. Implementation of Integrated System Fault Management Capability

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark

    2008-01-01

    Fault management supports rocket engine test missions with highly reliable and accurate measurements while improving availability and lifecycle costs. CORE ELEMENTS: architecture, taxonomy, and ontology (ATO) for DIaK management; Intelligent Sensor Processes; Intelligent Element Processes; Intelligent Controllers; Intelligent Subsystem Processes; Intelligent System Processes; Intelligent Component Processes.

  4. Analytical Approaches to Guide SLS Fault Management (FM) Development

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.

    2012-01-01

    Extensive analysis is needed to determine the right set of FM capabilities to provide the most coverage without significantly increasing the cost and complexity of the overall vehicle systems or degrading their reliability through false positives and false negatives (FP/FN). Strong collaboration with the stakeholders is required to support the determination of the best triggers and response options. The SLS Fault Management process has been documented in the Space Launch System Program (SLSP) Fault Management Plan (SLS-PLAN-085).

  5. Real-Time Simulation for Verification and Validation of Diagnostic and Prognostic Algorithms

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert; Luu, Chuong; Santi, Louis M.; Sowers, T. Shane

    2005-01-01

    To verify that a health management system (HMS) performs as expected, a virtual system simulation capability, including interaction with the associated platform or vehicle, very likely will need to be developed. The rationale for developing this capability is discussed and includes the limited capability to seed faults into the actual target system due to the risk of potential damage to high value hardware. The capability envisioned would accurately reproduce the propagation of a fault or failure as observed by sensors located at strategic locations on and around the target system and would also accurately reproduce the control system and vehicle response. In this way, HMS operation can be exercised over a broad range of conditions to verify that it meets requirements for accurate, timely response to actual faults with adequate margin against false and missed detections. An overview is also presented of a real-time rocket propulsion health management system laboratory which is available for future rocket engine programs. The health management elements and approaches of this lab are directly applicable for future space systems. In this paper the various components are discussed and the general fault detection, diagnosis, isolation, and response (FDIR) concept is presented. Additionally, the complexities of V&V (Verification and Validation) for advanced algorithms and the simulation capabilities required to meet the changing state-of-the-art in HMS are discussed.
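
    A toy illustration of the fault-seeding and detection-latency measurement idea described above, with an invented signal, drift fault, and threshold:

    ```python
    # Seed a fault into a simulated sensor stream and measure detection
    # latency. The nominal value, drift fault, and threshold are invented.
    import random

    def run_case(fault_time=5.0, drift_rate=0.25, threshold=1.0,
                 steps=200, dt=0.1):
        """Seed a slow drift fault at fault_time; return first detection time."""
        random.seed(1)
        for k in range(steps):
            t = k * dt
            truth = 10.0
            drift = drift_rate * max(0.0, t - fault_time)  # seeded fault effect
            reading = truth + random.gauss(0.0, 0.1) + drift
            if abs(reading - truth) > threshold:
                return t
        return None

    t_detect = run_case()
    print(f"fault seeded at 5.0 s, detected at {t_detect:.1f} s, "
          f"latency {t_detect - 5.0:.1f} s")
    ```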

  6. Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens, Karen; Fesq, Lorraine M.

    2010-01-01

    Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the findings and recommendations from that workshop, particularly as fault management development issues affect operations and the development of operations capabilities.

  7. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  8. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations about the related hazard, similarly to other countries (e.g., the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e., faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g., fault plane exposure) or on indirect investigations (geophysical data) are insufficient or even unreliable for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area in which active and capable faults can first be mapped based on such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time intervals to assess active and capable faults for engineering practices in Italy. Eng. Geol., 139/140, 50-65.

  9. NASA Spacecraft Fault Management Workshop Results

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the findings and recommendations from that workshop, as well as opportunities identified for future investment in tools, processes, and products to facilitate the development of space flight fault management capabilities.

  10. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.

  11. Orion GN&C Fault Management System Verification: Scope And Methodology

    NASA Technical Reports Server (NTRS)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
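
    A highly simplified stand-in for the verification approach described above: a random search over an invented fault-parameter space for safety-margin violations, followed by plain Monte Carlo estimation of the failure probability. The swarm search and rare-event sequential Monte Carlo of the actual Orion work are replaced here by their simplest counterparts, and the margin function is a toy, not a GN&C model.

    ```python
    # Search an invented fault space for worst cases, then estimate the
    # failure probability by plain Monte Carlo (toy stand-ins throughout).
    import random

    def safety_margin(thruster_loss, sensor_bias):
        """Toy margin: positive is safe, negative is loss of vehicle."""
        return 1.0 - 2.5 * thruster_loss - 1.5 * abs(sensor_bias)

    def search_failures(n=10_000):
        """Random search for the failure case with the worst margin."""
        worst = None
        for _ in range(n):
            case = (random.uniform(0.0, 0.5), random.uniform(-0.4, 0.4))
            m = safety_margin(*case)
            if worst is None or m < worst[0]:
                worst = (m, case)
        return worst

    def estimate_failure_prob(n=100_000):
        """Fraction of sampled fault cases that violate the margin."""
        fails = sum(
            safety_margin(random.uniform(0.0, 0.5),
                          random.uniform(-0.4, 0.4)) < 0.0
            for _ in range(n)
        )
        return fails / n

    random.seed(0)
    print("worst case found:", search_failures())
    print("estimated failure probability:", estimate_failure_prob())
    ```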

  12. Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz

    2008-01-01

    In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.

  13. Advanced Ground Systems Maintenance Enterprise Architecture Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. Capabilities include anomaly detection, fault isolation, prognostics and physics-based diagnostics.

  14. Intelligent fault diagnosis and failure management of flight control actuation systems

    NASA Technical Reports Server (NTRS)

    Bonnice, William F.; Baker, Walter

    1988-01-01

    The real-time fault diagnosis and failure management (FDFM) of current operational and experimental dual tandem aircraft flight control system actuators was investigated. Dual tandem actuators were studied because of the active FDFM capability required to manage the redundancy of these actuators. The FDFM methods used on current dual tandem actuators were determined by examining six specific actuators. The FDFM capability on these six actuators was also evaluated. One approach for improving the FDFM capability on dual tandem actuators may be through the application of artificial intelligence (AI) technology. Existing AI approaches and applications of FDFM were examined and evaluated. Based on the general survey of AI FDFM approaches, the potential role of AI technology for real-time actuator FDFM was determined. Finally, FDFM and maintainability improvements for dual tandem actuators were recommended.

  15. Advanced Ground Systems Maintenance Enterprise Architecture Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M. (Compiler)

    2015-01-01

    The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics and physics-based diagnostics.

  16. Health management and controls for earth to orbit propulsion systems

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.

    1992-01-01

    Fault detection and isolation for advanced rocket engine controllers are discussed focusing on advanced sensing systems and software which significantly improve component failure detection for engine safety and health management. Aerojet's Space Transportation Main Engine controller for the National Launch System is the state of the art in fault tolerant engine avionics. Health management systems provide high levels of automated fault coverage and significantly improve vehicle delivered reliability and lower preflight operations costs. Key technologies, including the sensor data validation algorithms and flight capable spectrometers, have been demonstrated in ground applications and are found to be suitable for bridging programs into flight applications.

  17. Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra L.; Woods, David D.; Potter, Scott S.; Johannesen, Leila; Holloway, Matthew; Forbus, Kenneth D.

    1991-01-01

    Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real time fault management capabilities. Intelligent fault management systems within the NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real time fault management in aerospace domains; (2) recommendations and examples for improving intelligent systems design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.

  18. Software Health Management with Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole; Schumann, Johann

    2011-01-01

    Most modern aircraft, as well as other complex machinery, are equipped with diagnostic systems for their major subsystems. During operation, sensors provide important information about the subsystem (e.g., the engine), and that information is used to detect and diagnose faults. Most of these systems focus on the monitoring of a mechanical, hydraulic, or electromechanical subsystem of the vehicle or machinery. Only recently have health management systems that monitor software been developed. In this paper, we discuss our approach of using Bayesian networks for Software Health Management (SWHM). We discuss SWHM requirements, which make advanced reasoning capabilities for detection and diagnosis important. Then we present our approach to using Bayesian networks for the construction of health models that dynamically monitor a software system and are capable of detecting and diagnosing faults.
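
    A minimal sketch of the Bayesian-network idea, using a tiny invented two-symptom network rather than the authors' health models: the posterior probability of a software fault given observed symptoms is computed by enumeration.

    ```python
    # Tiny invented network: Fault -> SlowResponse, Fault -> BadOutput.
    # All probabilities are illustrative, not from the paper.
    P_FAULT = 0.01
    P_SLOW_GIVEN = {True: 0.8, False: 0.05}  # P(slow response | fault state)
    P_BAD_GIVEN = {True: 0.7, False: 0.01}   # P(bad output | fault state)

    def posterior_fault(slow_observed, bad_observed):
        """P(fault | evidence), by enumeration over the two fault states."""
        def joint(fault):
            p_slow = P_SLOW_GIVEN[fault] if slow_observed else 1 - P_SLOW_GIVEN[fault]
            p_bad = P_BAD_GIVEN[fault] if bad_observed else 1 - P_BAD_GIVEN[fault]
            prior = P_FAULT if fault else 1 - P_FAULT
            return prior * p_slow * p_bad
        num = joint(True)
        return num / (num + joint(False))

    # Both symptoms observed: the fault hypothesis dominates (~0.92).
    print(f"P(fault | slow, bad output) = {posterior_fault(True, True):.3f}")
    ```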

  19. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  20. Vehicle Integrated Propulsion Research for the Study of Health Management Capabilities

    NASA Technical Reports Server (NTRS)

    Lekki, John D.; Simon, Donald L.; Hunter, Gary W.; Woike, Mary; Tokars, Roger P.

    2012-01-01

    Presentation on vehicle integrated propulsion research results and planning. This research emphasizes the testing of advanced health management sensors and diagnostics in an aircraft engine that is operated through multiple baseline and fault conditions.

  1. Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Patterson-Hine, Ann

    2003-01-01

    Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.

  2. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  3. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Dunlap, C; Garlick, J

    2002-04-24

    Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.

  4. An Integrated Architecture for On-Board Aircraft Engine Performance Trend Monitoring and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2010-01-01

    Aircraft engine performance trend monitoring and gas path fault diagnostics are closely related technologies that assist operators in managing the health of their gas turbine engine assets. Trend monitoring is the process of monitoring the gradual performance change that an aircraft engine will naturally incur over time due to turbomachinery deterioration, while gas path diagnostics is the process of detecting and isolating the occurrence of any faults impacting engine flow-path performance. Today, performance trend monitoring and gas path fault diagnostic functions are performed by a combination of on-board and off-board strategies. On-board engine control computers contain logic that monitors for anomalous engine operation in real-time. Off-board ground stations are used to conduct fleet-wide engine trend monitoring and fault diagnostics based on data collected from each engine each flight. Continuing advances in avionics are enabling the migration of portions of the ground-based functionality on-board, giving rise to more sophisticated on-board engine health management capabilities. This paper reviews the conventional engine performance trend monitoring and gas path fault diagnostic architecture commonly applied today, and presents a proposed enhanced on-board architecture for future applications. The enhanced architecture gains real-time access to an expanded quantity of engine parameters, and provides advanced on-board model-based estimation capabilities. The benefits of the enhanced architecture include the real-time continuous monitoring of engine health, the early diagnosis of fault conditions, and the estimation of unmeasured engine performance parameters. A future vision to advance the enhanced architecture is also presented and discussed.
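
    A sketch of the two complementary functions described above, with invented numbers: an exponentially weighted moving average tracks gradual deterioration (trend monitoring), while a sudden deviation beyond a band around the trend is flagged as a gas-path fault.

    ```python
    # Trend monitoring vs. fault detection on a toy efficiency signal.
    # The smoothing factor and fault band are invented for the example.
    def monitor(efficiency_readings, alpha=0.05, fault_band=0.02):
        trend = efficiency_readings[0]
        for k, eta in enumerate(efficiency_readings):
            if abs(eta - trend) > fault_band:
                print(f"step {k}: gas-path fault flagged "
                      f"(eta={eta:.3f}, trend={trend:.3f})")
            trend += alpha * (eta - trend)  # slow wear follows the trend
        return trend

    # Gradual wear from 0.900 downward, then an abrupt efficiency drop.
    readings = [0.90 - 0.0005 * k for k in range(20)] + [0.86] * 3
    print("final trend estimate:", round(monitor(readings), 4))
    ```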

  5. Health management and controls for Earth-to-orbit propulsion systems

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.

    1995-03-01

    Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
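
    A generic sketch of one sensor-data-validation idea mentioned above (median selection across redundant channels; not Aerojet's algorithms): with three redundant sensors, the median is used as the validated value and any channel far from the median is declared failed.

    ```python
    # Median-select validation of redundant sensor channels; the tolerance
    # and readings are invented for illustration.
    def validate(readings, tolerance=5.0):
        """Return (validated value, indices of failed channels)."""
        med = sorted(readings)[len(readings) // 2]
        failed = [i for i, r in enumerate(readings) if abs(r - med) > tolerance]
        return med, failed

    # Usage: channel 2 reads high and is flagged as failed.
    value, failed = validate([1502.0, 1498.0, 1712.0])
    print(f"validated value {value}, failed channels {failed}")
    ```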

  6. Fault Management Techniques in Human Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in the current US human spaceflight programs, the Space Shuttle and the International Space Station, are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes along with products used to anticipate, mitigate and respond to failures are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation, are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be determined in order to maintain situational awareness. This allows both automated and manual recovery operations to focus on the real cause of the fault(s). An appropriate balance must be struck between correcting the root cause failure and addressing the impacts of that fault on other vehicle components. Lastly, this paper presents a strategy for using lessons learned to improve the software, displays, and procedures in addition to determining what is a candidate for automation. Enabling technologies and techniques are identified to promote system evolution from one that requires manual fault responses to one that uses automation and autonomy where they are most effective. These considerations include the value in correcting software defects in a timely manner, automation of repetitive tasks, making time critical responses autonomous, etc. The paper recommends the appropriate use of intelligent systems to determine the root causes of faults and correctly identify separate unrelated faults.

  7. Fault Management Technology Maturation for NASA's Constellation Program

    NASA Technical Reports Server (NTRS)

    Waterman, Robert D.

    2010-01-01

    This slide presentation reviews the maturation of fault management technology in preparation for the Constellation Program. There is a review of the Space Shuttle Main Engine (SSME) and a discussion of a couple of incidents with the shuttle main engine and tanking that indicated the necessity for predictive maintenance. Included is a review of the planned Ares I-X Ground Diagnostic Prototype (GDP) and further information about detection and isolation of faults using the Testability Engineering and Maintenance System (TEAMS). Another system being readied for use, the Inductive Monitoring System (IMS), detects anomalies. The IMS automatically learns how the system behaves and alerts operators if the current behavior is anomalous. The comparison of STS-83 and STS-107 (i.e., the Columbia accident) is shown as an example of the anomaly detection capabilities.
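
    A simplified illustration of the IMS concept (not NASA's implementation): nominal behavior is learned as a set of observed sensor vectors, and an alert is raised when a new vector is farther from every nominal vector than a chosen tolerance.

    ```python
    # Nearest-neighbor anomaly detection over learned nominal data.
    # Training vectors and the acceptance radius are invented.
    import math

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Learned nominal (rpm, psi) operating points.
    nominal = [(2100.0 + i, 35.0 + 0.1 * i) for i in range(10)]
    tolerance = 2.0  # invented acceptance radius

    def is_anomalous(vector):
        """Alert when the vector is outside every nominal neighborhood."""
        return min(dist(vector, n) for n in nominal) > tolerance

    print(is_anomalous((2105.0, 35.5)))  # matches training data -> False
    print(is_anomalous((2105.0, 55.0)))  # pressure far off nominal -> True
    ```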

  8. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis to develop a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  9. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
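
    A minimal publish-subscribe sketch of the fault-information-exchange idea behind the FTB; this is a generic stand-in with invented topic names and handlers, not the actual FTB API:

    ```python
    # Generic pub-sub backplane: subsystems publish fault events on topics,
    # and any subscribed subsystem can coordinate a response.
    from collections import defaultdict

    subscribers = defaultdict(list)

    def subscribe(topic, handler):
        subscribers[topic].append(handler)

    def publish(topic, event):
        for handler in subscribers[topic]:
            handler(event)

    # Usage: the job scheduler reacts to a node fault reported by monitoring.
    subscribe("fault.node",
              lambda e: print(f"scheduler: draining node {e['node']} ({e['kind']})"))
    publish("fault.node", {"node": "n0042", "kind": "ECC memory error"})
    ```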

  10. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
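
    A toy sketch of the modeling style described above (component modes plus effect statements with time delays), using an invented valve component and a simple discrete-event queue:

    ```python
    # Qualitative modes with delayed effects on a discrete-event queue.
    # The valve component and delays are invented for illustration.
    import heapq

    events = []  # (time, description, action)

    def schedule(t, description, action):
        heapq.heappush(events, (t, description, action))

    state = {"valve_mode": "open", "downstream_flow": "nominal"}

    def fail_valve(t):
        """Mode transition with a delayed qualitative effect statement."""
        state["valve_mode"] = "stuck-closed"
        # Effect: downstream flow becomes zero after a 2-second delay.
        schedule(t + 2.0, "flow collapses downstream",
                 lambda now: state.update(downstream_flow="zero"))

    schedule(5.0, "valve fails", fail_valve)
    while events:
        t, desc, action = heapq.heappop(events)
        action(t)
        print(f"t={t:.1f}s: {desc} -> {state}")
    ```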

  11. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. The adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronics drives. Industry has witnessed tremendous growth in ASDS applications not only as a driving force but also as an electric auxiliary system for replacing bulky and low efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive applications and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post fault management. Lumped parameter models are established for fast real time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same Root Mean Square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for the combination of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of phase faults is achieved.

  12. Online Monitoring Technical Basis and Analysis Framework for Emergency Diesel Generators - Interim Report for FY 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binh T. Pham; Nancy J. Lybeck; Vivek Agarwal

    The Light Water Reactor Sustainability program at Idaho National Laboratory is actively conducting research to develop and demonstrate online monitoring capabilities for active components in existing nuclear power plants. Idaho National Laboratory and the Electric Power Research Institute are working jointly to implement a pilot project to apply these capabilities to emergency diesel generators and generator step-up transformers. The Electric Power Research Institute Fleet-Wide Prognostic and Health Management Software Suite will be used to implement monitoring in conjunction with utility partners: Braidwood Generating Station (owned by Exelon Corporation) for emergency diesel generators, and Shearon Harris Nuclear Generating Station (owned by Duke Energy Progress) for generator step-up transformers. This report presents monitoring techniques, fault signatures, and diagnostic and prognostic models for emergency diesel generators. Emergency diesel generators provide backup power to the nuclear power plant, allowing operation of essential equipment such as pumps in the emergency core coolant system during catastrophic events, including loss of offsite power. Technical experts from Braidwood are assisting Idaho National Laboratory and Electric Power Research Institute in identifying critical faults and defining fault signatures associated with each fault. The resulting diagnostic models will be implemented in the Fleet-Wide Prognostic and Health Management Software Suite and tested using data from Braidwood. Parallel research on generator step-up transformers was summarized in an interim report during the fourth quarter of fiscal year 2012.

  13. A fuzzy logic intelligent diagnostic system for spacecraft integrated vehicle health management

    NASA Technical Reports Server (NTRS)

    Wu, G. Gordon

    1995-01-01

    Due to the complexity of future space missions and the large amount of data involved, greater autonomy in data processing is demanded for mission operations, training, and vehicle health management. In this paper, we develop a fuzzy logic intelligent diagnostic system to perform data reduction, data analysis, and fault diagnosis for spacecraft vehicle health management applications. The diagnostic system contains a data filter and an inference engine. The data filter is designed to intelligently select only the necessary data for analysis, while the inference engine is designed for failure detection, warning, and decision on corrective actions using fuzzy logic synthesis. Due to its adaptive nature and on-line learning ability, the diagnostic system is capable of dealing with environmental noise, uncertainties, conflicting information, and sensor faults.
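
    A minimal fuzzy-logic sketch in the spirit of the system described above, with invented membership functions and a single max-min rule rather than the paper's rule base:

    ```python
    # Fuzzy membership functions plus one AND rule (min operator).
    # Temperature limits and rates are invented for the example.
    def mu_high(temp, lo=80.0, hi=100.0):
        """Membership in 'temperature is high': ramps 0 -> 1 over [lo, hi]."""
        return min(1.0, max(0.0, (temp - lo) / (hi - lo)))

    def mu_rising(rate, lo=0.0, hi=5.0):
        """Membership in 'temperature is rising fast'."""
        return min(1.0, max(0.0, (rate - lo) / (hi - lo)))

    def overheat_warning(temp, rate):
        """Rule: IF temp is high AND rate is rising THEN warn (min for AND)."""
        return min(mu_high(temp), mu_rising(rate))

    # Usage: a partially-true condition yields a graded warning degree.
    print(f"warning degree: {overheat_warning(temp=92.0, rate=3.0):.2f}")
    ```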

  14. Augmentation of the space station module power management and distribution breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Hall, David K.; Lollar, Louis F.

    1991-01-01

    The space station module power management and distribution (SSM/PMAD) breadboard models power distribution and management, including scheduling, load prioritization, and a fault detection, identification, and recovery (FDIR) system within a Space Station Freedom habitation or laboratory module. This 120 VDC system is capable of distributing up to 30 kW of power among more than 25 loads. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level consists of fast, simple (from a computing standpoint) switchgear that is capable of quickly safing the system. At the next level are local load center processors (LLPs), which execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. Above the LLPs are three cooperating artificial intelligence (AI) systems which manage load prioritization, load scheduling, load shedding, and fault recovery and management. Recent upgrades to hardware and modifications to software at both the LLP and AI system levels promise a drastic increase in speed, a significant increase in functionality and reliability, and potential for further examination of advanced automation techniques. The background, the SSM/PMAD interface to the Lewis Research Center test bed, the large autonomous spacecraft electrical power system, and future plans are discussed.
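
    A sketch of the priority-based load shedding performed by the LLPs, with an invented load table and allocation: when measured draw exceeds the scheduled allocation, the lowest-priority loads are shed first until the total fits.

    ```python
    # Priority-ordered load shedding; loads and allocation are invented.
    def shed_loads(loads, allocation_kw):
        """loads: (name, draw_kw, priority); higher priority is kept longer."""
        active = sorted(loads, key=lambda l: l[2])  # lowest priority first
        total = sum(draw for _, draw, _ in active)
        shed = []
        while total > allocation_kw and active:
            name, draw, _ = active.pop(0)
            shed.append(name)
            total -= draw
        return shed, total

    loads = [("heater", 5.0, 1), ("pump", 8.0, 3), ("experiment", 6.0, 2)]
    shed, remaining = shed_loads(loads, allocation_kw=15.0)
    print(f"shed {shed}, remaining draw {remaining} kW")
    ```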

  15. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For the fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
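
    A sketch of the software-implemented fault-insertion idea with an invented target variable and error detector: a bit of program state is flipped directly, so the detection mechanism is exercised immediately, mirroring the report's observation about software-inserted faults.

    ```python
    # Flip one bit of a state variable and check whether a range-check
    # detector catches it; the variable and limits are invented.
    def flip_bit(value, bit):
        """Software fault insertion: invert a single bit of program state."""
        return value ^ (1 << bit)

    def range_check(altitude_ft):
        """Toy error-detection mechanism: plausible altitude band."""
        return 0 <= altitude_ft <= 60_000

    altitude = 35_000
    faulted = flip_bit(altitude, 16)  # insert fault: flip bit 16
    print(f"original {altitude}, faulted {faulted}, "
          f"detected={not range_check(faulted)}")
    ```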

  16. Modeling Off-Nominal Behavior in SysML

    NASA Technical Reports Server (NTRS)

    Day, John; Donahue, Kenny; Ingham, Mitch; Kadesch, Alex; Kennedy, Kit; Post, Ethan

    2012-01-01

    Fault Management is an essential part of the system engineering process that is limited in its effectiveness by the ad hoc nature of the applied approaches and methods. Providing a rigorous way to develop and describe off-nominal behavior is a necessary step in the improvement of fault management, and as a result, will enable safe, reliable and available systems even as system complexity increases. The basic concepts described in this paper provide a foundation to build a larger set of necessary concepts and relationships for precise modeling of off-nominal behavior, and a basis for incorporating these ideas into the overall systems engineering process. The simple FMEA example provided applies the modeling patterns we have developed and illustrates how the information in the model can be used to reason about the system and derive typical fault management artifacts. A key insight from the FMEA work was the utility of defining failure modes as the "inverse of intent", and deriving this from the behavior models. Additional work is planned to extend these ideas and capabilities to other types of relevant information and additional products.

  17. Improved fault ride through capability of DFIG based wind turbines using synchronous reference frame control based dynamic voltage restorer.

    PubMed

    Rini Ann Jerin, A; Kaliannan, Palanisamy; Subramaniam, Umashankar

    2017-09-01

    Fault ride through (FRT) capability in wind turbines to maintain grid stability during faults has become mandatory with the increasing grid penetration of wind energy. The doubly fed induction generator based wind turbine (DFIG-WT) is the most popularly utilized type of generator, but it is highly susceptible to voltage disturbances in the grid. External FRT capability improvement based on a dynamic voltage restorer (DVR) is considered, since a DVR can provide fast voltage sag mitigation during faults and maintain nominal operating conditions for the DFIG-WT. The effectiveness of the DVR using synchronous reference frame (SRF) control is investigated for FRT capability in the DFIG-WT during both balanced and unbalanced fault conditions. The operation of the DVR is confirmed using time-domain simulation in MATLAB/Simulink with a 1.5 MW DFIG-WT.
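
    A sketch of the synchronous-reference-frame idea behind the DVR control, with illustrative numbers: the abc grid voltages are projected onto a rotating dq frame, where a balanced sag appears as a drop in the d-axis component that the DVR must make up.

    ```python
    # Amplitude-invariant Park (abc -> dq) transform applied to a balanced
    # 40% voltage sag; grid frequency and sag depth are illustrative.
    import numpy as np

    def abc_to_dq(va, vb, vc, theta):
        """Project three-phase voltages onto the rotating dq frame."""
        k = 2.0 / 3.0
        d = k * (va * np.cos(theta)
                 + vb * np.cos(theta - 2 * np.pi / 3)
                 + vc * np.cos(theta + 2 * np.pi / 3))
        q = -k * (va * np.sin(theta)
                  + vb * np.sin(theta - 2 * np.pi / 3)
                  + vc * np.sin(theta + 2 * np.pi / 3))
        return d, q

    t = 0.02                      # arbitrary instant, 50 Hz grid
    w = 2 * np.pi * 50
    sag = 0.6                     # 40% balanced sag (0.6 pu remaining)
    va = sag * np.cos(w * t)
    vb = sag * np.cos(w * t - 2 * np.pi / 3)
    vc = sag * np.cos(w * t + 2 * np.pi / 3)
    d, q = abc_to_dq(va, vb, vc, theta=w * t)
    print(f"vd={d:.2f} pu (1.00 pu nominal) -> DVR injects {1.0 - d:.2f} pu")
    ```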

  18. Hardware fault insertion and instrumentation system: Mechanization and validation

    NASA Technical Reports Server (NTRS)

    Benson, J. W.

    1987-01-01

    Automated test capability for extensive low-level hardware fault insertion testing is developed. The test capability is used to calibrate fault detection coverage and associated latency times as relevant to projecting overall system reliability. Described are modifications made to the NASA Ames Reconfigurable Flight Control System (RDFCS) Facility to fully automate the total test loop involving the Draper Laboratories' Fault Injector Unit. The automated capability provided included the application of sequences of simulated low-level hardware faults, the precise measurement of fault latency times, the identification of fault symptoms, and bulk storage of test case results. A PDP-11/60 served as a test coordinator, and a PDP-11/04 as an instrumentation device. The fault injector was controlled by applications test software in the PDP-11/60, rather than by manual commands from a terminal keyboard. The time base was especially developed for this application to use a variety of signal sources in the system simulator.

  19. Fault tolerant and lifetime control architecture for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Chen, Yi-Liang; Sundareswaran, Venkataraman; Altshuler, Thomas

    2008-04-01

    Increased vehicle autonomy, survivability and utility can provide an unprecedented impact on mission success and are among the most desirable improvements for modern autonomous vehicles. We propose a general architecture of intelligent resource allocation, reconfigurable control and system restructuring for autonomous vehicles. The architecture is based on fault-tolerant control and lifetime prediction principles, and it provides improved vehicle survivability, extended service intervals, and greater operational autonomy through a lower rate of time-critical mission failures and less dependence on supplies and maintenance. The architecture enables mission distribution, adaptation and execution constrained by vehicle and payload faults and desired lifetime. The proposed architecture will allow managing missions more efficiently by weighing vehicle capabilities against mission objectives and replacing the vehicle only when it is necessary.

  20. Meeting the Challenges of Exploration Systems: Health Management Technologies for Aerospace Systems With Emphasis on Propulsion

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Sowers, T. Shane; Maul, William A.

    2005-01-01

    The constraints of future Exploration Missions will require unique Integrated System Health Management (ISHM) capabilities throughout the mission. An ambitious launch schedule, human-rating requirements, long quiescent periods, limited human access for repair or replacement, and long communication delays all require an ISHM system that can span distinct yet interdependent vehicle subsystems, anticipate failure states, provide autonomous remediation, and support the Exploration Mission from beginning to end. NASA Glenn Research Center has developed and applied health management system technologies to aerospace propulsion systems for almost two decades. Lessons learned from past activities help define the approach to proper ISHM development: sensor selection (identifies sensor sets required for accurate health assessment); data qualification and validation (ensures the integrity of measurement data from sensor to data system); fault detection and isolation (uses measurements in a component/subsystem context to detect faults and identify their point of origin); information fusion and diagnostic decision criteria (aligns data from similar and disparate sources in time and uses that data to perform higher-level system diagnosis); and verification and validation (uses data, real or simulated, to provide variable exposure to the diagnostic system for faults that may only manifest themselves in actual implementation, as well as faults that are detectable via hardware testing). This presentation describes a framework for developing health management systems and highlights the health management research activities performed by the Controls and Dynamics Branch at the NASA Glenn Research Center. It illustrates how those activities contribute to the development of solutions for Integrated System Health Management.

  1. Emerging technologies for V&V of ISHM software for space exploration

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Markosian, Lawrence Z.

    2006-01-01

    Systems required to exhibit high operational reliability often rely on some form of fault protection to recognize and respond to faults, preventing faults' escalation to catastrophic failures. Integrated System Health Management (ISHM) extends the functionality of fault protection both to scale to more complex systems (and systems of systems) and to maintain capability rather than just avert catastrophe. Forms of ISHM have been utilized to good effect in the maintenance phase of systems' total lifecycles (often referred to as 'condition-based maintenance'), but less so in a 'fault protection' role during actual operations. One of the impediments to such use lies in the challenges of verification, validation, and certification of ISHM systems themselves. This paper makes the case that state-of-the-practice V&V and certification techniques will not suffice for emerging forms of ISHM systems; however, a number of maturing software engineering assurance technologies show particular promise for addressing these ISHM V&V challenges.

  2. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.

    1990-01-01

    The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.
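
    The detection-and-isolation flow described above is classically implemented as rules firing over a knowledge base of asserted parameter values. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the rules, thresholds, and signal names are invented and are not taken from APEX.

        # Parametric values asserted into a toy knowledge base.
        knowledge_base = {"bus_voltage": 98.0, "load_current": 0.1,
                          "breaker_closed": True}

        # Each rule pairs a condition over the KB with a probable cause
        # and a recommended action (both invented for illustration).
        RULES = [
            (lambda kb: kb["bus_voltage"] < 100.0 and not kb["breaker_closed"],
             "open breaker upstream of bus: close breaker"),
            (lambda kb: kb["bus_voltage"] < 100.0 and kb["load_current"] < 0.5,
             "probable open circuit: run extended continuity test"),
            (lambda kb: kb["load_current"] > 15.0,
             "probable short circuit: isolate load"),
        ]

        def diagnose(kb):
            """Return every probable cause whose condition matches the KB."""
            return [cause for condition, cause in RULES if condition(kb)]

        print(diagnose(knowledge_base))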

  3. Resilience by Design: Bringing Science to Policy Makers

    USGS Publications Warehouse

    Jones, Lucile M.

    2015-01-01

    No one questions that Los Angeles has an earthquake problem. The “Big Bend” of the San Andreas fault in southern California complicates the plate boundary between the North American and Pacific plates, creating a convergent component to the primarily transform boundary. The Southern California Earthquake Center Community Fault Model has over 150 fault segments, each capable of generating a damaging earthquake, in an area with more than 23 million residents (Fig. 1). A Federal Emergency Management Agency (FEMA) analysis of the expected losses from all future earthquakes in the National Seismic Hazard Maps (Petersen et al., 2014) predicts average losses of more than $3 billion per year in the eight counties of southern California, with half of those losses in Los Angeles County alone (Federal Emergency Management Agency [FEMA], 2008). According to Swiss Re, one of the world’s largest reinsurance companies, Los Angeles faces one of the greatest risks of catastrophic losses from earthquakes of any city in the world, eclipsed only by Tokyo, Jakarta, and Manila (Swiss Re, 2013).

  4. Autonomous power expert system advanced development

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Walters, Jerry L.

    1991-01-01

    The autonomous power expert (APEX) system is being developed at Lewis Research Center to function as a fault diagnosis advisor for a space power distribution test bed. APEX is a rule-based system capable of detecting faults and isolating their probable causes. APEX also has a justification facility to provide natural-language explanations of conclusions reached during fault isolation. To help maintain the health of the power distribution system, additional capabilities were added to APEX. These capabilities allow detection and isolation of incipient faults and enable the expert system to recommend actions/procedures to correct the suspected fault conditions. The new capabilities for incipient fault detection consist of storage and analysis of historical data and new user-interface displays. After the cause of a fault is determined, appropriate recommended actions are selected by rule-based inferencing, which provides corrective/extended test procedures. Color graphics displays and improved mouse-selectable menus were also added to provide a friendlier user interface. A discussion of APEX in general and a more detailed description of the incipient-fault detection, recommended actions, and user-interface developments during the last year are presented.

  5. Fault Detection, Isolation and Recovery (FDIR) Portable Liquid Oxygen Hardware Demonstrator

    NASA Technical Reports Server (NTRS)

    Oostdyk, Rebecca L.; Perotti, Jose M.

    2011-01-01

    The Fault Detection, Isolation and Recovery (FDIR) hardware demonstration will highlight the effort being conducted by Constellation's Ground Operations (GO) to provide the Launch Control System (LCS) with system-level health management during vehicle processing and countdown activities. A proof-of-concept demonstration of the FDIR prototype established the capability of the software to provide real-time fault detection and isolation using generated Liquid Hydrogen data. The FDIR portable testbed unit (presented here) aims to enhance FDIR by providing a dynamic simulation of Constellation subsystems that feeds the FDIR software live data based on Liquid Oxygen system properties. The LO2 cryogenic ground system has key properties that are analogous to the properties of an electronic circuit. The LO2 system is modeled using electrical components, and an equivalent circuit is designed on a printed circuit board to simulate the live data. The portable testbed is also equipped with data acquisition and communication hardware to relay the measurements to the FDIR application running on a PC. The portable testbed is an ideal platform for FDIR software testing, troubleshooting, and training, among other uses.
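
    The electrical analogy mentioned above maps pressure to voltage, mass flow to current, tank storage to capacitance, and line or valve losses to resistance, so a first-order circuit can stand in for a tank blowdown. The sketch below integrates that analog; the component values are arbitrary illustrations, not the real LO2 system.

        # RC analog of a tank venting through a valve: dP/dt = -P / (R*C).
        def simulate_tank_vent(p0=100.0, r=50.0, c=0.2, dt=0.01, t_end=20.0):
            t, p = 0.0, p0
            trace = []
            while t <= t_end:
                trace.append((round(t, 2), round(p, 3)))
                p += dt * (-p / (r * c))   # forward-Euler step
                t += dt
            return trace

        trace = simulate_tank_vent()
        print(trace[0], trace[len(trace) // 2], trace[-1])

    Streaming such simulated traces to the diagnostic application is the same role the printed-circuit analog plays for the portable testbed.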

  6. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
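
    The matrix-based evaluation above can be pictured as a detection matrix whose entry (i, j) is the probability that check i catches fault j. Assuming checks miss independently, the per-fault detection probability follows directly, as in this illustrative sketch (all numbers invented):

        import numpy as np

        # D[i, j]: probability that on-line check i detects fault j
        # (zero where the check does not cover the fault).
        D = np.array([[0.95, 0.60, 0.00],
                      [0.00, 0.80, 0.70],
                      [0.50, 0.00, 0.90]])

        # A fault escapes only if every covering check misses it.
        p_detect = 1.0 - np.prod(1.0 - D, axis=0)
        for j, p in enumerate(p_detect):
            print(f"fault {j}: detected with probability {p:.3f}")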

  7. Model-based diagnostics for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Martin, Eric R.; Lerutte, Marcel G.

    1991-01-01

    An innovative approach to fault management was recently demonstrated for the NASA LeRC Space Station Freedom (SSF) power system testbed. This project capitalized on research in model-based reasoning, which uses knowledge of a system's behavior to monitor its health. The fault management system (FMS) can isolate failures online, or in a post-analysis mode, and requires no knowledge of failure symptoms to perform its diagnostics. An in-house tool called MARPLE was used to develop and run the FMS. MARPLE's capabilities are similar to those available from commercial expert system shells, although MARPLE is designed to build model-based as opposed to rule-based systems. These capabilities include functions for capturing behavioral knowledge, a reasoning engine that implements a model-based technique known as constraint suspension, and a tool for quickly generating new user interfaces. The prototype produced by applying MARPLE to SSF not only demonstrated that model-based reasoning is a valuable diagnostic approach, but also suggested several new applications of MARPLE, including an integration and testing aid and a complement to state estimation.
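
    Constraint suspension, the technique named above, treats each component as a constraint among its ports: a component is a suspect if suspending its constraint makes the rest of the model consistent with the observations. The two-adder circuit below is an invented toy example of the idea, not MARPLE itself.

        # Each component contributes one constraint; suspend one at a
        # time and test the remaining constraints against observations.
        def consistent(suspended, obs):
            ok = True
            if "ADD1" not in suspended:
                ok = ok and (obs["u"] == obs["x"] + obs["y"])
            if "ADD2" not in suspended:
                ok = ok and (obs["out"] == obs["u"] + obs["z"])
            return ok

        # Observed values: the output should be 7, but 9 was measured.
        obs = {"x": 1, "y": 2, "u": 3, "z": 4, "out": 9}

        suspects = [c for c in ("ADD1", "ADD2") if consistent({c}, obs)]
        print("suspects:", suspects)   # -> ['ADD2']

    Note that no failure symptoms are enumerated in advance; only the nominal behavior of each component is modeled, which is what lets this style of diagnosis catch failures that were never anticipated.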

  8. SUMC fault tolerant computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare-plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost-effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and of the memory address expansion is also described.

  9. Real-time automated failure identification in the Control Center Complex (CCC)

    NASA Technical Reports Server (NTRS)

    Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James

    1993-01-01

    A system which will provide real-time failure management support to the Space Station Freedom program is described. The system's use of a simplified form of model-based reasoning qualifies it as an advanced automation system. However, it differs from most such systems in that it was designed from the outset to meet two sets of requirements. First, it must provide a useful increment to the fault management capabilities of the Johnson Space Center (JSC) Control Center Complex (CCC) Fault Detection Management system. Second, it must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification and validation, etc. The need to meet both requirement sets presents a much greater design challenge than would have been the case had functionality been the sole design consideration. The choice of technology is overviewed, along with aspects of that choice and the process for migrating the system into the control center.

  10. Model-based diagnosis through Structural Analysis and Causal Computation for automotive Polymer Electrolyte Membrane Fuel Cell systems

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare

    2017-07-01

    The present paper proposes an advanced approach for fault detection and isolation in Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations, and a set of faults which may occur in the system (related both to auxiliary-component malfunctions and to stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves univocal fault isolation.
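
    The isolation step described above reduces to matching thresholded residuals against a fault signature matrix: each residual generator is sensitive to a known subset of faults, and a fault remains a candidate only if its signature covers every fired residual. The signatures, residual values, and thresholds below are illustrative assumptions, not the paper's PEMFC model.

        import numpy as np

        FAULTS = ["compressor", "cooling", "membrane_drying"]
        # Rows: residuals r1..r3; columns: faults (1 = residual reacts).
        S = np.array([[1, 0, 1],
                      [0, 1, 1],
                      [1, 1, 0]])

        def isolate(residuals, thresholds):
            fired = (np.abs(residuals) > thresholds).astype(int)
            # keep faults whose signature covers every fired residual
            return [f for j, f in enumerate(FAULTS)
                    if np.array_equal(fired, fired & S[:, j])]

        r = np.array([0.9, 0.05, 1.2])                 # r1 and r3 exceed limits
        print(isolate(r, np.array([0.5, 0.5, 0.5])))   # -> ['compressor']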

  11. Spacecraft fault tolerance: The Magellan experience

    NASA Technical Reports Server (NTRS)

    Kasuda, Rick; Packard, Donna Sexton

    1993-01-01

    Interplanetary and Earth-orbiting missions are now imposing unique fault-tolerance requirements upon spacecraft design. Mission success is the prime motivator for building spacecraft with fault-tolerant systems. The Magellan spacecraft had many such requirements imposed upon its design. Magellan met these requirements by building redundancy into all the major subsystem components and designing the onboard hardware and software with the capability to detect a fault, isolate it to a component, and issue commands to achieve a back-up configuration. This discussion is limited to fault protection, which is the autonomous capability to respond to a fault. The Magellan fault protection design is discussed, as well as the developmental and flight experiences and a summary of the lessons learned.

  12. Data Aggregation in Multi-Agent Systems in the Presence of Hybrid Faults

    ERIC Educational Resources Information Center

    Srinivasan, Satish Mahadevan

    2010-01-01

    Data Aggregation (DA) is a set of functions that provide components of a distributed system access to global information for purposes of network management and user services. With the diverse new capabilities that networks can provide, applicability of DA is growing. DA is useful in dealing with multi-value domain information and often requires…

  13. ROBUS-2: A Fault-Tolerant Broadcast Communication System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.

    2005-01-01

    The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. The ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs) in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, clock synchronization, and distributed diagnosis (group membership). The ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 is tolerant to internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. This version of the ROBUS is intended for laboratory experimentation and demonstrations of the capability to reintegrate failed nodes, dynamically update the communication schedule, and tolerate and recover from correlated transient faults.

  14. Planning and Resource Management in an Intelligent Automated Power Management System

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1991-01-01

    Power system management is the process of guiding a power system towards the objective of continuous supply of electrical power to a set of loads. Spacecraft power system management requires planning and scheduling, since electrical power is a scarce resource in space. The automation of power system management for future spacecraft has been recognized as an important R&D goal. Several automation technologies have emerged, including the use of expert systems for automating human problem-solving capabilities, such as rule-based expert systems for fault diagnosis and load scheduling. It is questionable whether current-generation expert system technology is applicable to power system management in space. The objective of ADEPTS (ADvanced Electrical Power management Techniques for Space systems) is to study new techniques for power management automation. These techniques involve integrating current expert system technology with parallel and distributed computing, as well as a distributed, object-oriented approach to software design. The focus of the current study is the integration of new procedures for automatically planning and scheduling loads with procedures for performing fault diagnosis and control. The objective is the concurrent execution of both sets of tasks on separate transputer processors, thus adding parallelism to the overall management process.

  15. Autonomous Satellite Command and Control through the World Wide Web: Phase 3

    NASA Technical Reports Server (NTRS)

    Cantwell, Brian; Twiggs, Robert

    1998-01-01

    NASA's New Millennium Program (NMP) has identified a variety of revolutionary technologies that will support orders-of-magnitude improvements in the capabilities of spacecraft missions. This program's Autonomy team has focused on science and engineering automation technologies and, in doing so, has established a clear development roadmap specifying the experiments and demonstrations required to mature these technologies. The primary developmental thrusts of this roadmap are in the areas of remote agents, PI/operator interfaces, planning/scheduling, fault management, and smart execution architectures. Phases 1 and 2 of the ASSET Project (previously known as the WebSat project) focused on establishing World Wide Web-based commanding and telemetry services as an advanced means of interfacing a spacecraft system with the PI and operators. Current automated capabilities include Web-based command submission, limited contact scheduling, command list generation and transfer to the ground station, spacecraft support for demonstration experiments, data transfer from the ground station back to the ASSET system, data archiving, and Web-based telemetry distribution. Phase 2 was finished in December 1996, and during January-December 1997 work commenced on Phase 3 of the ASSET Project, which is the subject of this report. This phase permitted SSDL and its project partners to expand the ASSET system in a variety of ways: advancing ground station capabilities, adapting spacecraft on-board software, and expanding the capabilities of the ASSET management algorithms. Specific goals of Phase 3 were: (1) extend Web-based goal-level commanding for both the payload PI and the spacecraft engineer; (2) support prioritized handling of multiple PIs as well as associated payload experimenters; (3) expand the number and types of experiments supported by the ASSET system and its associated spacecraft; (4) implement more advanced resource management, modeling, and fault management capabilities that integrate the space and ground segments of the space system hardware; (5) implement a beacon monitoring test; (6) implement an experimental blackboard controller for space system management; (7) further define typical ground station developments required for Internet-based remote control and for full system automation of the PI-to-spacecraft link. Each of these goals is examined in the next section. Significant sections of this report were also published as a conference paper.

  16. The shallow boreholes at The AltotiBerina near fault Observatory (TABOO; northern Apennines of Italy)

    NASA Astrophysics Data System (ADS)

    Chiaraluce, L.; Collettini, C.; Cattaneo, M.; Monachesi, G.

    2014-04-01

    As part of an interdisciplinary research project, funded by the European Research Council and addressing the mechanics of weak faults, we drilled three 200-250 m-deep boreholes and installed an array of seismometers. The array augments TABOO (The AltotiBerina near fault ObservatOry), a scientific infrastructure managed by the Italian National Institute of Geophysics and Volcanology. The observatory, which consists of a geophysical network equipped with multi-sensor stations, is located in the northern Apennines (Italy) and monitors a large and active low-angle normal fault. The drilling operations started at the end of 2011 and were completed by July 2012. We instrumented the boreholes with three-component short-period (2 Hz) passive instruments at different depths. The seismometers are now fully operational and collecting waveforms characterised by a very high signal-to-noise ratio that is ideal for studying microearthquakes. The resulting increase in the detection capability of the seismic network will allow a broader range of transients to be identified.

  17. Design of Test Articles and Monitoring System for the Characterization of HIRF Effects on a Fault-Tolerant Computer Communication System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.; Koppen, Sandra V.

    2008-01-01

    This report describes the design of the test articles and monitoring systems developed to characterize the response of a fault-tolerant computer communication system when stressed beyond the theoretical limits for guaranteed correct performance. A high-intensity radiated electromagnetic field (HIRF) environment was selected as the means of injecting faults, as such environments are known to have the potential to cause arbitrary and coincident common-mode fault manifestations that can overwhelm redundancy management mechanisms. The monitors generate stimuli for the systems-under-test (SUTs) and collect data in real-time on the internal state and the response at the external interfaces. A real-time health assessment capability was developed to support the automation of the test. A detailed description of the nature and structure of the collected data is included. The goal of the report is to provide insight into the design and operation of these systems, and to serve as a reference document for use in post-test analyses.

  18. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault-insertion-based validation is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error-detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface allows ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
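
    Seeding a fault into memory, as described above, can be as simple as flipping one bit of a stored word and then watching whether the workload's error detection fires. The sketch below is a hypothetical stand-in for that mechanism; the checksum workload and all names are invented, not FIAT's actual interface.

        import random

        def flip_random_bit(word, width=16, rng=random):
            """Corrupt one randomly chosen bit of a 16-bit word."""
            return word ^ (1 << rng.randrange(width))

        def run_with_fault(memory, seed=0):
            rng = random.Random(seed)
            golden = sum(memory) & 0xFFFF          # fault-free baseline
            addr = rng.randrange(len(memory))
            memory[addr] = flip_random_bit(memory[addr], rng=rng)
            faulty = sum(memory) & 0xFFFF
            return {"addr": addr, "detected": faulty != golden}

        mem = [i * 7 % 251 for i in range(64)]
        print(run_with_fault(mem))

    Repeating such runs over many seeds mirrors the two-phase methodology described: a fault-free baseline first, then characterization of behavior with faults present.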

  19. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function" (Johnson 2011, 605). Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of the relationships between SE, SHM, and FM hints at a modeling approach that provides formal connectivity between the nominal (SE) and off-nominal (SHM and FM) aspects of functions and designs. This paper describes a formal modeling approach to the initial phases of the development process that integrates the nominal and off-nominal perspectives in a model uniting SE goals and functions with the failure to achieve those goals and functions (SHM/FM).

  20. Fault Management in an Objectives-Based/Risk-Informed View of Safety and Mission Success

    NASA Technical Reports Server (NTRS)

    Groen, Frank

    2012-01-01

    Theme of this talk: (1) Net-benefit of activities and decisions derives from objectives (and their priority) -- similarly: need for integration, value of technology/capability. (2) Risk is a lack of confidence that objectives will be met. (2a) Risk-informed decision making requires objectives. (3) Consideration of objectives is central to recent guidance.

  1. Advanced data management system architectures testbed

    NASA Technical Reports Server (NTRS)

    Grant, Terry

    1990-01-01

    The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.

  2. A hierarchical distributed control model for coordinating intelligent systems

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1991-01-01

    A hierarchical distributed control (HDC) model for coordinating cooperative problem-solving among intelligent systems is described. The model was implemented using SOCIAL, an innovative object-oriented tool for integrating heterogeneous, distributed software systems. SOCIAL embeds applications in 'wrapper' objects called Agents, which supply predefined capabilities for distributed communication, control, data specification, and translation. The HDC model is realized in SOCIAL as a 'Manager' Agent that coordinates interactions among application Agents. The HDC Manager indexes the capabilities of application Agents, routes request messages to suitable server Agents, and stores results in a commonly accessible 'Bulletin-Board'. This centralized control model is illustrated in a fault diagnosis application for launch operations support of the Space Shuttle fleet at NASA Kennedy Space Center.
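
    The Manager pattern above (index capabilities, route requests, post results) is easy to picture in miniature. The sketch below is a loose Python analogy using plain callables as Agents; SOCIAL's real wrapper API is not given in this abstract, so every name here is an invented illustration.

        class ManagerAgent:
            """Toy HDC manager: capability index plus bulletin board."""

            def __init__(self):
                self.index = {}            # capability -> agent callable
                self.bulletin_board = []   # commonly accessible results

            def register(self, capability, agent):
                self.index[capability] = agent

            def request(self, capability, payload):
                agent = self.index.get(capability)
                if agent is None:
                    raise LookupError(f"no agent offers {capability!r}")
                result = agent(payload)    # route to the server agent
                self.bulletin_board.append((capability, result))
                return result

        mgr = ManagerAgent()
        mgr.register("fault-diagnosis", lambda data: f"diagnosed: {data}")
        print(mgr.request("fault-diagnosis", "valve V2 stuck"))
        print(mgr.bulletin_board)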

  3. Control model design to limit DC-link voltage during grid fault in a DFIG variable speed wind turbine

    NASA Astrophysics Data System (ADS)

    Nwosu, Cajethan M.; Ogbuka, Cosmas U.; Oti, Stephen E.

    2017-08-01

    This paper presents a control model design capable of inhibiting the phenomenal rise in the DC-link voltage during grid-fault conditions in a variable-speed wind turbine. In contrast to power-circuit protection strategies, which have inherent limitations in fault ride-through capability, a control-circuit algorithm is proposed that limits the rise of the DC-link voltage, whose dynamics directly influence the characteristics of the rotor voltage, especially during grid faults. The model results compare favorably with simulation results obtained in a MATLAB/SIMULINK environment. The generated model may therefore be used to predict, with near accuracy, the nature of DC-link voltage variations during a fault, given factors that include the speed and speed mode of operation and the value of the damping resistor relative to half the product of the inner-loop current-control bandwidth and the filter inductance.
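
    The quantity being controlled can be seen from a power balance on the DC-link capacitor: when a grid fault blocks power export, C·v·dv/dt equals the surplus rotor power less whatever a damping element absorbs. The sketch below integrates that balance with invented parameter values; it is a back-of-envelope illustration, not the paper's control model.

        # Euler integration of C*v*dv/dt = P_in - P_out - v^2/R_damp.
        def dc_link_peak(v0=1200.0, c=0.01, p_rotor=2e5,
                         t_fault=(0.10, 0.25), r_damp=20.0,
                         dt=1e-4, t_end=0.4):
            v, t, peak = v0, 0.0, v0
            while t < t_end:
                faulted = t_fault[0] <= t < t_fault[1]
                p_out = 0.0 if faulted else p_rotor   # export blocked in fault
                p_damp = v * v / r_damp if faulted and v > 1.1 * v0 else 0.0
                v += dt * (p_rotor - p_out - p_damp) / (c * v)
                peak = max(peak, v)
                t += dt
            return peak

        print(f"peak with damping:    {dc_link_peak():.0f} V")
        print(f"peak without damping: {dc_link_peak(r_damp=1e9):.0f} V")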

  4. Active, capable, and potentially active faults - a paleoseismic perspective

    USGS Publications Warehouse

    Machette, M.N.

    2000-01-01

    Maps of faults (geologically defined source zones) may portray seismic hazards with a wide range of completeness, depending on which types of faults are shown. Three fault terms - active, capable, and potential - are used in a variety of ways for different reasons or applications. Nevertheless, to be useful for seismic-hazards analysis, fault maps should encompass a time interval that includes several earthquake cycles. For example, if the common recurrence in an area is 20,000-50,000 years, then maps should include faults that are 50,000-100,000 years old (two to five typical earthquake cycles), thus allowing for temporal variability in slip rate and recurrence intervals. Conversely, in more active areas such as plate boundaries, maps showing faults that are <10,000 years old should include those with at least 2 to as many as 20 paleoearthquakes. For the International Lithosphere Program's Task Group II-2 Project on Major Active Faults of the World, our maps and database will show five age categories and four slip-rate categories that allow one to select differing time spans and activity rates for seismic-hazard analysis, depending on tectonic regime. The maps are accompanied by a database that describes the evidence for Quaternary faulting, geomorphic expression, and paleoseismic parameters (slip rate, recurrence interval, and time of most recent surface faulting). These maps and databases provide an inventory of faults that would be defined as active, capable, or potentially active for seismic-hazard assessments.

  5. A fail-safe CMOS logic gate

    NASA Technical Reports Server (NTRS)

    Bobin, V.; Whitaker, S.

    1990-01-01

    This paper reports a design technique that makes complex CMOS gates fail-safe for a class of faults. Two classes of faults are defined. The fail-safe design presented has a limited fault-tolerance capability, and multiple faults are also covered.

  6. Testing HyDE on ADAPT

    NASA Technical Reports Server (NTRS)

    Sweet, Adam

    2008-01-01

    The IVHM Project in the Aviation Safety Program has funded research in electrical power system (EPS) health management. This problem domain contains both discrete and continuous behavior, and thus is directly relevant for the hybrid diagnostic tool HyDE. In FY2007 work was performed to expand the HyDE diagnosis model of the ADAPT system. The work completed resulted in a HyDE model with the capability to diagnose five times the number of ADAPT components previously tested. The expanded diagnosis model passed a corresponding set of new ADAPT fault injection scenario tests with no incorrect faults reported. The time required for the HyDE diagnostic system to isolate the fault varied widely between tests; this variance was reduced by tuning HyDE input parameters. These results and other diagnostic design trade-offs are discussed. Finally, possible future improvements for both the HyDE diagnostic model and HyDE itself are presented.

  7. Fault Analysis in a Grid Integrated DFIG Based Wind Energy System with NACB_P Circuit for Ride-through Capability and Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Swain, Snehaprava; Ray, Pravat Kumar

    2016-12-01

    In this paper, a three-phase fault analysis is performed on a DFIG-based, grid-integrated wind energy system. A Novel Active Crowbar Protection (NACB_P) system is proposed to enhance the fault ride-through (FRT) capability of the DFIG under both symmetrical and unsymmetrical grid faults, and hence to improve the power quality of the system. The protection scheme proposed here is designed with a capacitor in series with the resistor, unlike the conventional crowbar (CB), which has only resistors. The major function of the capacitor in the protection circuit is to eliminate the ripples generated in the rotor current and to protect the converter as well as the DC-link capacitor. It also compensates for the reactive power required by the DFIG during the fault. Due to these advantages, the proposed scheme enhances the FRT capability of the DFIG and also improves the power quality of the whole system. Experimentally, the fault analysis is performed on a 3-hp slip-ring induction generator; simulation results for a 1.7 MVA DFIG-based WECS under different types of grid faults are obtained in MATLAB/Simulink, and the functionality of the proposed scheme is verified.

  8. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real-time intelligent fault detection and management system will be accomplished via several objectives: develop fault-tolerance/FDIR requirements and specifications at the systems level that carry through from conceptual design to implementation and mission operations; implement monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimize system operations to manage degraded system performance through system integration; and lower development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.

  9. Extremely Intensive and Conservative Fault Capability Studies on Nuclear Facilities in Japan after the 2011 Tohoku Earthquake and Fukushima Daiichi Incident

    NASA Astrophysics Data System (ADS)

    Okumura, K.

    2013-12-01

    Rocks of the Japanese islands have been extensively faulted since the Mesozoic Era. The opening of the Sea of Japan in the Middle Miocene stretched most of the Japanese crust along rifting systems. The modern compressional tectonic regime started in the Pliocene and accelerated during the Quaternary. The ubiquitous bedrock faults that predate the Quaternary had long been regarded as incapable of future rupturing. This view of bedrock faults, however, has been in question since the March 11, 2011 Tohoku earthquake and tsunamis. The Tohoku earthquake gave geologists and seismologists no scientific reason to worry about the capability of long-inactive faults; nor did the unexpected April 11, 2011 extensional faulting event onshore in southern Fukushima prefecture. There was no change and no new stress field, but the psychological situation of scientists and the public fostered a mistaken belief in unexpected stress changes all over Japan, in the same sense that the March 11 M 9 event had not been expected. Finally, the capability of bedrock faults, fractures, and joints became a concern for the seismic safety of nuclear facilities. After the incidents, the nuclear regulation authority of Japan began reevaluating the seismic safety of all facilities in Japan. The primary issues of the reevaluation were conjunctive multi-fault mega-earthquakes and the capability of bedrock faults, precisely reflecting the Tohoku events. The former does not require immediate abandonment of a facility; the latter, however, now denies any chance of continued operation, because the new safety guide (July 2013) gives top priority to the capability of displacement under a facility in evaluating safe operation. The guide also requires an utmost deterministic manner applied in very conservative ways. The regulators ordered the utility companies to thoroughly examine capability at several sites and started reviewing the studies in late 2012. Many of Japan's critical nuclear facilities are built on bedrock with faults, fractures, and joints; these were not regarded as capable when the facilities were built in the 1970s to 1990s. In many cases it was not possible to assess Late Pleistocene movement owing to the lack of young sediments on the bedrock. In a few cases, geologists studied past movement and found nothing. Some very cautious researchers on nuclear safety easily overturned previous evaluations. The capability studies by the utility companies then became very serious: the young sediments that might indicate the timing of faulting were completely removed during construction, and within bedrock it is almost impossible to demonstrate that there was no recent displacement. The regulators are very rigid and relentless in requiring perfect evidence of incapability. Now several utility companies are opening huge trenches, digging beside reactors, or drilling many cores from bedrock at their sites, spending billions of yen. The results of these extremely intensive studies have brought a great deal of information on the geologic structures and their capabilities. This paper summarizes the scientific findings and their meaning for the seismic safety of critical nuclear facilities.

  10. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults are typically unique to each organization, with little commonality in the architectures, processes, and practices across the industry.

  11. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    PubMed

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed, achieved by using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault. It is shown by Lyapunov stability analysis that the reconstruction error converges to zero within finite time. A reconstruction performance that is both precise and fast is thus provided for the actuator fault. The most important feature of the scheme is that it does not depend on the control law, the dynamic model of the actuator, the type of the faults, or their time-profile. This reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.

  12. Fault Injection and Monitoring Capability for a Fault-Tolerant Distributed Computation System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Yates, Amy M.; Malekpour, Mahyar R.

    2010-01-01

    The Configurable Fault-Injection and Monitoring System (CFIMS) is intended for the experimental characterization of effects caused by a variety of adverse conditions on a distributed computation system running flight control applications. A product of research collaboration between NASA Langley Research Center and Old Dominion University, the CFIMS is the main research tool for generating actual fault response data with which to develop and validate analytical performance models and design methodologies for the mitigation of fault effects in distributed flight control systems. Rather than a fixed design solution, the CFIMS is a flexible system that enables the systematic exploration of the problem space and can be adapted to meet the evolving needs of the research. The CFIMS has the capabilities of system-under-test (SUT) functional stimulus generation, fault injection and state monitoring, all of which are supported by a configuration capability for setting up the system as desired for a particular experiment. This report summarizes the work accomplished so far in the development of the CFIMS concept and documents the first design realization.

  13. AGSM Functional Fault Models for Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project implements functional fault models to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  14. Modeling of a latent fault detector in a digital system

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.

    1978-01-01

    Methods of modeling the detection time, or latency period, of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection, and to forecast a given program's detecting ability prior to computation. A series of experiments was conducted on a small emulated microprocessor with fault-injection capability. Results indicate that the detecting capability of a program depends largely on the instruction subset used during computation and the frequency of its use, and has little direct dependence on such variables as fault mode, number set, degree of branching, and program length. A model is discussed which employs an analogy with balls in an urn to explain the rate at which subsequent repetitions of an instruction or instruction set detect a given fault.
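
    The urn analogy above is directly simulatable: each executed instruction is a draw, and the fault is detected when a draw lands on one of the instructions sensitive to it, so latency is roughly geometric in the sensitive fraction. The subset sizes below are invented for illustration.

        import random

        def mean_latency(subset=256, sensitive=12, trials=10_000, seed=2):
            """Mean number of instruction executions until detection."""
            rng = random.Random(seed)
            total = 0
            for _ in range(trials):
                n = 0
                while True:
                    n += 1
                    if rng.randrange(subset) < sensitive:  # sensitive draw
                        total += n
                        break
            return total / trials

        # Geometric expectation is subset/sensitive = 256/12 ~ 21.3 draws.
        print(f"mean detection latency: {mean_latency():.1f} instructions")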

  15. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  16. Putting Integrated Systems Health Management Capabilities to Work: Development of an Advanced Caution and Warning System for Next-Generation Crewed Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Mccann, Robert S.; Spirkovska, Lilly; Smith, Irene

    2013-01-01

    Integrated System Health Management (ISHM) technologies have advanced to the point where they can provide significant automated assistance with real-time fault detection, diagnosis, guided troubleshooting, and failure consequence assessment. To exploit these capabilities in actual operational environments, however, ISHM information must be integrated into operational concepts and associated information displays in ways that enable human operators to process and understand the ISHM system information rapidly and effectively. In this paper, we explore these design issues in the context of an advanced caution and warning system (ACAWS) for next-generation crewed spacecraft missions. User interface concepts for depicting failure diagnoses, failure effects, redundancy loss, "what-if" failure analysis scenarios, and resolution of ambiguity groups are discussed and illustrated.

  17. Artificial neural network application for space station power system fault diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Oliver, Walter E.; Dias, Lakshman G.

    1995-01-01

    This study presents a methodology for fault diagnosis using a two-stage artificial neural network clustering algorithm. Previously, SPICE models of a 5-bus DC power distribution system, with assumed constant output power from the DDCU during contingencies, were used to evaluate the ANN's fault diagnosis capabilities. This ongoing study uses EMTP models of the components (distribution lines, SPDU, TPDU, loads) and power sources (DDCU) of Space Station Alpha's electrical power distribution system as the basis for the ANN fault diagnostic tool, and the results from the two studies are contrasted. In the event of a major fault, ground controllers need the ability to identify the type of fault, isolate the fault to the orbital replaceable unit level, and provide the necessary information for the power management expert system to optimally determine a degraded-mode load schedule. To accomplish these goals, the electrical power distribution system's architecture can be subdivided into three major classes: DC-DC converter to loads, DC Switching Unit (DCSU) to Main Bus Switching Unit (MBSU), and power sources to DCSU. Each class has its own electrical characteristics and operations, and requires a unique fault analysis philosophy. This study identifies these philosophies as Riddles 1, 2, and 3, respectively. The results of the ongoing study address Riddle 1. It is concluded that the combination of the EMTP models of the DDCU, distribution cables, and electrical loads yields a more accurate model of system behavior and, in addition, more accurate ANN fault diagnosis than the results obtained with the SPICE models.

  18. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed is the detection of "soft faults," those not typically covered by systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems. Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.
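
    Leakage and degradation trends of the kind mentioned above can be flagged by fitting a slope to a window of recent measurements and comparing both slope and level against limits. The sketch below shows that idea with an ordinary least-squares slope; the thresholds and data are invented assumptions, not Glenn's actual algorithms.

        def slope(xs, ys):
            """Least-squares slope of ys against xs."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            den = sum((x - mx) ** 2 for x in xs)
            return num / den

        def soft_fault(samples, max_slope=0.02, max_level=5.0):
            """samples: leakage current in mA, one sample per hour."""
            hours = list(range(len(samples)))
            return slope(hours, samples) > max_slope or samples[-1] > max_level

        healthy = [1.0 + 0.001 * h for h in range(48)]
        degrading = [1.0 + 0.05 * h for h in range(48)]
        print(soft_fault(healthy), soft_fault(degrading))   # False True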

  19. Real-Time Rocket/Vehicle System Integrated Health Management Laboratory For Development and Testing of Health Monitoring/Management Systems

    NASA Technical Reports Server (NTRS)

    Aguilar, R.

    2006-01-01

    Pratt & Whitney Rocketdyne has developed a real-time engine/vehicle system integrated health management laboratory, or testbed, for developing and testing health management system concepts. This laboratory simulates components of an integrated system, such as the rocket engine, rocket engine controller, vehicle or test controller, and a health management computer, on separate general-purpose computers. These general-purpose computers can be replaced with more realistic components, such as actual electronic controllers and valve actuators, for hardware-in-the-loop simulation. Various engine configurations and propellant combinations are available. On-the-fly fault or failure insertion, using direct memory insertion from a user console, is used to test system detection and response. The laboratory is currently capable of simulating the flow path of a single rocket engine, but work is underway to include structural and multiengine simulation capability as well as a dedicated data acquisition system. The ultimate goal is to simulate as accurately and realistically as possible the environment in which the health management system will operate, including noise, the dynamic response of the engine/engine controller, sensor time delays, and asynchronous operation of the various components. The rationale for the laboratory is also discussed, including the limited alternatives for demonstrating the effectiveness and safety of a flight system.

  20. Prognostic and health management of active assets in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Lybeck, Nancy; Pham, Binh T.

    This study presents the development of diagnostic and prognostic capabilities for active assets in nuclear power plants (NPPs). The research was performed under the Advanced Instrumentation, Information, and Control Technologies Pathway of the Light Water Reactor Sustainability Program. Idaho National Laboratory researched, developed, implemented, and demonstrated diagnostic and prognostic models for generator step-up transformers (GSUs). The Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software developed by the Electric Power Research Institute was used to perform diagnosis and prognosis. As part of the research activity, Idaho National Laboratory implemented 22 GSU diagnostic models in the Asset Fault Signature Database and two well-established GSU prognostic models for the paper winding insulation in the Remaining Useful Life Database of the FW-PHM Suite. The implemented models, along with a simulated fault data stream, were used to evaluate the diagnostic and prognostic capabilities of the FW-PHM Suite. Knowledge of the operating condition of plant assets gained from diagnosis and prognosis is critical for the safe, productive, and economical long-term operation of the current fleet of NPPs. This research addresses some of the gaps in the current state of technology development and enables effective application of diagnostics and prognostics to nuclear plant assets.

  1. Prognostic and health management of active assets in nuclear power plants

    DOE PAGES

    Agarwal, Vivek; Lybeck, Nancy; Pham, Binh T.; ...

    2015-06-04

    This study presents the development of diagnostic and prognostic capabilities for active assets in nuclear power plants (NPPs). The research was performed under the Advanced Instrumentation, Information, and Control Technologies Pathway of the Light Water Reactor Sustainability Program. Idaho National Laboratory researched, developed, implemented, and demonstrated diagnostic and prognostic models for generator step-up transformers (GSUs). The Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software developed by the Electric Power Research Institute was used to perform diagnosis and prognosis. As part of the research activity, Idaho National Laboratory implemented 22 GSU diagnostic models in the Asset Fault Signature Database and two well-established GSU prognostic models for the paper winding insulation in the Remaining Useful Life Database of the FW-PHM Suite. The implemented models, along with a simulated fault data stream, were used to evaluate the diagnostic and prognostic capabilities of the FW-PHM Suite. Knowledge of the operating condition of plant assets gained from diagnosis and prognosis is critical for the safe, productive, and economical long-term operation of the current fleet of NPPs. This research addresses some of the gaps in the current state of technology development and enables effective application of diagnostics and prognostics to nuclear plant assets.

  2. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer-assisted fault management system is advocated. The general problem is described, and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method, developed from existing methods, is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
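
    In a graph-based model of the kind advocated above, edges record which component failures can propagate to which symptoms, and diagnosis walks the graph backward from an observed symptom to candidate root causes. The telescope-like component graph below is an invented illustration, not the paper's model.

        from collections import defaultdict

        # Edge (a, b): a failure in a can manifest as a symptom at b.
        EDGES = [("power_supply", "drive_motor"),
                 ("drive_motor", "azimuth_axis"),
                 ("power_supply", "camera"),
                 ("encoder", "azimuth_axis")]

        upstream = defaultdict(set)
        for cause, effect in EDGES:
            upstream[effect].add(cause)

        def root_causes(symptom):
            """All components whose failure could reach the symptom."""
            frontier, seen = [symptom], set()
            while frontier:
                node = frontier.pop()
                for cause in upstream[node]:
                    if cause not in seen:
                        seen.add(cause)
                        frontier.append(cause)
            return seen

        print(sorted(root_causes("azimuth_axis")))
        # -> ['drive_motor', 'encoder', 'power_supply']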

  3. Development of a Converter-Based Transmission Line Emulator with Three-Phase Short-Circuit Fault Emulation Capability

    DOE PAGES

    Zhang, Shuoting; Liu, Bo; Zheng, Sheng; ...

    2018-01-01

    A transmission line emulator has been developed to flexibly represent interconnected ac lines under normal operating conditions in a voltage source converter (VSC)-based power system emulation platform. As the most serious short-circuit fault condition, the three-phase short-circuit fault emulation is essential for power system studies. Here, this paper proposes a model to realize a three-phase short-circuit fault emulation at different locations along a single transmission line or one of several parallel-connected transmission lines. At the same time, a combination method is proposed to eliminate the undesired transients caused by the current reference step changes while switching between the fault state and the normal state. Experimental results verify the developed transmission line three-phase short-circuit fault emulation capability.

  4. Development of a Converter-Based Transmission Line Emulator with Three-Phase Short-Circuit Fault Emulation Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Shuoting; Liu, Bo; Zheng, Sheng

    A transmission line emulator has been developed to flexibly represent interconnected ac lines under normal operating conditions in a voltage source converter (VSC)-based power system emulation platform. As the most serious short-circuit fault condition, the three-phase short-circuit fault emulation is essential for power system studies. Here, this paper proposes a model to realize a three-phase short-circuit fault emulation at different locations along a single transmission line or one of several parallel-connected transmission lines. At the same time, a combination method is proposed to eliminate the undesired transients caused by the current reference step changes while switching between the fault state and the normal state. Experimental results verify the developed transmission line three-phase short-circuit fault emulation capability.

  5. Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Perotti, Jose M. (Compiler)

    2014-01-01

    This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  6. UAVSAR for the Management of Natural Disasters

    NASA Astrophysics Data System (ADS)

    Lou, Y.; Hensley, S.; Jones, C. E.

    2014-12-01

    The unique capabilities of imaging radar to penetrate cloud cover and collect data in darkness over large areas at high resolution make it a key information provider for the management and mitigation of natural and human-induced disasters such as earthquakes, volcanoes, landslides, floods, and wildfires. Researchers have demonstrated the use of UAVSAR's fully polarimetric data to determine flood extent, forest fire extent, lava flow, and landslides. UAVSAR also provides highly accurate repeated flight tracks and precise imaging geometry, enabling surface deformation to be measured to a few centimeters' accuracy using InSAR techniques. In fact, UAVSAR's repeat-pass interferometry capability has opened new approaches to managing the risk of natural disasters before these events occur, by modeling and monitoring volcano inflation, earthquake fault movements, landslide rate and extent, and sinkhole precursory movement. In this talk we will present examples of applications of UAVSAR for natural disaster management. This research was conducted at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

  7. Fault detection monitor circuit provides ''self-heal capability'' in electronic modules - A concept

    NASA Technical Reports Server (NTRS)

    Kennedy, J. J.

    1970-01-01

    Self-checking technique detects defective solid state modules used in electronic test and checkout instrumentation. A ten bit register provides failure monitor and indication for 1023 comparator circuits, and the automatic fault-isolation capability permits the electronic subsystems to be repaired by replacing the defective module.

  8. A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Finelli, George B.

    1987-01-01

    Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10 to the -9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pins of, and internal to, a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
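
    A hedged sketch of the kind of statistical comparison described above: time-between-error samples from internal faults and from pin-level stuck-at faults are compared with a two-sample Kolmogorov-Smirnov test. The sample data here are simulated placeholders, not the paper's measurements.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    # Hypothetical time-between-error samples (e.g., in clock cycles) collected
    # under internal faults and under pin-level stuck-at faults.
    tbe_internal = rng.exponential(scale=120.0, size=400)
    tbe_pin_level = rng.exponential(scale=150.0, size=400)

    stat, p_value = ks_2samp(tbe_internal, tbe_pin_level)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
    # A small p-value would suggest the pin-level model fails to reproduce the
    # error behavior induced by internal faults.
    ```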

  9. An effective approach for road asset management through the FDTD simulation of the GPR signal

    NASA Astrophysics Data System (ADS)

    Benedetto, Andrea; Pajewski, Lara; Adabi, Saba; Kusayanagi, Wolfgang; Tosti, Fabio

    2015-04-01

    Ground-penetrating radar is a non-destructive tool widely used in many fields of application including pavement engineering surveys. Over the last decade, the need for further breakthroughs capable of assisting end-users and practitioners as decision-support systems in more effective road asset management has been increasing. In more detail, and despite the high potential and the consolidated results obtained over the years by this non-destructive tool, pavement distress manuals are still based on visual inspections, so that only the effects and not the causes of faults are generally taken into account. In this framework, the use of simulation can represent an effective solution for supporting engineers and decision-makers in understanding the deep responses of both revealed and unrevealed damage. In this study, the potential of finite-difference time-domain simulation of the ground-penetrating radar signal is analyzed by simulating several types of flexible pavement at different center frequencies of investigation typically used for road surveys. For these purposes, the numerical simulator GprMax2D, implementing the finite-difference time-domain method, was used, proving to be a highly effective tool for detecting road faults. Comparisons with simplified undisturbed modelled pavement sections were carried out, showing promising agreement with theoretical expectations, and good prospects for detecting the shape of damage are demonstrated. Therefore, electromagnetic modelling has proved to be a valuable support system in diagnosing the causes of damage, even for early or unrevealed faults. Further work in this research will focus on the modelling of more complex scenarios capable of representing more accurately the real boundary conditions of road cross-sections. Acknowledgements - This work has benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar".
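
    As a minimal illustration of the finite-difference time-domain method underlying simulators such as GprMax2D, the sketch below propagates a pulse on a 1-D free-space grid; it is a toy example of the numerical scheme only, not the GprMax input format or a pavement model.

    ```python
    import numpy as np

    nx, nt = 400, 800
    ez = np.zeros(nx)          # electric field on the grid
    hy = np.zeros(nx - 1)      # magnetic field on the staggered grid
    imp0 = 377.0               # free-space impedance; Courant number of 1 assumed

    for t in range(nt):
        hy += np.diff(ez) / imp0                     # update H from the curl of E
        ez[1:-1] += imp0 * np.diff(hy)               # update E from the curl of H
        ez[50] += np.exp(-((t - 60) / 15.0) ** 2)    # additive Gaussian pulse source

    print("peak |Ez| after propagation:", float(np.abs(ez).max()))
    ```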

  10. Using EMIS to Identify Top Opportunities for Commercial Building Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guanjing; Singla, Rupam; Granderson, Jessica

    Energy Management and Information Systems (EMIS) comprise a broad family of tools and services to manage commercial building energy use. These technologies offer a mix of capabilities to store, display, and analyze energy use and system data, and in some cases, provide control. EMIS technologies enable 10–20 percent site energy savings in best practice implementations. Energy Information System (EIS) and Fault Detection and Diagnosis (FDD) systems are two key technologies in the EMIS family. Energy Information Systems are broadly defined as the web-based software, data acquisition hardware, and communication systems used to analyze and display building energy performance. At a minimum, an EIS provides daily, hourly, or sub-hourly interval meter data at the whole-building level, with graphical and analytical capability. Fault Detection and Diagnosis systems automatically identify heating, ventilation, and air-conditioning (HVAC) system or equipment-level performance issues, and in some cases are able to isolate the root causes of the problem. They use computer algorithms to continuously analyze system-level operational data to detect faults and diagnose their causes. Many FDD tools integrate the trend log data from a Building Automation System (BAS) but otherwise are stand-alone software packages; other types of FDD tools are implemented as “on-board” equipment-embedded diagnostics. (This document focuses on the former.) Analysis approaches adopted in FDD technologies span a variety of techniques from rule-based methods to process history-based approaches. FDD tools automate investigations that would otherwise require manual data inspection by someone with expert knowledge, thereby expanding accessibility and breadth of analysis opportunity, and also reducing complexity.

  11. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
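
    A minimal sketch of the probabilistic summation idea, under an assumed formulation (not taken from the paper): each failure mode's fault management effectiveness is the product of its state estimation and state control stage metrics, weighted by the probability of the failure mode.

    ```python
    failure_modes = [
        # (P(failure mode), P(detect), P(isolate | detect), P(respond | isolate))
        (0.50, 0.99, 0.95, 0.98),
        (0.30, 0.95, 0.90, 0.97),
        (0.20, 0.90, 0.85, 0.92),
    ]

    # Effectiveness of the complete loop for one failure mode is the product of
    # its stage metrics; overall effectiveness is the probability-weighted sum.
    overall = sum(p * d * i * r for p, d, i, r in failure_modes)
    print(f"probability-weighted fault management effectiveness: {overall:.3f}")
    ```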

  12. Graphical workstation capability for reliability modeling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.

    1992-01-01

    In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernel system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
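
    The fault-tree-to-Markov-chain step can be illustrated with a toy continuous-time Markov chain for a two-component parallel system, solved via the matrix exponential; the failure rates are hypothetical and the model is far simpler than anything HARP handles.

    ```python
    import numpy as np
    from scipy.linalg import expm

    lam = 1e-3   # per-hour failure rate of each component (hypothetical)

    # States: 0 = both components up, 1 = one up, 2 = system failed (absorbing).
    Q = np.array([
        [-2 * lam,  2 * lam,  0.0],
        [0.0,      -lam,      lam],
        [0.0,       0.0,      0.0],
    ])

    p0 = np.array([1.0, 0.0, 0.0])   # start with both components healthy
    for t in (10.0, 100.0, 1000.0):
        p_t = p0 @ expm(Q * t)       # transient state probabilities at time t
        print(f"t = {t:6.0f} h: reliability = {1.0 - p_t[2]:.6f}")
    ```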

  13. Vehicle fault diagnostics and management system

    NASA Astrophysics Data System (ADS)

    Gopal, Jagadeesh; Gowthamsachin

    2017-11-01

    This project applies advanced automatic identification technology, which is increasingly used in the fields of transportation and logistics. It covers main functions such as vehicle management and vehicle speed limiting and control. The system starts with an authentication process to keep itself secure. Here we connect sensors to the STM32 board, which in turn is connected to the car through an Ethernet cable, as Ethernet is capable of sending large amounts of data at high speeds. The technology involved clearly shows how a careful combination of software and hardware can produce an extremely cost-effective solution to a problem.

  14. The exponential rise of induced seismicity with increasing stress levels in the Groningen gas field and its implications for controlling seismic risk

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.

    2018-06-01

    Induced seismicity typically arises from the progressive activation of recently inactive geological faults by anthropogenic activity. Faults are mechanically and geometrically heterogeneous, so their extremes of stress and strength govern the initial evolution of induced seismicity. We derive a statistical model of Coulomb stress failures and associated aftershocks within the tail of the distribution of fault stress and strength variations to show initial induced seismicity rates will increase as an exponential function of induced stress. Our model provides operational forecasts consistent with the observed space-time-magnitude distribution of earthquakes induced by gas production from the Groningen field in the Netherlands. These probabilistic forecasts also match the observed changes in seismicity following a significant and sustained decrease in gas production rates designed to reduce seismic hazard and risk. This forecast capability allows reliable assessment of alternative control options to better inform future induced seismic risk management decisions.
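
    A minimal numerical sketch of the exponential dependence described above, with hypothetical parameter values rather than the calibrated Groningen ones: the expected event rate grows as r(ΔS) = r0·exp(β·ΔS) with induced Coulomb stress ΔS.

    ```python
    import numpy as np

    r0 = 0.05        # background-equivalent event rate (events/yr), hypothetical
    beta = 2.0       # stress sensitivity (1/MPa), hypothetical
    delta_stress = np.linspace(0.0, 2.0, 5)   # induced Coulomb stress (MPa)

    rate = r0 * np.exp(beta * delta_stress)
    for s, r in zip(delta_stress, rate):
        print(f"induced stress = {s:.1f} MPa -> expected rate = {r:.3f} events/yr")
    # Because the rate is exponential in stress, reducing the stressing rate
    # (e.g. by cutting production) flattens the forecast rather than merely
    # scaling it.
    ```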

  15. An adaptive unsaturated bistable stochastic resonance method and its application in mechanical fault diagnosis

    NASA Astrophysics Data System (ADS)

    Qiao, Zijian; Lei, Yaguo; Lin, Jing; Jia, Feng

    2017-02-01

    In mechanical fault diagnosis, most traditional methods for signal processing attempt to suppress or cancel noise embedded in vibration signals for extracting weak fault characteristics, whereas stochastic resonance (SR), as a potential tool for signal processing, is able to utilize the noise to enhance fault characteristics. The classical bistable SR (CBSR), as one of the most widely used SR methods, however, has the disadvantage of inherent output saturation. The output saturation not only reduces the output signal-to-noise ratio (SNR) but also limits the enhancement capability for fault characteristics. To overcome this shortcoming, a novel method is proposed to extract the fault characteristics, where a piecewise bistable potential model is established. Simulated signals are used to illustrate the effectiveness of the proposed method, and the results show that the method is able to extract weak fault characteristics and has good enhancement performance and anti-noise capability. Finally, the method is applied to fault diagnosis of bearings and planetary gearboxes, respectively. The diagnosis results demonstrate that the proposed method can obtain larger output SNR, higher spectrum peaks at fault characteristic frequencies, and therefore a larger recognizable degree than the CBSR method.
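
    For orientation, the sketch below simulates the classical bistable SR dynamics with the Euler-Maruyama scheme; the paper's contribution is a piecewise bistable potential that avoids output saturation, which this toy does not implement, and all parameter values are illustrative.

    ```python
    import numpy as np

    # Overdamped particle in U(x) = -a*x^2/2 + b*x^4/4 driven by a weak periodic
    # component (the "fault signature") plus noise; SR lets the noise amplify
    # the periodic component at the drive frequency.
    a, b = 1.0, 1.0
    dt, n = 1e-3, 200_000
    f0, amp, noise_d = 5.0, 0.3, 0.5

    rng = np.random.default_rng(1)
    t = np.arange(n) * dt
    drive = amp * np.sin(2 * np.pi * f0 * t)
    x = np.zeros(n)
    for k in range(n - 1):
        noise = np.sqrt(2 * noise_d * dt) * rng.standard_normal()
        x[k + 1] = x[k] + (a * x[k] - b * x[k] ** 3 + drive[k]) * dt + noise

    spec = np.abs(np.fft.rfft(x)) / n
    freqs = np.fft.rfftfreq(n, dt)
    band = (freqs > 1.0) & (freqs < 20.0)
    # With suitable noise intensity the dominant peak sits at the 5 Hz drive.
    print("dominant frequency in band:", freqs[band][np.argmax(spec[band])], "Hz")
    ```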

  16. Fault detection on a sewer network by a combination of a Kalman filter and a binary sequential probability ratio test

    NASA Astrophysics Data System (ADS)

    Piatyszek, E.; Voignier, P.; Graillot, D.

    2000-05-01

    One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged to the receiving water during rainy events. To meet these goals, managers have to equip sewer networks with real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events, it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. This method consists in comparing the sensor response with a forecast of this response. This forecast is provided by a model, and more precisely by a state estimator: a Kalman filter. This Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with the binary sequential probability ratio test of Wald. Moreover, by crossing available information from several nodes of the network, a diagnosis of the detected anomalies is carried out. This method provided encouraging results during the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.
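
    A hedged sketch of the detection chain on simulated data (not the Seine-Saint-Denis measurements): a scalar random-walk Kalman filter produces innovations, and Wald's binary sequential probability ratio test decides between a zero-mean and a shifted-mean innovation hypothesis. The noise levels, fault size, and error rates are made-up values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, q, r = 300, 0.01, 0.25                 # steps, process and measurement noise
    true_flow = 10.0 + np.cumsum(rng.normal(0.0, np.sqrt(q), n))
    meas = true_flow + rng.normal(0.0, np.sqrt(r), n)
    meas[200:] += 1.5                         # injected sensor bias at step 200

    x, p = meas[0], 1.0                       # Kalman state estimate and variance
    mu1 = 1.0                                 # postulated innovation shift if faulty
    alpha = beta_err = 0.01                   # SPRT error probabilities
    A, B = np.log((1 - beta_err) / alpha), np.log(beta_err / (1 - alpha))
    llr = 0.0
    for k in range(1, n):
        p += q                                # predict (random-walk model)
        s = p + r                             # innovation variance
        innov = meas[k] - x
        gain = p / s
        x += gain * innov                     # update
        p *= 1.0 - gain
        # Log-likelihood ratio increment for N(mu1, s) vs N(0, s) innovations:
        llr += (mu1 * innov - mu1 ** 2 / 2.0) / s
        if llr >= A:
            print(f"fault declared at step {k}")
            break
        if llr <= B:
            llr = 0.0                         # accept nominal, restart the test
    ```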

  17. Tutorial: Advanced fault tree applications using HARP

    NASA Technical Reports Server (NTRS)

    Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.

    1993-01-01

    Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.

  18. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    NASA Technical Reports Server (NTRS)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  19. Unattended network operations technology assessment study. Technical support for defining advanced satellite systems concepts

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Holdridge, Mark; Odubiyi, Jide; Jaworski, Allan; Morgan, Herbert K.

    1991-01-01

    The results are summarized of an unattended network operations technology assessment study for the Space Exploration Initiative (SEI). The scope of the work included: (1) identified possible enhancements due to the proposed Mars communications network; (2) identified network operations on Mars; (3) performed a technology assessment of possible supporting technologies based on current and future approaches to network operations; and (4) developed a plan for the testing and development of these technologies. The most important results obtained are as follows: (1) addition of a third Mars Relay Satellite (MRS) and MRS crosslink capabilities will enhance the network's fault tolerance capabilities through improved connectivity; (2) network functions can be divided into the six basic ISO network functional groups; (3) distributed artificial intelligence technologies will augment more traditional network management technologies to form the technological infrastructure of a virtually unattended network; and (4) a great effort is required to bring the current network technology levels for manned space communications up to the level needed for an automated fault-tolerant Mars communications network.

  20. Adaptive and technology-independent architecture for fault-tolerant distributed AAL solutions.

    PubMed

    Schmidt, Michael; Obermaisser, Roman

    2018-04-01

    Today's architectures for Ambient Assisted Living (AAL) must cope with a variety of challenges like flawless sensor integration and time synchronization (e.g. for sensor data fusion) while abstracting from the underlying technologies at the same time. Furthermore, an architecture for AAL must be capable of managing distributed application scenarios in order to support elderly people in all situations of their everyday life. This encompasses not just life at home but in particular the mobility of elderly people (e.g. when going for a walk or doing sports) as well. Within this paper we will introduce a novel architecture for distributed AAL solutions whose design follows a modern Microservices approach by providing small core services instead of a monolithic application framework. The architecture comprises core services for sensor integration and service discovery while supporting several communication models (periodic, sporadic, streaming). We extend the state of the art by introducing a fault-tolerance model for our architecture on the basis of a fault hypothesis describing the fault-containment regions (FCRs) with their respective failure modes and failure rates in order to support safety-critical AAL applications.

  1. PDSS/IMC requirements and functional specifications

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The system (software and hardware) requirements for the Payload Development Support System (PDSS)/Image Motion Compensator (IMC) are provided. The PDSS/IMC system provides the capability for performing Image Motion Compensator Electronics (IMCE) flight software test, checkout, and verification and provides the capability for monitoring the IMC flight computer system during qualification testing for fault detection and fault isolation.

  2. Optimization of Second Fault Detection Thresholds to Maximize Mission POS

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan

    2018-01-01

    In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regard to navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first failure conditions and still maintain adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events in terms of its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst-case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope, causing loss of mission or, worse, loss of crew. The criteria for operation under second faults allow for a larger set of achievable missions in terms of potential fault conditions, because the INS operates at the edge of its capability. As this performance is defined and controlled at the vehicle level, it allows for the use of system-level margins to increase probability of mission success on the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize probability of mission success and to reduce the probability of false positives (defined as when the INS would report a second fault condition resulting in loss of mission, but the vehicle would still meet insertion requirements within system-level margins). This paper will describe an optimization approach using Genetic Algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach and performance assessment of the results will be presented to demonstrate the applicability of this process to second fault detection to maximize mission probability of success.
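
    The threshold-tuning idea can be sketched with a toy genetic algorithm: candidate per-phase threshold sets are scored against simulated fault magnitudes, penalizing missed mission-ending faults far more than false positives. The fitness model, fault distribution, and all numbers are hypothetical, not the SLS analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    PHASES = 4                                   # one threshold per flight phase

    def fitness(thresholds):
        """Score a threshold set on simulated faults (hypothetical cost model)."""
        faults = rng.gamma(2.0, 1.0, size=(500, PHASES))   # simulated magnitudes
        critical = faults > 3.0                  # would truly end the mission
        flagged = faults > thresholds            # what would be declared
        missed = np.mean(critical & ~flagged)
        false_pos = np.mean(~critical & flagged)
        return -(10.0 * missed + false_pos)      # misses cost 10x false alarms

    pop = rng.uniform(0.5, 5.0, size=(40, PHASES))          # initial population
    for generation in range(60):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-20:]]             # truncation selection
        mutants = parents[rng.integers(0, 20, 20)] + rng.normal(0.0, 0.2, (20, PHASES))
        pop = np.vstack([parents, mutants])                 # elitism + mutation

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("tuned per-phase thresholds:", np.round(best, 2))
    ```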

  3. A Solid-State Fault Current Limiting Device for VSC-HVDC Systems

    NASA Astrophysics Data System (ADS)

    Larruskain, D. Marene; Zamora, Inmaculada; Abarrategui, Oihane; Iturregi, Araitz

    2013-08-01

    Faults in the DC circuit constitute one of the main limitations of voltage source converter (VSC) HVDC systems, as the high fault currents can seriously damage the converters. In this article, a new design for a fault current limiter (FCL) is proposed, which is capable of limiting the fault current as well as interrupting it, isolating the DC grid. The operation of the proposed FCL is analysed and verified for the most common faults that can occur in overhead lines.

  4. Overview of NASA's In Space Robotic Servicing

    NASA Technical Reports Server (NTRS)

    Reed, Benjamin B.

    2015-01-01

    The panel discussion will start with a presentation of the work of the Satellite Servicing Capabilities Office (SSCO), a team responsible for the overall management, coordination, and implementation of satellite servicing technologies and capabilities for NASA. Born from the team that executed the five Hubble servicing missions, SSCO is now maturing a core set of technologies that support both servicing goals and NASA's exploration and science objectives, including: autonomous rendezvous and docking systems; dexterous robotics; high-speed, fault-tolerant computing; advanced robotic tools; and propellant transfer systems. SSCO's proposed Restore-L mission, under development since 2009, is rapidly advancing the core capabilities the fledgling satellite-servicing industry needs to jumpstart a new national industry. Restore-L is also providing key technologies and core expertise to the Asteroid Redirect Robotic Mission (ARRM), with SSCO serving as the capture module lead for the ARRM effort. Reed will present a brief overview of SSCO's history, capabilities, and technologies.

  5. An autonomous fault detection, isolation, and recovery system for a 20-kHz electric power distribution test bed

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Walters, Jerry L.

    1991-01-01

    Future space explorations will require long term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitor and control of the space environment subsystems by expert system software, which emulate human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.

  6. High level organizing principles for display of systems fault information for commercial flight crews

    NASA Technical Reports Server (NTRS)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  7. Demonstrating artificial intelligence for space systems - Integration and project management issues

    NASA Technical Reports Server (NTRS)

    Hack, Edmund C.; Difilippo, Denise M.

    1990-01-01

    As part of its Systems Autonomy Demonstration Project (SADP), NASA has recently demonstrated the Thermal Expert System (TEXSYS). Advanced real-time expert system and human interface technology was successfully developed and integrated with conventional controllers of prototype space hardware to provide intelligent fault detection, isolation, and recovery capability. Many specialized skills were required, and responsibility for the various phases of the project therefore spanned multiple NASA centers, internal departments and contractor organizations. The test environment required communication among many types of hardware and software as well as between many people. The integration, testing, and configuration management tools and methodologies which were applied to the TEXSYS project to assure its safe and successful completion are detailed. The project demonstrated that artificial intelligence technology, including model-based reasoning, is capable of the monitoring and control of a large, complex system in real time.

  8. Intelligent fault management for the Space Station active thermal control system

    NASA Technical Reports Server (NTRS)

    Hill, Tim; Faltisco, Robert M.

    1992-01-01

    The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware-related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.

  9. High-resolution mapping of two large-scale transpressional fault zones in the California Continental Borderland: Santa Cruz-Catalina Ridge and Ferrelo faults

    NASA Astrophysics Data System (ADS)

    Legg, Mark R.; Kohler, Monica D.; Shintaku, Natsumi; Weeraratne, Dayanthie S.

    2015-05-01

    New mapping of two active transpressional fault zones in the California Continental Borderland, the Santa Cruz-Catalina Ridge fault and the Ferrelo fault, was carried out to characterize their geometries, using over 4500 line-km of new multibeam bathymetry data collected in 2010 combined with existing data. Faults identified from seafloor morphology were verified in the subsurface using existing seismic reflection data including single-channel and multichannel seismic profiles compiled over the past three decades. The two fault systems are parallel and are capable of large lateral offsets and reverse slip during earthquakes. The geometry of the fault systems shows evidence of multiple segments that could experience throughgoing rupture over distances exceeding 100 km. Published earthquake hypocenters from regional seismicity studies further define the lateral and depth extent of the historic fault ruptures. Historical and recent focal mechanisms obtained from first-motion and moment tensor studies confirm regional strain partitioning dominated by right slip on major throughgoing faults with reverse-oblique mechanisms on adjacent structures. Transpression on west and northwest trending structures persists as far as 270 km south of the Transverse Ranges; extension persists in the southern Borderland. A logjam model describes the tectonic evolution of crustal blocks bounded by strike-slip and reverse faults which are restrained from northwest displacement by the Transverse Ranges and the southern San Andreas fault big bend. Because of their potential for dip-slip rupture, the faults may also be capable of generating local tsunamis that would impact Southern California coastlines, including populated regions in the Channel Islands.

  10. Development of a microprocessor controller for stand-alone photovoltaic power systems

    NASA Technical Reports Server (NTRS)

    Millner, A. R.; Kaufman, D. L.

    1984-01-01

    A controller for stand-alone photovoltaic systems has been developed using a low-power CMOS microprocessor. It performs battery state-of-charge estimation, array control, load management, instrumentation, automatic testing, and communications functions. Array control options are sequential subarray switching and maximum power control. A calculator keypad and LCD display provide manual control, fault diagnosis, and digital multimeter functions. An RS-232 port provides data logging or remote control capability. A prototype 5 kW unit has been built and tested successfully. The controller is expected to be useful in village photovoltaic power systems, large solar water pumping installations, and other battery management applications.

  11. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.

    1991-01-01

    Significant advances have occurred during the last decade in intelligent systems technologies (a.k.a. knowledge-based systems, KBS) including research, feasibility demonstrations, and technology implementations in operational environments. Evaluation and simulation data obtained to date in real-time operational environments suggest that cost-effective utilization of intelligent systems technologies can be realized for Automated Rendezvous and Capture applications. The successful implementation of these technologies involve a complex system infrastructure integrating the requirements of transportation, vehicle checkout and health management, and communication systems without compromise to systems reliability and performance. The resources that must be invoked to accomplish these tasks include remote ground operations and control, built-in system fault management and control, and intelligent robotics. To ensure long-term evolution and integration of new validated technologies over the lifetime of the vehicle, system interfaces must also be addressed and integrated into the overall system interface requirements. An approach for defining and evaluating the system infrastructures including the testbed currently being used to support the on-going evaluations for the evolutionary Space Station Freedom Data Management System is presented and discussed. Intelligent system technologies discussed include artificial intelligence (real-time replanning and scheduling), high performance computational elements (parallel processors, photonic processors, and neural networks), real-time fault management and control, and system software development tools for rapid prototyping capabilities.

  12. Concurrent development of fault management hardware and software in the SSM/PMAD. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Freeman, Kenneth A.; Walsh, Rick; Weeks, David J.

    1988-01-01

    Space Station issues in fault management are discussed. The system background is described with attention given to design guidelines and power hardware. A contractually developed fault management system, FRAMES, is integrated with the energy management functions, the control switchgear, and the scheduling and operations management functions. The constraints that shaped the FRAMES system and its implementation are considered.

  13. Modeling the Fault Tolerant Capability of a Flight Control System: An Exercise in SCR Specification

    NASA Technical Reports Server (NTRS)

    Alexander, Chris; Cortellessa, Vittorio; DelGobbo, Diego; Mili, Ali; Napolitano, Marcello

    2000-01-01

    In life-critical and mission-critical applications, it is important to make provisions for a wide range of contingencies, by providing means for fault tolerance. In this paper, we discuss the specification of a flight control system that is fault tolerant with respect to sensor faults. Redundancy is provided by analytical relations that hold between sensor readings; depending on the conditions, this redundancy can be used to detect, identify and accommodate sensor faults.

  14. The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters

    NASA Technical Reports Server (NTRS)

    Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)

    1998-01-01

    We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while tree uplinks remain expensive.

  15. Adding Fault Tolerance to NPB Benchmarks Using ULFM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchman, Zachary W; Vallee, Geoffroy R; Naughton III, Thomas J

    2016-01-01

    In the world of high-performance computing, fault tolerance and application resilience are becoming some of the primary concerns because of increasing hardware failures and memory corruptions. While the research community has been investigating various options, from system-level solutions to application-level solutions, standards such as the Message Passing Interface (MPI) are also starting to include such capabilities. The current proposal for MPI fault tolerance is centered around the User-Level Failure Mitigation (ULFM) concept, which provides means for fault detection and recovery of the MPI layer. This approach does not address application-level recovery, which is currently left to application developers. In this work, we present a modification of some of the benchmarks of the NAS parallel benchmark (NPB) suite to include support for the ULFM capabilities as well as application-level strategies and mechanisms for application-level failure recovery. As such, we present: (i) an application-level library to checkpoint and restore data, (ii) extensions of NPB benchmarks for fault tolerance based on different strategies, (iii) a fault injection tool, and (iv) some preliminary results that show the impact of such fault tolerance strategies on the application execution.
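
    The application-level checkpoint/restore strategy can be sketched in Python as below; this illustrates the recovery pattern only, since ULFM itself is an MPI/C-level capability, and the file name and state layout are hypothetical.

    ```python
    import os
    import pickle

    CKPT = "solver_state.ckpt"   # hypothetical checkpoint file name

    def checkpoint(step, data):
        """Atomically persist solver state: write-then-rename keeps the last
        good checkpoint intact if we fail mid-write."""
        with open(CKPT + ".tmp", "wb") as f:
            pickle.dump({"step": step, "data": data}, f)
        os.replace(CKPT + ".tmp", CKPT)

    def restore():
        """Resume from the last checkpoint, or start fresh."""
        if not os.path.exists(CKPT):
            return 0, [0.0] * 8
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
        return state["step"], state["data"]

    step, data = restore()               # after a failure, survivors re-enter here
    while step < 100:
        data = [x + 1.0 for x in data]   # stand-in for one solver iteration
        step += 1
        if step % 10 == 0:
            checkpoint(step, data)
    print("finished at step", step)
    ```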

  16. Spectrum auto-correlation analysis and its application to fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.

    2013-12-01

    Bearing failure is one of the most common reasons of machine breakdowns and accidents. Therefore, the fault diagnosis of rolling element bearings is of great significance to the safe and efficient operation of machines owing to its fault indication and accident prevention capability in engineering applications. Based on the orthogonal projection theory, a novel method is proposed to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings in this paper. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and named as spectral auto-correlation analysis (SACA). Meanwhile, the SACA is a simplified form of the cyclostationary analysis as well and can be iteratively carried out in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, the traditional envelope analysis and the squared envelope analysis, it is found that the result of SACA is more legible due to the more prominent harmonic amplitudes of the fault characteristic frequency and that the SACA with the proper iteration will further enhance the fault features.
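
    As a point of reference, the sketch below computes the squared envelope spectrum, the special case that SACA generalizes, for a simulated bearing signal whose 37 Hz fault impacts modulate a 3 kHz carrier; all frequencies and noise levels are illustrative.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs, dur = 12_000, 2.0                        # sample rate (Hz), duration (s)
    f_fault, f_carrier = 37.0, 3_000.0           # fault and resonance frequencies
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(4)

    # Periodic impacts at the fault frequency modulate a high-frequency carrier.
    modulation = 1.0 + np.cos(2 * np.pi * f_fault * t)
    signal = modulation * np.sin(2 * np.pi * f_carrier * t)
    signal += 0.5 * rng.standard_normal(t.size)

    env_sq = np.abs(hilbert(signal)) ** 2        # squared envelope
    spec = np.abs(np.fft.rfft(env_sq - env_sq.mean()))
    freqs = np.fft.rfftfreq(env_sq.size, 1.0 / fs)
    band = (freqs > 5.0) & (freqs < 200.0)
    # The dominant envelope component should sit near the 37 Hz fault frequency.
    print("dominant envelope frequency:", freqs[band][np.argmax(spec[band])], "Hz")
    ```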

  17. Adaptive Hierarchical Voltage Control of a DFIG-Based Wind Power Plant for a Grid Fault

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Park, Jung-Wook

    This paper proposes an adaptive hierarchical voltage control scheme of a doubly-fed induction generator (DFIG)-based wind power plant (WPP) that can secure more reserve of reactive power (Q) in the WPP against a grid fault. To achieve this, each DFIG controller employs an adaptive reactive power to voltage (Q-V) characteristic. The proposed adaptive Q-V characteristic is temporally modified depending on the available Q capability of a DFIG; it is dependent on the distance from a DFIG to the point of common coupling (PCC). The proposed characteristic secures more Q reserve in the WPP than the fixed one. Furthermore, it allows DFIGs to promptly inject up to the Q limit, thereby improving the PCC voltage support. To avert an overvoltage after the fault clearance, washout filters are implemented in the WPP and DFIG controllers; they can prevent a surplus Q injection after the fault clearance by eliminating the accumulated values in the proportional-integral controllers of both controllers during the fault. Test results demonstrate that the scheme can improve the voltage support capability during the fault and suppress transient overvoltage after the fault clearance under scenarios of various system and fault conditions; therefore, it helps ensure grid resilience by supporting the voltage stability.
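
    A hedged sketch of an adaptive Q-V characteristic of the kind described above: each DFIG's reactive power command follows a voltage droop whose gain scales with its remaining Q headroom, so units with more reserve respond more strongly. The gains and ratings are made-up values, not the paper's design.

    ```python
    import numpy as np

    def q_command(v_pu, q_avail, k_base=5.0, v_ref=1.0, q_rated=1.0):
        """Reactive power command (pu): droop gain scales with available headroom."""
        k = k_base * (q_avail / q_rated)          # adaptive slope
        return float(np.clip(k * (v_ref - v_pu), -q_avail, q_avail))

    # Deepening voltage sags at the PCC; a unit with 0.6 pu of headroom responds
    # until it saturates at its available reactive power limit.
    for v in (0.95, 0.85, 0.70):
        print(f"V = {v:.2f} pu -> Q command = {q_command(v, q_avail=0.6):.2f} pu")
    ```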

  18. Model Based Inference for Wire Chafe Diagnostics

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Wheeler, Kevin R.; Timucin, Dogan A.; Wysocki, Philip F.; Kowalski, Marc Edward

    2009-01-01

    Presentation for Aging Aircraft conference covering chafing fault diagnostics using Time Domain Reflectometry. Laboratory setup and experimental methods are presented, along with initial results that summarize fault modeling and detection capabilities.

  19. The Orion GN and C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery

    NASA Technical Reports Server (NTRS)

    King, Ellis; Hart, Jeremy; Odegard, Ryan

    2010-01-01

    The Orion Crew Exploration Vehicle (CEV) is being designed to include significantly more automation capability than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. A data-driven flight software architecture will provide an evolvable automation capability to sequence through Guidance, Navigation & Control (GN&C) flight software modes and configurations while maintaining the required flexibility and human control over the automation. This flexibility is a key aspect needed to address the maturation of operational concepts, to permit ground and crew operators to gain trust in the system, and to mitigate unpredictability in human spaceflight. To allow for mission flexibility and reconfigurability, a data-driven approach is being taken to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. A database of GN&C-level sequencing data is presented which manages and tracks the mission-specific and algorithm parameters to provide a capability to schedule GN&C events within mission segments. The flight software data schema for performing automated mission sequencing is presented with a concept of operations for interactions with ground and onboard crew members. A prototype architecture for fault identification, isolation, and recovery interactions with the automation software is presented and discussed as a forward work item.

  20. Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.

    2007-03-01

    In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher-resolution imaging capabilities. A radiology core conducting clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault-tolerance and dynamic integration for imaging-based clinical trials.

  1. Final Project Report: Imaging Fault Zones Using a Novel Elastic Reverse-Time Migration Imaging Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Chen, Ting; Tan, Sirui

    Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply-dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation at the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results of 3D surface seismic data from the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada at Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.

  2. Framework for a space shuttle main engine health monitoring system

    NASA Technical Reports Server (NTRS)

    Hawman, Michael W.; Galinaitis, William S.; Tulpule, Sharayu; Mattedi, Anita K.; Kamenetz, Jeffrey

    1990-01-01

    A framework developed for a health management system (HMS) which is directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable low cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.

  3. Fault displacement hazard assessment for nuclear installations based on IAEA safety standards

    NASA Astrophysics Data System (ADS)

    Fukushima, Y.

    2016-12-01

    In IAEA safety requirements NS-R-3, surface fault displacement hazard assessment (FDHA) is required for the siting of nuclear installations. If any capable faults exist at a candidate site, the IAEA recommends the consideration of alternative sites. However, owing to progress in palaeoseismological investigations, capable faults may be found at existing sites. In such a case, the IAEA recommends evaluating safety using probabilistic FDHA (PFDHA), an empirical approach based on a still quite limited database. A basic and crucial improvement is therefore to increase the database. In 2015, the IAEA produced TecDoc-1767 on palaeoseismology as a reference for the identification of capable faults. Another IAEA publication, Safety Report 85 on ground motion simulation based on fault rupture modelling, provides an annex introducing recent PFDHAs and fault displacement simulation methodologies. The IAEA expanded the FDHA project to cover both the probabilistic approach and physics-based fault rupture modelling. The first approach needs a refinement of the empirical methods by building a worldwide database, and the second approach needs to shift from a kinematic to a dynamic scheme. The two approaches can complement each other, since simulated displacements can fill the gaps of a sparse database and geological observations can be used to calibrate the simulations. The IAEA supported a workshop in October 2015 to discuss the existing databases with the aim of creating a common worldwide database, and a consensus on a unified database was reached. The next milestone is to fill the database with as many fault rupture data sets as possible. Another IAEA working group held a workshop in November 2015 to discuss the state of the art in PFDHA as well as simulation methodologies. The two groups held a joint consultancy meeting in February 2016, shared information, identified issues, discussed goals and outputs, and scheduled future meetings. The aim now is to coordinate activities across the whole set of FDHA tasks.

  4. GTEX: An expert system for diagnosing faults in satellite ground stations

    NASA Technical Reports Server (NTRS)

    Schlegelmilch, Richard F.; Durkin, John; Petrik, Edward J.

    1991-01-01

    A proof of concept expert system called Ground Terminal Expert (GTEX) was developed at The University of Akron in collaboration with NASA Lewis Research Center. The objective of GTEX is to aid in diagnosing data faults occurring with a digital ground terminal. This strategy can also be applied to the Very Small Aperture Terminal (VSAT) technology. An expert system which detects and diagnoses faults would enhance the performance of the VSAT by improving reliability and reducing maintenance time. GTEX is capable of detecting faults, isolating the cause and recommending appropriate actions. Isolation of faults is completed to board-level modules. A graphical user interface provides control and a medium where data can be requested and cryptic information logically displayed. Interaction with GTEX consists of user responses and input from data files. The use of data files provides a method of simulating dynamic interaction between the digital ground terminal and the expert system. GTEX as described is capable of both improving reliability and reducing the time required for necessary maintenance.

  6. Discovering the Complexity of Capable Faults in Northern Chile

    NASA Astrophysics Data System (ADS)

    Gonzalez, G.; del Río, I. A.; Rojas Orrego, C., Sr.; Astudillo, L. A., Sr.

    2017-12-01

    Great crustal earthquakes (Mw > 7.0) in the upper plate of subduction zones are relatively uncommon and poorly documented. We hypothesize that crustal earthquakes are poorly represented in the instrumental record because they have long recurrence intervals. In northern Chile, the extreme long-term aridity permits extraordinary preservation of landforms related to fault activity, making this region a primary target for understanding how upper-plate faults work at subduction zones. To understand how these faults relate to crustal seismicity over the long term, we conducted a detailed palaeoseismological study, integrating trench logging and UAV-based photogrammetry. Optically stimulated luminescence (OSL) age determinations were performed to date deposits linked to faulting. In this contribution we present a case study of two primary faults located in the Coastal Cordillera of northern Chile between Iquique (21°S) and Antofagasta (24°S). We estimate the maximum moment magnitude of earthquakes generated on these upper-plate faults, their recurrence interval, and the fault-slip rate. We conclude that the studied upper-plate faults show complex kinematics on geological timescales. The faults seem to change their kinematics from normal (extension) to reverse (compression), or from normal to transcurrent (compression), according to the stage of the subduction earthquake cycle: normal displacement is related to coseismic stages, and compression is linked to the interseismic period. As a result of this complex interaction, these faults are capable of generating Mw 7.0 earthquakes, with recurrence times on the order of thousands of years, during every stage of the subduction earthquake cycle.

  7. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
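
    To make the "reusable design element" idea concrete, here is a toy rendering of one commonly catalogued resilience pattern, checkpoint/rollback; the class and API are our invention, not the document's pattern notation:

    ```python
    # Toy checkpoint/rollback pattern: save state periodically, roll back on
    # an injected transient failure and redo the lost step.
    import copy, random

    class CheckpointRollback:
        """Reusable pattern: periodic state capture plus rollback recovery."""
        def __init__(self, state):
            self.state = state
            self.saved = copy.deepcopy(state)

        def checkpoint(self):
            self.saved = copy.deepcopy(self.state)

        def rollback(self):
            self.state = copy.deepcopy(self.saved)

    random.seed(1)
    app = CheckpointRollback({"iteration": 0, "sum": 0.0})
    while app.state["iteration"] < 10:
        app.checkpoint()
        app.state["iteration"] += 1
        app.state["sum"] += app.state["iteration"]
        if random.random() < 0.2:      # injected transient failure
            app.rollback()             # recover and redo the lost step
    print(app.state)
    ```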

  8. Integrated software health management for aerospace guidance, navigation, and control systems: A probabilistic reasoning approach

    NASA Astrophysics Data System (ADS)

    Mbaya, Timmy

    Embedded aerospace systems have to perform safety- and mission-critical operations in a real-time environment where timing and functional correctness are extremely important. Guidance, Navigation, and Control (GN&C) systems rely substantially on complex software interfacing with hardware in real time; any faults in software or hardware, or in their interaction, could result in fatal consequences. Integrated Software Health Management (ISWHM) provides an approach for detection and diagnosis of software failures while the software is in operation. The ISWHM approach is based on probabilistic modeling of software and hardware sensors using a Bayesian network. To meet the memory and timing constraints of real-time embedded execution, the Bayesian network is compiled into an arithmetic circuit, which is used for online monitoring. This type of system monitoring, using an ISWHM, provides automated reasoning capabilities that compute diagnoses in a timely manner when failures occur. This reasoning capability enables time-critical mitigating decisions and relieves the human agent from the time-consuming and arduous task of foraging through a multitude of isolated, and often contradictory, diagnosis data. For the purpose of demonstrating the relevance of ISWHM, modeling and reasoning are performed on a simple simulated aerospace system running on a real-time operating system emulator, the OSEK/Trampoline platform. Models for a small satellite and an F-16 fighter jet GN&C system have been implemented. Analysis of the ISWHM is then performed by injecting faults and analyzing the ISWHM's diagnoses.
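
    A hand-rolled miniature of the Bayesian reasoning step, reduced to a single binary fault node and one alarm sensor; the priors and likelihoods are invented for illustration, and a real ISWHM compiles a much larger network into an arithmetic circuit:

    ```python
    # Two-node Bayesian diagnosis: P(software fault | sensor evidence)
    # by direct application of Bayes' rule. All numbers are made up.
    P_FAULT = 0.01                      # prior probability of a software fault
    P_ALARM_GIVEN_FAULT = 0.95          # monitor fires when a fault is present
    P_ALARM_GIVEN_OK = 0.02             # false-alarm rate

    def posterior_fault(alarm: bool) -> float:
        """Bayes' rule over the single binary fault hypothesis."""
        like_f = P_ALARM_GIVEN_FAULT if alarm else 1 - P_ALARM_GIVEN_FAULT
        like_ok = P_ALARM_GIVEN_OK if alarm else 1 - P_ALARM_GIVEN_OK
        evidence = like_f * P_FAULT + like_ok * (1 - P_FAULT)
        return like_f * P_FAULT / evidence

    print(f"P(fault | alarm)    = {posterior_fault(True):.3f}")
    print(f"P(fault | no alarm) = {posterior_fault(False):.5f}")
    ```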

  9. Implementation of a Goal-Based Systems Engineering Process Using the Systems Modeling Language (SysML)

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.

    2013-01-01

    Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) presented by Dr. Stephen B. Johnson in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper describes the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). Ideally, the two papers would be accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is not currently capable of implementing the theoretical approach described in the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required by the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements and functions, and to their linkage to design. As SysML is a growing standard for systems engineering, it is important to develop methods to implement GFT in it. Proposed method of solution: many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
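
    As a rough illustration of the data structure behind a Goal-Function Tree (not the paper's SysML stereotypes), a tree in which goals decompose into functions and functions into lower-level goals might look like the following; the names and the success criterion are invented:

    ```python
    # Bare-bones GFT-like structure: goals decompose into functions, which
    # decompose into lower-level goals; leaves are checked against a state map.
    class GFTNode:
        def __init__(self, kind, name, children=None):
            assert kind in ("goal", "function")
            self.kind, self.name = kind, name
            self.children = children or []

        def satisfied(self, state):
            """A node holds when all children hold (leaves: look up state)."""
            if not self.children:
                return state.get(self.name, False)
            return all(c.satisfied(state) for c in self.children)

    tree = GFTNode("goal", "maintain thrust", [
        GFTNode("function", "feed propellant", [
            GFTNode("goal", "tank pressure in range"),
            GFTNode("goal", "valve open"),
        ]),
    ])
    print(tree.satisfied({"tank pressure in range": True, "valve open": True}))
    print(tree.satisfied({"tank pressure in range": True, "valve open": False}))
    ```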

  10. Fault management and systems knowledge

    DOT National Transportation Integrated Search

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  11. Imperfect construction of microclusters

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Zhou, K.; Gilbert, G.; Weinstein, Y. S.

    2014-01-01

    Microclusters are the basic building blocks used to construct cluster states capable of supporting fault-tolerant quantum computation. In this paper, we explore the consequences of errors on microcluster construction using two error models. To quantify the effect of the errors we calculate the fidelity of the constructed microclusters and the fidelity with which two such microclusters can be fused together. Such simulations are vital for gauging the capability of an experimental system to achieve fault tolerance.
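
    For concreteness, the fidelity figure of merit can be computed numerically; the sketch below compares an ideal single-qubit state with a copy degraded by a small random over-rotation, an error model chosen by us rather than one of the paper's two models:

    ```python
    # Numerical illustration of state fidelity F = |<psi|phi>|^2 between an
    # ideal single-qubit state and a noisy copy (toy error model).
    import numpy as np

    rng = np.random.default_rng(42)
    ideal = np.array([1.0, 1.0]) / np.sqrt(2.0)      # |+> state of one qubit

    eps = 0.05 * rng.standard_normal()               # small over-rotation error
    c, s = np.cos(eps), np.sin(eps)
    noisy = np.array([[c, -s], [s, c]]) @ ideal      # slightly rotated state

    fidelity = abs(np.vdot(ideal, noisy)) ** 2
    print(f"fidelity = {fidelity:.6f}")
    ```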

  12. Rock mechanics. Superplastic nanofibrous slip zones control seismogenic fault friction.

    PubMed

    Verberne, Berend A; Plümper, Oliver; de Winter, D A Matthijs; Spiers, Christopher J

    2014-12-12

    Understanding the internal mechanisms controlling fault friction is crucial for understanding seismogenic slip on active faults. Displacement in such fault zones is frequently localized on highly reflective (mirrorlike) slip surfaces, coated with thin films of nanogranular fault rock. We show that mirror-slip surfaces developed in experimentally simulated calcite faults consist of aligned nanogranular chains or fibers that are ductile at room conditions. These microstructures and associated frictional data suggest a fault-slip mechanism resembling classical Ashby-Verrall superplasticity, capable of producing unstable fault slip. Diffusive mass transfer in nanocrystalline calcite gouge is shown to be fast enough for this mechanism to control seismogenesis in limestone terrains. With nanogranular fault surfaces becoming increasingly recognized in crustal faults, the proposed mechanism may be generally relevant to crustal seismogenesis. Copyright © 2014, American Association for the Advancement of Science.

  13. Has El Salvador Fault Zone produced M ≥ 7.0 earthquakes? The 1719 El Salvador earthquake

    NASA Astrophysics Data System (ADS)

    Canora, C.; Martínez-Díaz, J.; Álvarez-Gómez, J.; Villamor, P.; Ínsua-Arévalo, J.; Alonso-Henar, J.; Capote, R.

    2013-05-01

    Historically, large earthquakes (Mw ≥ 7.0) in the El Salvador area have been attributed to activity in the Cocos-Caribbean subduction zone. This is correct for most of the earthquakes of magnitude greater than 6.5. However, recent paleoseismic evidence points to the existence of large earthquakes associated with rupture of the El Salvador Fault Zone, an E-W oriented strike-slip fault system that extends for 150 km through central El Salvador. To calibrate our results from paleoseismic studies, we have analyzed the historical seismicity of the area. In particular, we suggest that the 1719 earthquake can be associated with paleoseismic activity evidenced in the El Salvador Fault Zone. A reinterpreted isoseismal map for this event suggests that the damage reported could have been a consequence of rupture of the El Salvador Fault Zone rather than rupture of the subduction zone. The isoseismal pattern is no different from those of other upper-crustal earthquakes in similar tectonovolcanic environments. We thus challenge the traditional assumption that only the subduction zone is capable of generating earthquakes of magnitude greater than 7.0 in this region. This result has broad implications for future risk management in the region: the potential occurrence of strong ground motion, significantly higher and closer to Salvadorian populations than assumed to date, must be considered in seismic hazard assessment studies in this area.

  14. Developing a Fault Management Guidebook for Nasa's Deep Space Robotic Missions

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Jacome, Raquel Weitl

    2015-01-01

    NASA designs and builds systems that achieve incredibly ambitious goals, as evidenced by the Curiosity rover traversing on Mars, the highly complex International Space Station orbiting our Earth, and the compelling plans for capturing, retrieving, and redirecting an asteroid into a lunar orbit to create a nearby target to be investigated by astronauts. In order to accomplish these feats, the missions must be imbued with sufficient knowledge and capability not only to realize the goals, but also to identify and respond to off-nominal conditions. Fault Management (FM) is the discipline of establishing how a system will respond to preserve its ability to function even in the presence of faults. In 2012, NASA released a draft FM Handbook in an attempt to coalesce the field by establishing a unified terminology and a common process for designing FM mechanisms. However, FM approaches are very diverse across NASA, especially between the different mission types such as Earth orbiters, launch vehicles, deep space robotic vehicles, and human spaceflight missions, and the authors were challenged to capture and represent all of these views. The authors recognized that a necessary precursor step is for each sub-community to codify its FM policies, practices, and approaches in individual, focused guidebooks. Then, the sub-communities can look across NASA to better understand the different ways off-nominal conditions are addressed, and to seek commonality or at least an understanding of the multitude of FM approaches. This paper describes the development of the "Deep Space Robotic Fault Management Guidebook," which is intended to be the first of NASA's FM guidebooks. Its purpose is to be a field guide for FM practitioners working on deep space robotic missions, as well as a planning tool for project managers. Publication of this Deep Space Robotic FM Guidebook is expected in early 2015. The guidebook will be posted on NASA's Engineering Network on the FM Community of Practice website so that it will be available to all NASA projects. Future plans for subsequent guidebooks for the other NASA sub-communities are proposed.

  15. Partial and total actuator faults accommodation for input-affine nonlinear process plants.

    PubMed

    Mihankhah, Amin; Salmasi, Farzad R; Salahshoor, Karim

    2013-05-01

    In this paper, a new fault-tolerant control system is proposed for input-affine nonlinear plants based on a Model Reference Adaptive System (MRAS) structure. The proposed method has the capability to accommodate both partial and total actuator failures along with bounded external disturbances. In this methodology, the conventional MRAS control law is modified by augmenting two compensating terms: one is added to eliminate the nonlinear dynamics, while the other compensates for the disruptive effects of total actuator faults and external disturbances. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed method. Moreover, the control structure has good robustness against parameter variation. The performance of this scheme was evaluated on a CSTR system, and the results were satisfactory. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Fault diagnostic instrumentation design for environmental control and life support systems

    NASA Technical Reports Server (NTRS)

    Yang, P. Y.; You, K. C.; Wynveen, R. A.; Powell, J. D., Jr.

    1979-01-01

    As development moves toward flight hardware, system availability becomes an important design aspect, requiring high reliability and maintainability. As part of continuous development efforts, a program to evaluate, design, and demonstrate advanced instrumentation fault diagnostics was successfully completed. Fault-tolerance designs for reliability and other instrumentation capabilities to increase maintainability were evaluated and studied.

  17. Fly-By-Light/Power-By-Wire Fault-Tolerant Fiber-Optic Backplane

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2002-01-01

    The design and development of a fault-tolerant fiber-optic backplane to demonstrate the feasibility of such an architecture is presented. Simulation results of test cases on the backplane in the event of induced faults are presented, and the fault recovery capability of the architecture is demonstrated. The architecture was designed, developed, and implemented using the Very High Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL). The architecture was synthesized and implemented in hardware using Field Programmable Gate Arrays (FPGA) on multiple prototype boards.

  18. General Purpose Data-Driven Monitoring for Space Operations

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Martin, Rodney A.; Schwabacher, Mark A.; Spirkovska, Liljana; Taylor, William McCaa; Castle, Joseph P.; Mackey, Ryan M.

    2009-01-01

    As modern space propulsion and exploration systems improve in capability and efficiency, their designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, model-based, or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults or failures. The Inductive Monitoring System (IMS) is a data-driven system health monitoring software tool that has been successfully applied to several aerospace applications. IMS uses a data mining technique called clustering to analyze archived system data and characterize normal interactions between parameters. The scope of IMS-based data-driven monitoring applications continues to expand with current development activities. Successful IMS deployment in the International Space Station (ISS) flight control room to monitor ISS attitude control systems has led to applications in other ISS flight control disciplines, such as thermal control. It has also generated interest in data-driven monitoring capability for Constellation, NASA's program to replace the Space Shuttle with new launch vehicles and spacecraft capable of returning astronauts to the moon, and then on to Mars. Several projects are currently underway to evaluate and mature the IMS technology and complementary tools for use in the Constellation program. These include an experiment on board the Air Force TacSat-3 satellite, and ground systems monitoring for NASA's Ares I-X and Ares I launch vehicles. The TacSat-3 Vehicle System Management (TVSM) project is a software experiment to integrate fault and anomaly detection algorithms and diagnosis tools with executive and adaptive planning functions contained in the flight software on board the Air Force Research Laboratory TacSat-3 satellite. The TVSM software package will be uploaded after launch to monitor spacecraft subsystems such as power and guidance, navigation, and control (GN&C). It will analyze data in real time to demonstrate detection of faults and unusual conditions, diagnose problems, and react to threats to spacecraft health and mission goals. The experiment will demonstrate the feasibility and effectiveness of integrated system health management (ISHM) technologies with both ground and on-board experiments.
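
    A condensed sketch of the clustering idea behind IMS-style monitoring, assuming scikit-learn is available: characterize archived nominal data as cluster centroids, then score live samples by distance to the nearest centroid. The data, cluster count, and threshold below are synthetic, not IMS's actual configuration:

    ```python
    # Cluster nominal operations data, then flag live samples that fall far
    # from every nominal cluster centroid.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    nominal = rng.normal([0.0, 1.0], 0.1, size=(1000, 2))  # archived nominal data

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(nominal)
    threshold = 0.5                                        # set from nominal spread

    def anomaly_score(sample):
        """Distance from the sample to the nearest nominal cluster centroid."""
        return np.linalg.norm(km.cluster_centers_ - sample, axis=1).min()

    for sample in (np.array([0.02, 1.01]), np.array([1.5, -0.5])):
        s = anomaly_score(sample)
        print(f"score={s:.3f} -> {'ANOMALY' if s > threshold else 'nominal'}")
    ```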

  19. Intelligent Hardware-Enabled Sensor and Software Safety and Health Management for Autonomous UAS

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Schumann, Johann; Ippolito, Corey

    2015-01-01

    Unmanned Aerial Systems (UAS) can only be deployed if they can effectively complete their mission and respond to failures and uncertain environmental conditions while maintaining safety with respect to other aircraft as well as humans and property on the ground. We propose to design a real-time, onboard system health management (SHM) capability to continuously monitor essential system components such as sensors, software, and hardware systems for detection and diagnosis of failures and violations of safety or performance rules during the flight of a UAS. Our approach to SHM is three-pronged, providing: (1) real-time monitoring of sensor and software signals; (2) signal analysis, preprocessing, and advanced on-the-fly temporal and Bayesian probabilistic fault diagnosis; (3) an unobtrusive, lightweight, read-only, low-power hardware realization using Field Programmable Gate Arrays (FPGAs) in order to avoid overburdening limited computing resources or costly re-certification of flight software due to instrumentation. No currently available SHM capabilities (or combinations of currently existing SHM capabilities) come anywhere close to satisfying these three criteria, yet NASA will require such intelligent, hardware-enabled sensor and software safety and health management for introducing autonomous UAS into the National Airspace System (NAS). We propose a novel approach of creating modular building blocks for combining responsive runtime monitoring of temporal logic system safety requirements with model-based diagnosis and Bayesian network-based probabilistic analysis. Our proposed research program includes both developing this novel approach and demonstrating its capabilities using the NASA Swift UAS as a demonstration platform.
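
    A tiny runtime monitor in the spirit of the proposed temporal-logic checks; the bounded-response rule ("feedback must follow a command within 3 steps") and the trace are invented, and a real monitor would be synthesized into FPGA logic rather than run in Python:

    ```python
    # Runtime monitor for a bounded-response safety rule over a telemetry stream.
    def monitor(stream, deadline=3):
        """Yield a violation index whenever feedback misses its deadline."""
        pending = None                     # step at which a command was seen
        for i, (cmd, fb) in enumerate(stream):
            if cmd and pending is None:
                pending = i
            if fb:
                pending = None             # requirement discharged
            if pending is not None and i - pending >= deadline:
                yield i                    # deadline passed with no feedback
                pending = None

    trace = [(0, 0), (1, 0), (0, 0), (0, 1),   # command answered in time
             (1, 0), (0, 0), (0, 0), (0, 0)]   # command never answered
    print(list(monitor(trace)))                # -> [7]
    ```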

  20. Characteristic investigation and control of a modular multilevel converter-based HVDC system under single-line-to-ground fault conditions

    DOE PAGES

    Shi, Xiaojie; Wang, Zhiqiang; Liu, Bo; ...

    2014-05-16

    This paper presents the analysis and control of a modular multilevel converter (MMC)-based HVDC transmission system under three possible single-line-to-ground fault conditions, with special focus on the investigation of their different fault characteristics. Considering positive-, negative-, and zero-sequence components in both arm voltages and currents, the generalized instantaneous power of a phase unit is derived theoretically according to the equivalent circuit model of the MMC under unbalanced conditions. Based on this model, a novel double-line-frequency dc-voltage ripple suppression control is proposed. This controller, together with the negative- and zero-sequence current control, can enhance the overall fault-tolerant capability of the HVDC system without additional cost. To further improve the fault-tolerant capability, the operating performance of the HVDC system with and without single-phase switching is discussed and compared in detail. Lastly, simulation results from a three-phase MMC-HVDC system generated with MATLAB/Simulink are provided to support the theoretical analysis and proposed control schemes.
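
    The sequence-component bookkeeping such an analysis rests on can be shown numerically; below is a standard Fortescue decomposition of an invented unbalanced voltage set, not the paper's MMC model:

    ```python
    # Fortescue decomposition of unbalanced three-phase phasors into
    # zero-, positive-, and negative-sequence components.
    import numpy as np

    a = np.exp(2j * np.pi / 3)                     # 120-degree rotation operator
    A_inv = (1 / 3) * np.array([[1, 1,    1],
                                [1, a,    a**2],
                                [1, a**2, a]])

    # Phase A sagged, e.g. by a single-line-to-ground fault (invented values).
    v_abc = np.array([0.4 * np.exp(0j),
                      1.0 * np.exp(-2j * np.pi / 3),
                      1.0 * np.exp(+2j * np.pi / 3)])

    v0, v1, v2 = A_inv @ v_abc
    for name, v in zip(("zero", "positive", "negative"), (v0, v1, v2)):
        print(f"{name:8s} sequence: |V| = {abs(v):.3f}")
    ```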

  1. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior, failure modes and fault propagation, in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as fault trees, to portray system behavior efficiently and to depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
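
    A miniature of the artifact-generation idea: walk a component graph (standing in for the SysML model) and emit FMEA rows by following failure-propagation relationships. The components, failure modes, and "feeds" relation below are invented:

    ```python
    # Traverse a toy component graph and write FMEA rows as CSV.
    import csv, io

    COMPONENTS = {
        "battery":   {"modes": ["cell short"],   "feeds": ["power bus"]},
        "power bus": {"modes": ["open circuit"], "feeds": ["radio", "computer"]},
        "radio":     {"modes": ["tx failure"],   "feeds": []},
        "computer":  {"modes": ["reset loop"],   "feeds": []},
    }

    def downstream(name, seen=None):
        """Transitively collect every component a failure can propagate to."""
        seen = seen or set()
        for nxt in COMPONENTS[name]["feeds"]:
            if nxt not in seen:
                seen.add(nxt)
                downstream(nxt, seen)
        return seen

    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Component", "Failure mode", "System-level effect"])
    for name, info in COMPONENTS.items():
        for mode in info["modes"]:
            effects = ", ".join(sorted(downstream(name))) or "local only"
            writer.writerow([name, mode, f"loss/degradation of: {effects}"])
    print(buf.getvalue())
    ```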

  2. Fix-Forward: A Comparison of the Army’s Requirements and Capabilities for Forward Support Maintenance,

    DTIC Science & Technology

    1983-04-01

    ...with diagnostic software based on a "fault tree" representation of the M65 ThS to bridge the gap in diagnostics capability was demonstrated in 1980 and... identification friend or foe, which has much lower reliability than TSQ-73-peculiar hardware. Thus, as in other examples, reported readiness does not reflect...

  3. Integrated analysis of error detection and recovery

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1985-01-01

    An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagation, which seriously degrades the fault-tolerance capability. Several detection models were developed that enable analysis of the effect of detection mechanisms on the subsequent error-handling operations and the overall system reliability. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The error recovery strategies employed depend on the detection mechanisms and the available redundancy. Several recovery methods, including rollback recovery, are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.

  4. A New Partial Reconfiguration-Based Fault-Injection System to Evaluate SEU Effects in SRAM-Based FPGAs

    NASA Astrophysics Data System (ADS)

    Sterpone, L.; Violante, M.

    2007-08-01

    Modern SRAM-based field programmable gate array (FPGA) devices offer high capability for implementing complex systems. Unfortunately, SRAM-based FPGAs are extremely sensitive to single event upsets (SEUs) induced by radiation particles. In order to successfully deploy safety- or mission-critical applications, designers need to validate the correctness of the obtained designs. In this paper we describe a system based on partial reconfiguration for running fault-injection experiments within the configuration memory of SRAM-based FPGAs. The proposed fault-injection system uses the internal configuration capabilities that modern FPGAs offer in order to inject SEUs into the configuration memory. Detailed experimental results show that the technique is orders of magnitude faster than previously proposed ones.
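
    Abstracting the configuration memory as a bit array, an SEU campaign reduces to flipping one bit per run and comparing outputs against a golden run. The sketch below captures only that control loop; the real system drives the FPGA's internal reconfiguration port, and the "design" here is a stand-in function of ours:

    ```python
    # Toy single-bit fault-injection campaign over a mock configuration memory.
    import random

    random.seed(7)
    CONFIG_BITS = 1 << 12                 # toy configuration memory size

    def run_design(config):
        """Stand-in for executing the FPGA design; only half the bits matter."""
        return sum(config[:CONFIG_BITS // 2]) % 2

    golden = [0] * CONFIG_BITS
    expected = run_design(golden)

    failures = 0
    for _ in range(100):                           # 100 single-bit injection runs
        target = random.randrange(CONFIG_BITS)
        faulty = list(golden)
        faulty[target] ^= 1                        # inject the SEU
        if run_design(faulty) != expected:         # compare against golden run
            failures += 1
    print(f"{failures}/100 injections produced an observable failure")
    ```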

  5. Online Monitoring Technical Basis and Analysis Framework for Large Power Transformers; Interim Report for FY 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nancy J. Lybeck; Vivek Agarwal; Binh T. Pham

    The Light Water Reactor Sustainability program at Idaho National Laboratory (INL) is actively conducting research to develop and demonstrate online monitoring (OLM) capabilities for active components in existing nuclear power plants. A pilot project is currently underway to apply OLM to generator step-up transformers (GSUs) and emergency diesel generators (EDGs). INL and the Electric Power Research Institute (EPRI) are working jointly to implement the pilot project. The EPRI Fleet-Wide Prognostic and Health Management (FW-PHM) Software Suite will be used to implement monitoring in conjunction with utility partners: the Shearon Harris Nuclear Generating Station (owned by Duke Energy) for GSUs, and Braidwood Generating Station (owned by Exelon Corporation) for EDGs. This report presents monitoring techniques, fault signatures, and diagnostic and prognostic models for GSUs. GSUs are main transformers that are directly connected to generators, stepping up the voltage from the generator output voltage to the highest transmission voltages for supplying electricity to the transmission grid. Technical experts from Shearon Harris are assisting INL and EPRI in identifying critical faults and defining the fault signatures associated with each fault. The resulting diagnostic models will be implemented in the FW-PHM Software Suite and tested using data from Shearon Harris. Parallel research on EDGs is being conducted and will be reported in an interim report during the first quarter of fiscal year 2013.

  6. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Stephen B.

    2010-01-01

    Software plays an increasingly large role in all aspects of NASA's science missions. This has been extended to the identification, management, and control of faults that affect safety-critical functions and, by extension, the overall success of the mission. Traditionally, the analyses of fault identification, management, and control have been hardware-based. With the increasing complexity of systems, there has been a corresponding increase in the complexity of fault management software. The NASA Independent Verification & Validation (IV&V) program is creating processes and procedures to identify and incorporate safety-critical software requirements, along with corresponding software faults, so that potential hazards may be mitigated. This "Specific to Generic ... A Case for Reuse" paper describes the phases of a dependability and safety study that identifies a new process for creating a foundation of reusable assets. These assets support the identification and management of specific software faults and their transformation from specific to generic software faults. This approach also has applications to systems outside of the NASA environment. This paper addresses how a mission-specific dependability and safety case is being transformed to a generic dependability and safety case that can be reused for any type of space mission, with an emphasis on software fault conditions.

  7. Fenix, A Fault Tolerant Programming Framework for MPI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamel, Marc; Teranihi, Keita; Valenzuela, Eric

    2016-10-05

    Fenix provides APIs to allow the users to add fault tolerance capability to MPI-based parallel programs in a transparent manner. Fenix-enabled programs can run through process failures during program execution using a pool of spare processes accommodated by Fenix.

  8. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults but not both, this method considers sensor, actuator, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules were developed and evaluated with the NASA C-MAPSS engine model. Several typical engine fault modes, including a subset of sensor/actuator/component faults, were tested under a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.

  9. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up transformers (GSUs). INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting in Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
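
    A worked instance of the loading-profile idea: per-hour aging acceleration factors summed into life consumption. The F_AA expression follows the widely used IEEE C57.91 form (110 °C reference hot spot); the load profile and the assumed total insulation life are invented, not values from this report:

    ```python
    # Hot-spot-temperature-driven life consumption, IEEE C57.91-style.
    from math import exp

    def aging_factor(hot_spot_c):
        """Aging acceleration factor relative to a 110 C reference hot spot."""
        return exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

    profile_c = [95.0] * 16 + [118.0] * 8        # hypothetical 24-h hot-spot profile
    equivalent_hours = sum(aging_factor(t) for t in profile_c)
    print(f"equivalent aging: {equivalent_hours:.1f} h in a 24 h day")

    NOMINAL_LIFE_H = 180000.0                    # assumed total insulation life
    print(f"life consumed today: {100 * equivalent_hours / NOMINAL_LIFE_H:.4f}%")
    ```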

  10. Modeling the data management system of Space Station Freedom with DEPEND

    NASA Technical Reports Server (NTRS)

    Olson, Daniel P.; Iyer, Ravishankar K.; Boyd, Mark A.

    1993-01-01

    Some of the features and capabilities of the DEPEND simulation-based modeling tool are described. A study of a 1553B local bus subsystem of the Space Station Freedom Data Management System (SSF DMS) is used to illustrate some types of system behavior that can be important to reliability and performance evaluations of this type of spacecraft. A DEPEND model of the subsystem is used to illustrate how these types of system behavior can be modeled, and shows what kinds of engineering and design questions can be answered through the use of these modeling techniques. DEPEND's process-based simulation environment is shown to provide a flexible method for modeling complex interactions between hardware and software elements of a fault-tolerant computing system.

  11. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexity. As a result, the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  12. Planetary Gearbox Fault Diagnosis Using a Single Piezoelectric Strain Sensor

    DTIC Science & Technology

    2014-12-23

    However, fault detection in a planetary gearbox is very complicated given the complex nature of the dynamic rolling structure of the planetary gearbox... vibration transfer paths due to the unique dynamic structure of rotating planet gears. Therefore, it is difficult to diagnose PGB faults via vibration... al. 2014). To overcome the above-mentioned challenges in developing an effective PGB fault diagnosis capability, a research investigation on...

  13. Design of the Protocol Processor for the ROBUS-2 Communication System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.

    2005-01-01

    The ROBUS-2 Protocol Processor (RPP) is a custom-designed hardware component implementing the functionality of the ROBUS-2 fault-tolerant communication system. The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs) in the presence of a bounded number of faults. These services include message broadcast (Byzantine agreement), dynamic communication schedule update, time reference (clock synchronization), and distributed diagnosis (group membership). ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 tolerates internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. ROBUS consists of RPPs connected to each other by a lower-level physical communication network. The RPP has a pipelined architecture, and the design is parameterized in the behavioral and structural domains. The design of the RPP enables the bus to achieve a PE-message throughput that approaches the available bandwidth at the physical layer.

  14. Hierarchical Control Scheme for Improving Transient Voltage Recovery of a DFIG-Based WPP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Kang, Yong Cheol

    Modern grid codes require that wind power plants (WPPs) inject reactive power according to the voltage dip at the point of interconnection (POI). This requirement helps to support the POI voltage during a fault. However, once a fault is cleared, the POI and wind turbine generator (WTG) voltages are likely to exceed acceptable levels unless the WPP reduces the injected reactive power quickly. This might deteriorate the stability of the grid by causing WTGs to disconnect to avoid damage. This paper proposes a hierarchical control scheme for a doubly-fed induction generator (DFIG)-based WPP. The proposed scheme aims to improve the reactive power injection capability during the fault and to suppress the overvoltage after the fault clearance. To achieve the former, an adaptive reactive power-to-voltage scheme is implemented in each DFIG controller so that a DFIG with a larger reactive power capability will inject more reactive power. To achieve the latter, a washout filter is used to capture the high-frequency component contained in the WPP voltage, which is used to remove the accumulated values in the proportional-integral controllers. Test results indicate that the scheme successfully supports the grid voltage during the fault and recovers WPP voltages without exceeding the limit after the fault clearance.
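
    A discrete washout (high-pass) filter of the kind described takes only a few lines; it passes the fast transient at fault clearance and rejects steady state, so its output can trigger flushing of the PI integrators. The gains, time constant, and signal below are illustrative, not the paper's tuned values:

    ```python
    # First-order discrete washout filter: zero DC gain, passes fast transients.
    def washout(samples, dt=0.001, tau=0.05):
        """y[k] = a*(y[k-1] + x[k] - x[k-1]), with a = tau/(tau+dt)."""
        a = tau / (tau + dt)
        y, x_prev, out = 0.0, samples[0], []
        for x in samples:
            y = a * (y + x - x_prev)
            x_prev = x
            out.append(y)
        return out

    # Voltage step at sample 500 (fault cleared): filter spikes, then decays.
    v = [0.5] * 500 + [1.05] * 500
    w = washout(v)
    print(f"{w[499]:.4f} {w[500]:.4f} {w[999]:.4f}")  # ~0, spike ~0.54, back to ~0
    ```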

  15. Rotorcraft Diagnostics

    NASA Technical Reports Server (NTRS)

    Haste, Deepak; Azam, Mohammad; Ghoshal, Sudipto; Monte, James

    2012-01-01

    Health management (HM) in any engineering system requires adequate understanding of the system's functioning; a sufficient amount of monitored data; the capability to extract, analyze, and collate information; and the capability to combine understanding and information for HM-related estimation and decision-making. Rotorcraft systems are, in general, highly complex, and obtaining adequate understanding of their functioning is quite difficult because of the proprietary (restricted access) nature of their designs and dynamic models. Development of an EIM (exact inverse map) solution for rotorcraft requires a process that can overcome the abovementioned difficulties and maximally utilize monitored information for HM facilitation via advanced analytic techniques. The goal was to develop a versatile HM solution for rotorcraft to facilitate Condition Based Maintenance Plus (CBM+) capabilities. The effort was geared toward developing analytic and reasoning techniques, and proving the ability to embed the required capabilities on a rotorcraft platform, paving the way for implementing the solution on an aircraft-level system for consolidation and reporting. The solution for rotorcraft can be used offboard or embedded directly onto a rotorcraft system. The envisioned solution utilizes available monitored and archived data for real-time fault detection and identification, failure precursor identification, offline fault detection and diagnostics, health condition forecasting, optimal guided troubleshooting, and maintenance decision support. A variant of the onboard version is a self-contained hardware and software (HW+SW) package that can be embedded on rotorcraft systems. The HM solution comprises components that gather/ingest data and information, perform information/feature extraction, analyze information in conjunction with the dependency/diagnostic model of the target system, facilitate optimal guided troubleshooting, and offer decision support for optimal maintenance.

  16. Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing

    DTIC Science & Technology

    2006-09-01

    ...undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant...

  17. Second-order sliding mode control for DFIG-based wind turbines fault ride-through capability enhancement.

    PubMed

    Benbouzid, Mohamed; Beltran, Brice; Amirat, Yassine; Yao, Gang; Han, Jingang; Mangel, Hervé

    2014-05-01

    This paper deals with the fault ride-through capability assessment of a doubly fed induction generator-based wind turbine using high-order sliding mode control. It has recently been suggested that sliding mode control is a solution of choice to the fault ride-through problem. In this context, this paper proposes a second-order sliding mode as an improved solution that handles the classical sliding mode chattering problem. The main and attractive features of high-order sliding modes are robustness against external disturbances, grid faults in particular, and chattering-free behavior (no extra mechanical stress on the wind turbine drive train). Simulations using the NREL FAST code on a 1.5-MW wind turbine are carried out to evaluate the ride-through performance of the proposed high-order sliding mode control strategy in the case of grid frequency variations and unbalanced voltage sags. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
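
    For readers unfamiliar with the control form, here is a generic super-twisting (second-order sliding mode) loop on a toy scalar plant; the gains and the plant are ours, and the paper applies the technique to a full DFIG model instead:

    ```python
    # Generic super-twisting algorithm:
    #   u = -k1*sqrt(|s|)*sign(s) + v,   dv/dt = -k2*sign(s)
    # driving the sliding variable s toward zero despite a bounded disturbance.
    import math

    k1, k2, dt = 1.5, 1.1, 1e-3
    x, v = 1.0, 0.0                       # state (sliding variable s = x), integrator
    for step in range(5000):
        s = x
        u = -k1 * math.sqrt(abs(s)) * math.copysign(1.0, s) + v
        v += -k2 * math.copysign(1.0, s) * dt
        x += (0.2 * math.sin(0.001 * step) + u) * dt   # toy plant with disturbance
    print(f"|s| after 5 s: {abs(x):.5f}")               # driven toward zero
    ```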

  18. A study of Quaternary structures in the Qom region, West Central Iran

    NASA Astrophysics Data System (ADS)

    Babaahmadi, A.; Safaei, H.; Yassaghi, A.; Vafa, H.; Naeimi, A.; Madanipour, S.; Ahmadi, M.

    2010-12-01

    West Central Iran comprises numerous Quaternary faults. Having either strike-slip or thrust mechanisms, these faults are potentially active and therefore capable of generating destructive earthquakes. In this paper, we use satellite images as well as field observations to identify the active faults in the Qom region. The Qom and Indes faults are the main NW-trending faults, along which a Quaternary restraining step-over zone has formed. The Kamarkuh, Mohsen Abad, and Ferdows anticlines are potentially active structures that formed in this restraining step-over zone. Some thrusts and anticlines, such as the Alborz anticline and Alborz fault, are parallel to strike-slip faults such as the Qom fault, indicating deformation partitioning in the area. In addition to the NW-trending structures, there is an important NE-trending fault known as the Qomrud fault, which has deformed Quaternary deposits and affected the Kushk-e-Nosrat fault, the Alborz anticline, and the Qomrud River. The results of this study imply that the major Quaternary faults of West Central Iran and their restraining step-over zones are potentially active.

  19. A Design of Finite Memory Residual Generation Filter for Sensor Fault Detection

    NASA Astrophysics Data System (ADS)

    Kim, Pyung Soo

    2017-04-01

    In the current paper, a residual generation filter with a finite memory structure is proposed for sensor fault detection. The proposed finite-memory residual generation filter provides the residual by real-time filtering of the fault vector using only the most recent finite measurements and inputs on the window. It is shown that the residual given by the proposed filter reproduces the fault exactly for noise-free systems. The proposed residual generation filter is given a digital filter structure for amenability to hardware implementation. Finally, to illustrate the capability of the proposed filter, extensive simulations are performed for a discretized DC motor system with two types of sensor faults: an incipient soft bias-type fault and an abrupt bias-type fault. In particular, meaningful simulation results are given for diverse noise levels and window lengths for the abrupt bias-type fault.
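
    A sliding-window caricature of the finite-memory idea: the residual at each step compares the newest measurement against a least-squares constant fit over only the last N samples, so old data never accumulates. The DC-motor model is replaced here by an invented constant-plus-noise signal with an abrupt bias fault:

    ```python
    # Finite-memory residual: estimate from the last `window` samples only.
    import numpy as np

    def finite_memory_residual(y, window=20):
        res = np.zeros_like(y)
        for k in range(window, len(y)):
            est = y[k - window:k].mean()     # LS estimate from the window only
            res[k] = y[k] - est
        return res

    rng = np.random.default_rng(3)
    y = 2.0 + 0.05 * rng.standard_normal(400)
    y[250:] += 0.5                           # abrupt bias-type sensor fault
    r = finite_memory_residual(y)
    print(f"pre-fault |r| max:  {abs(r[100:250]).max():.3f}")
    print(f"post-fault |r| max: {abs(r[250:300]).max():.3f}")
    ```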

  20. Economic modeling of fault tolerant flight control systems in commercial applications

    NASA Technical Reports Server (NTRS)

    Finelli, G. B.

    1982-01-01

    This paper describes the current development of a comprehensive model which will supply the assessment and analysis capability to investigate the economic viability of Fault Tolerant Flight Control Systems (FTFCS) for commercial aircraft of the 1990's and beyond. An introduction to the unique attributes of fault tolerance and how they will influence aircraft operations and consequent airline costs and benefits is presented. Specific modeling issues and elements necessary for accurate assessment of all costs affected by ownership and operation of FTFCS are delineated. Trade-off factors are presented, aimed at exposing economically optimal realizations of system implementations, resource allocation, and operating policies. A trade-off example is furnished to graphically display some of the analysis capabilities of the comprehensive simulation model now being developed.

  1. An improved wrapper-based feature selection method for machinery fault diagnosis

    PubMed Central

    2017-01-01

    A major issue in machinery fault diagnosis using vibration signals is that it is over-reliant on personnel knowledge and experience in interpreting the signal; thus, machine learning has been adopted for machinery fault diagnosis. The quantity and quality of the input features, however, influence the fault classification performance. Feature selection plays a vital role in selecting the most representative feature subset for the machine learning algorithm. In the wrapper-based feature selection (WFS) method, however, a trade-off between the capability to select the best feature subset and the computational effort required is inevitable. This paper proposes an improved WFS technique integrated with a support vector machine (SVM) classifier as a complete fault diagnosis system for a rolling element bearing case study. The bearing vibration dataset made available by the Case Western Reserve University Bearing Data Centre was processed using the proposed WFS, and its performance has been analysed and discussed. The results reveal that the proposed WFS secures the best feature subset with lower computational effort by eliminating the redundancy of re-evaluation. The proposed WFS has therefore been found capable of carrying out feature selection tasks efficiently. PMID:29261689
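
    A compact wrapper-based forward-selection sketch with an SVM scorer, where memoization of evaluated subsets stands in for the paper's elimination of redundant re-evaluation; the data is synthetic rather than the bearing vibration features, and scikit-learn is assumed:

    ```python
    # Greedy forward wrapper selection with a cross-validated SVM scorer.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))
    y = (X[:, 1] + X[:, 4] > 0).astype(int)       # only features 1 and 4 matter

    cache = {}
    def score(subset):
        key = tuple(sorted(subset))
        if key not in cache:                       # skip redundant re-evaluation
            cache[key] = cross_val_score(SVC(), X[:, key], y, cv=5).mean()
        return cache[key]

    selected = []
    while len(selected) < 3:
        best = max((f for f in range(8) if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    print("selected features:", selected, "CV accuracy:", round(score(selected), 3))
    ```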

  2. Automation and robotics - Key to productivity. [in industry and space

    NASA Technical Reports Server (NTRS)

    Cohen, A.

    1985-01-01

    The automated and robotic systems requirements of the NASA Space Station are prompted by maintenance, repair, servicing and assembly requirements. Trend analyses, fault diagnoses, and subsystem status assessments for the Station's electrical power, guidance, navigation, control, data management and environmental control subsystems will be undertaken by cybernetic expert systems; this will reduce or eliminate on-board or ground facility activities that would otherwise be essential, enhancing system productivity. Additional capabilities may also be obtained through the incorporation of even a limited amount of artificial intelligence in the controllers of the various Space Station systems.

  3. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, model uncertainty, and external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a testbed of a multiple-robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  4. Study of fault-tolerant software technology

    NASA Technical Reports Server (NTRS)

    Slivinski, T.; Broglio, C.; Wild, C.; Goldberg, J.; Levitt, K.; Hitt, E.; Webb, J.

    1984-01-01

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of quantitative techniques and models developed to assess its impact. It examines research efforts as well as experience gained from commercial application of these techniques. The paper also addresses the computer architecture and design implications on hardware, operating systems and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research state. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to effectively and efficiently implement software fault-tolerance.

  5. Seismic images and fault relations of the Santa Monica thrust fault, West Los Angeles, California

    USGS Publications Warehouse

    Catchings, R.D.; Gandhok, G.; Goldman, M.R.; Okaya, D.

    2001-01-01

    In May 1997, the US Geological Survey (USGS) and the University of Southern California (USC) acquired high-resolution seismic reflection and refraction images on the grounds of the Wadsworth Veterans Administration Hospital (WVAH) in the city of Los Angeles (Fig. 1a,b). The objective of the seismic survey was to better understand the near-surface geometry and faulting characteristics of the Santa Monica fault zone. In this report, we present seismic images, an interpretation of those images, and a comparison of our results with results from studies by Dolan and Pratt (1997), Pratt et al. (1998), and Gibbs et al. (2000). The Santa Monica fault is one of several northeast-southwest-trending, north-dipping reverse faults that extend through the Los Angeles metropolitan area (Fig. 1a). Through much of the area, the Santa Monica fault trends subparallel to the Hollywood fault, but the two faults apparently join into a single fault zone to the southwest and to the northeast (Dolan et al., 1995). The Santa Monica and Hollywood faults may be part of a larger fault system that extends from the Pacific Ocean to the Transverse Ranges; Crook et al. (1983) refer to this fault system as the Malibu Coast-Santa Monica-Raymond-Cucamonga fault system. They suggest that these faults have not formed a contiguous zone since the Pleistocene and conclude that each of the faults should be treated as a separate fault with respect to seismic hazards. However, Dolan et al. (1995) suggest that the Hollywood and Santa Monica faults are capable of generating Mw 6.8 and Mw 7.0 earthquakes, respectively. Thus, regardless of whether the overall fault system is connected and capable of rupturing in one event, each of the faults individually presents a sizable earthquake hazard to the Los Angeles metropolitan area. If these faults are connected and were to rupture along a continuous fault rupture, the resulting hazard would be even greater. Although the Santa Monica fault represents a hazard to millions of people, its lateral extent and rupture history are not well known, due largely to limited knowledge of the fault location, geometry, and relationship to other faults. The Santa Monica fault has been obscured at the surface by alluvium and urbanization; for example, Dolan et al. (1995) could find only one 200-m-long stretch of the Santa Monica fault that was not covered by either streets or buildings. Of the 19-km-long onshore section of the Santa Monica fault, the apparent location has been delineated largely on the basis of geomorphic features and oil-well drilling. Seismic imaging efforts, in combination with other investigative methods, may be the best approach to locating and understanding the Santa Monica fault in the Los Angeles region. This investigation and another recent seismic imaging investigation (Pratt et al., 1998) were undertaken to resolve the near-surface location, fault geometry, and faulting relations associated with the Santa Monica fault.

  6. Real-Time Projection to Verify Plan Success During Execution

    NASA Technical Reports Server (NTRS)

    Wagner, David A.; Dvorak, Daniel L.; Rasmussen, Robert D.; Knight, Russell L.; Morris, John R.; Bennett, Matthew B.; Ingham, Michel D.

    2012-01-01

    The Mission Data System provides a framework for modeling complex systems in terms of system behaviors and goals that express intent. Complex activity plans can be represented as goal networks that express the coordination of goals on different state variables of the system. Real-time projection extends the ability of this system to verify plan achievability (all goals can be satisfied over the entire plan) into the execution domain so that the system is able to continuously re-verify a plan as it is executed, and as the states of the system change in response to goals and the environment. Previous versions were able to detect and respond to goal violations when they actually occur during execution. This new capability enables the prediction of future goal failures; specifically, goals that were previously found to be achievable but are no longer achievable due to unanticipated faults or environmental conditions. Early detection of such situations enables operators or an autonomous fault response capability to deal with the problem at a point that maximizes the available options. For example, this system has been applied to the problem of managing battery energy on a lunar rover as it is used to explore the Moon. Astronauts drive the rover to waypoints and conduct science observations according to a plan that is scheduled and verified to be achievable with the energy resources available. As the astronauts execute this plan, the system uses this new capability to continuously re-verify the plan as energy is consumed to ensure that the battery will never be depleted below safe levels across the entire plan.
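
    To make the projection idea concrete, the following minimal sketch re-verifies a battery-energy goal against the remaining plan each time new telemetry arrives. It is illustrative only: the activity names, drain rates, and safety floor are assumptions, not the Mission Data System's goal-network API.

        from dataclasses import dataclass

        @dataclass
        class Activity:
            kind: str        # e.g. "drive" or "science"
            duration: float  # hours

        DRAIN = {"drive": 0.08, "science": 0.03}   # assumed SOC fraction used per hour

        def project(soc, plan):
            """Project battery state of charge across the remaining plan."""
            trace = [soc]
            for act in plan:
                soc -= DRAIN[act.kind] * act.duration
                trace.append(soc)
            return trace

        def achievable(soc_now, plan, floor=0.30):
            """The energy goal holds iff projected SOC never falls below the floor."""
            return all(s >= floor for s in project(soc_now, plan))

        plan = [Activity("drive", 2.0), Activity("science", 1.5), Activity("drive", 3.0)]
        for soc_now in (0.95, 0.72, 0.58):         # successive telemetry updates
            if not achievable(soc_now, plan):
                print(f"SOC {soc_now}: predicted future goal failure -> replan")

    Here a plan that was verified achievable when first scheduled is flagged as soon as the measured energy state drops below what the remaining activities require, before any goal has actually been violated.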

  7. Autonomous spacecraft maintenance study group

    NASA Technical Reports Server (NTRS)

    Marshall, M. H.; Low, G. D.

    1981-01-01

    A plan to incorporate autonomous spacecraft maintenance (ASM) capabilities into Air Force spacecraft by 1989 is outlined. It includes the successful operation of the spacecraft without ground operator intervention for extended periods of time. Mechanisms, along with a fault-tolerant data processing system (including a nonvolatile backup memory) and an autonomous navigation capability, are needed to replace the routine servicing that is presently performed by the ground system. The state-of-the-art fault handling capabilities of various spacecraft and computers are described, and a set of conceptual design requirements needed to achieve ASM is established. Implementation plans for the near-term technology development needed for an ASM proof-of-concept demonstration by 1985, and a research agenda addressing long-range academic research for an advanced ASM system for the 1990s, are also established.

  8. A fault-tolerant control architecture for unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Drozeski, Graham R.

    Research has presented several approaches to achieve varying degrees of fault-tolerance in unmanned aircraft. Approaches in reconfigurable flight control are generally divided into two categories: those which incorporate multiple non-adaptive controllers and switch between them based on the output of a fault detection and identification element, and those that employ a single adaptive controller capable of compensating for a variety of fault modes. Regardless of the approach for reconfigurable flight control, certain fault modes dictate system restructuring in order to prevent a catastrophic failure. System restructuring enables active control of actuation not employed by the nominal system to recover controllability of the aircraft. After system restructuring, continued operation requires the generation of flight paths that adhere to an altered flight envelope. The control architecture developed in this research employs a multi-tiered hierarchy to allow unmanned aircraft to generate and track safe flight paths despite the occurrence of potentially catastrophic faults. The hierarchical architecture increases the level of autonomy of the system by integrating five functionalities with the baseline system: fault detection and identification, active system restructuring, reconfigurable flight control, reconfigurable path planning, and mission adaptation. Fault detection and identification algorithms continually monitor aircraft performance and issue fault declarations. When the severity of a fault exceeds the capability of the baseline flight controller, active system restructuring expands the controllability of the aircraft using unconventional control strategies not exploited by the baseline controller. Each of the reconfigurable flight controllers and the baseline controller employ a proven adaptive neural network control strategy. A reconfigurable path planner employs an adaptive model of the vehicle to re-shape the desired flight path. Generation of the revised flight path is posed as a linear program constrained by the response of the degraded system. Finally, a mission adaptation component estimates limitations on the closed-loop performance of the aircraft and adjusts the aircraft mission accordingly. A combination of simulation and flight test results using two unmanned helicopters validates the utility of the hierarchical architecture.

  9. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques, like expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for speedy on-line identification of failure source components.
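
    The following sketch illustrates the core idea on a toy fault propagation digraph; the node names and time intervals are invented for illustration and do not come from the paper. A hypothesized failure source is accepted only if a single failure time can explain every observed alarm, given the propagation-time intervals on the edges.

        import collections

        # edges[u] = [(v, t_min, t_max)]: a failure at u reaches v within [t_min, t_max]
        edges = {
            "pump_cavitation": [("low_pressure", 1, 3)],
            "low_pressure":    [("flow_alarm", 2, 5), ("temp_rise", 4, 9)],
        }

        def consistent_sources(alarms, edges):
            """Nodes whose single failure can explain all observed alarm times."""
            nodes = set(edges) | {v for outs in edges.values() for v, _, _ in outs}
            sources = []
            for s in nodes:
                reach = {s: (0.0, 0.0)}              # cumulative interval from s
                queue = collections.deque([s])
                while queue:
                    u = queue.popleft()
                    for v, lo, hi in edges.get(u, []):
                        if v not in reach:
                            reach[v] = (reach[u][0] + lo, reach[u][1] + hi)
                            queue.append(v)
                if not all(a in reach for a in alarms):
                    continue                          # s cannot reach every alarm
                # a common failure time t must satisfy T_a - hi <= t <= T_a - lo
                t_lo = max(alarms[a] - reach[a][1] for a in alarms)
                t_hi = min(alarms[a] - reach[a][0] for a in alarms)
                if t_lo <= t_hi:
                    sources.append(s)
            return sources

        print(consistent_sources({"flow_alarm": 10, "temp_rise": 13}, edges))

    Because the search is incremental over nodes and edges, algorithms of this kind can be made restartable as new alarms arrive, which is what makes the approach attractive for on-line diagnosis.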

  10. Large transient fault current test of an electrical roll ring

    NASA Technical Reports Server (NTRS)

    Yenni, Edward J.; Birchenough, Arthur G.

    1992-01-01

    The space station uses precision rotary gimbals to provide for sun tracking of its photovoltaic arrays. Electrical power, command signals, and data are transferred across the gimbals by roll rings. Roll rings have been shown to be capable of highly efficient electrical transmission and long life through tests conducted at the NASA Lewis Research Center and Honeywell's Satellite and Space Systems Division in Phoenix, AZ. Large potential fault currents inherent to the power system's DC distribution architecture have brought about the need to evaluate the effects of large transient fault currents on roll rings. A test recently conducted at Lewis subjected a roll ring to a simulated worst-case space station electrical fault. The system model used to obtain the fault profile is described, along with details of the reduced-order circuit that was used to simulate the fault. Test results comparing roll ring performance before and after the fault are also presented.

  11. Design of on-board parallel computer on nano-satellite

    NASA Astrophysics Data System (ADS)

    You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li

    2007-11-01

    This paper presents a scheme for an on-board parallel computer system designed for a nano-satellite. Driven by the requirements that a nano-satellite have small volume, low weight, low power consumption, and on-board intelligence, the scheme abandons the traditional single-computer and dual-computer architectures in an effort to improve dependability, capability, and intelligence simultaneously. Following an integrated design approach, it employs a shared-memory parallel computer as the main structure; connects the telemetry, attitude control, and payload systems over an intelligent bus; provides management functions that handle static tasks and dynamic task scheduling, and protect and recover on-site status in accordance with the parallel algorithms; and establishes mechanisms for fault diagnosis, restoration, and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability, and intelligence, flexible management of hardware resources, a capable software system, and good extensibility, fully in keeping with the concept and trend of integrated electronics design.

  12. The Buffer Diagnostic Prototype: A fault isolation application using CLIPS

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1994-01-01

    This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, combined with its lack of any automatic fault isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple-strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experimental knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.

  13. Research and technology goals and objectives for Integrated Vehicle Health Management (IVHM)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Integrated Vehicle Health Management (IVHM) is defined herein as the capability to efficiently perform checkout, testing, and monitoring of space transportation vehicles, subsystems, and components before, during, and after operations. This includes the ability to perform timely status determination, diagnostics, and prognostics. IVHM must support fault-tolerant responses, including system/subsystem reconfiguration to prevent catastrophic failures, and must support the planning and scheduling of post-operational maintenance. The purpose of this document is to establish the rationale for IVHM and IVHM research and technology planning, and to develop technical goals and objectives. This document is prepared to provide a broad overview of IVHM for technology and advanced development activities and, more specifically, to provide a planning reference from an avionics viewpoint under the OAST Transportation Technology Program Strategic Plan.

  14. Survivable algorithms and redundancy management in NASA's distributed computing systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw

    1992-01-01

    The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We conclude with an overview of our hybrid algorithm technique, which provides an alternative to the formal approach.

  15. Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems

    NASA Technical Reports Server (NTRS)

    Walker, M.; Figueroa, F.

    2015-01-01

    The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. The associated loss of pressure control also affects the stability of the cryogenic fluid and the ability to control its phase during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere, involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.
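
    The following is a minimal sketch of the kind of correlation test described, under assumed names and thresholds; it is not the authors' algorithm. A section isolated by a commanded-closed valve should hold pressure, so a sustained downstream rise that tracks upstream pressure suggests leak-through at that valve.

        def pearson(x, y):
            """Plain Pearson correlation (avoids external dependencies)."""
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            num = sum((a - mx) * (b - my) for a, b in zip(x, y))
            den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
            return num / den if den else 0.0

        def leak_through(up_psi, down_psi, valve_closed, rise_tol=0.5, corr_tol=0.7):
            """Flag leak-through across one isolation valve (illustrative thresholds)."""
            if not all(valve_closed):
                return False                          # only meaningful while isolated
            d_down = [b - a for a, b in zip(down_psi, down_psi[1:])]
            rising = sum(d_down) > rise_tol           # net downstream pressure gain
            return rising and pearson(up_psi[1:], d_down) > corr_tol

        up   = [50, 60, 55, 65, 70, 68]               # upstream pressure, psi
        down = [10, 10.6, 11.1, 11.8, 12.6, 13.3]     # downstream creeps up with it
        print(leak_through(up, down, [True] * 6))     # -> True

    Running the same test per valve, with the piping topology taken from the domain model, is what allows the fault to be isolated to a specific valve rather than merely detected.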

  16. An Investigation of Network Enterprise Risk Management Techniques to Support Military Net-Centric Operations

    DTIC Science & Technology

    2009-09-01

    this information supports the decision-making process as it is applied to the management of risk. 2. Operational Risk: Operational risk is the threat... reasonability. However, to make a software system fault tolerant, the system needs to recognize and fix a system state condition. To detect a fault, a fault...

  17. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
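
    As a toy illustration of the verification step (not the SysML/JPF tool-chain itself), the sketch below exhaustively explores a two-mode manager composed with an injected battery fault and a fault protection response, checking the safety property "never in science mode with an active fault"; all names are invented.

        def step(state, event):
            """Transition relation for a toy mode manager with fault protection."""
            mode, fault = state
            if event == "inject_battery_fault":
                return (mode, True)
            if event == "fp_response" and fault:
                return ("safe", fault)                # fault protection demotes mode
            if event in ("science", "safe"):
                return (event, fault)                 # commanded mode change
            return state

        def check(prop, events):
            """Explore every reachable state; return a counterexample if one exists."""
            frontier, seen = [("safe", False)], set()
            while frontier:
                state = frontier.pop()
                if state in seen:
                    continue
                seen.add(state)
                if not prop(state):
                    return state
                frontier += [step(state, e) for e in events]
            return None

        prop = lambda s: not (s[0] == "science" and s[1])
        print(check(prop, ["science", "safe", "inject_battery_fault", "fp_response"]))

    The check returns the state ("science", True): a fault arising while in science mode violates the instantaneous property before the fault protection response fires. Surfacing such interleavings, which a simulation campaign can easily miss, is exactly what model checking adds over fault injection alone.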

  18. SIRU development. Volume 3: Software description and program documentation

    NASA Technical Reports Server (NTRS)

    Oehrle, J.

    1973-01-01

    The development and initial evaluation of a strapdown inertial reference unit (SIRU) system are discussed. The SIRU configuration is a modular inertial subsystem with hardware and software features that achieve fault-tolerant operational capabilities. The SIRU redundant hardware design is formulated about a six gyro and six accelerometer instrument module package. The six-axis array provides redundant, independent sensing, and its symmetry enables the formulation of an optimal software structure for redundant data processing with self-contained fault detection and isolation (FDI) capabilities. The basic SIRU software coding system used in the DDP-516 computer is documented.

  19. Fuzzy logic based on-line fault detection and classification in transmission line.

    PubMed

    Adhikari, Shuma; Sinha, Nidul; Dorendrajit, Thingam

    2016-01-01

    This study presents fuzzy logic based on-line fault detection and classification of transmission lines using Programmable Automation and Control technology based National Instruments Compact Reconfigurable I/O (CRIO) devices. The LabVIEW software combined with CRIO can perform real-time data acquisition on the transmission line. When a fault occurs in the system, current waveforms are distorted due to transients, and their pattern changes according to the type of fault. The three-phase alternating current, zero-sequence, and positive-sequence current data generated by LabVIEW through CRIO-9067 are processed directly for relaying. The results show that the proposed technique is capable of correct tripping action and classification of the fault type at high speed, and can therefore be employed in practical applications.
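
    A minimal sketch of the fuzzy classification idea, with invented membership breakpoints rather than the authors' rule base: per-phase current magnitudes vote for which phases are faulted, and the zero-sequence current decides ground involvement.

        def high(x, lo=1.5, hi=3.0):
            """Fuzzy membership for 'current is high' (per unit), ramping lo -> hi."""
            return min(1.0, max(0.0, (x - lo) / (hi - lo)))

        def classify(ia, ib, ic, i0):
            mu = {"A": high(ia), "B": high(ib), "C": high(ic)}
            ground = high(i0, 0.1, 0.5)       # zero-sequence implies a ground path
            faulted = [p for p, m in mu.items() if m > 0.5]
            if not faulted:
                return "no fault"
            return "".join(faulted) + ("G" if ground > 0.5 else "")

        print(classify(2.8, 0.9, 1.0, 0.6))   # -> "AG": phase-A-to-ground fault

    Ramped memberships rather than hard thresholds are what make such classifiers robust to the distorted, transient-laden waveforms the abstract describes.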

  20. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
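
    Stripped of HARP's fault/error-handling models and variance reduction, the Monte Carlo idea reduces to sampling component lifetimes and counting system failures. The sketch below estimates the unreliability of a 2-of-3 voting triad with Weibull lifetimes (a non-constant failure rate); all parameters are illustrative. It also shows why variance reduction matters: for highly reliable systems, naive sampling almost never observes a failure.

        import random

        def unreliability(t_mission, trials=100_000, scale=5_000.0, shape=1.5):
            """P(system failure by t_mission) for a 2-of-3 triad, Weibull lifetimes."""
            fails = 0
            for _ in range(trials):
                times = sorted(random.weibullvariate(scale, shape) for _ in range(3))
                if times[1] < t_mission:      # second unit failure defeats the vote
                    fails += 1
            return fails / trials

        print(unreliability(1_000.0))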

  1. An architecture for the development of real-time fault diagnosis systems using model-based reasoning

    NASA Technical Reports Server (NTRS)

    Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday

    1992-01-01

    Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.

  2. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  3. Breaking down barriers in cooperative fault management: Temporal and functional information displays

    NASA Technical Reports Server (NTRS)

    Potter, Scott S.; Woods, David D.

    1994-01-01

    At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process referred to as the monitored process - MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP and diagnosis must be performed in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.

  4. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding computational graphs of tasks in the system architecture and for reconfiguration of these tasks after a failure has occurred. Computational structures represented by a path and by a complete binary tree were considered, and mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  5. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast, and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is incorporated into it. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a fault tolerance module, all of which interact through a central graphical user interface.

  6. Recording Plate Boundary Deformation Processes Around The San Jacinto Fault, California

    NASA Astrophysics Data System (ADS)

    Hodgkinson, K.; Mencin, D.; Borsa, A.; Fox, O.; Walls, C.; Van Boskirk, E.

    2012-04-01

    The San Jacinto Fault is one of the major faults which form the San Andreas Fault System in southern California. The fault, which lies to the west of the San Andreas, is one of the most active in the region. While slip rates are higher along the San Andreas (23-37 mm/yr, compared to 12-22 mm/yr along the San Jacinto), there have been 11 earthquakes of M6 and greater along the San Jacinto in the past 150 years, while there have been none of this magnitude on the San Andreas in this region. UNAVCO has installed an array of geodetic and seismic instruments along the San Jacinto as part of the Plate Boundary Observatory (PBO). The network includes 25 GPS stations within 20 km of the surface trace, with a concentration of borehole instrumentation in the Anza region, where there are nine borehole sites. Most of the borehole sites contain a GTSM21 4-component strainmeter, a Sonde-2 seismometer, a MEMS accelerometer, and a pore pressure sensor. Thus, the array has the capability to capture plate boundary deformation processes with periods of milliseconds (seismic) to decades (GPS). On July 7th, 2010, an M5.4 earthquake occurred on the Coyote Creek segment of the fault. The event was preceded by an M4.9 earthquake in the same area four weeks earlier and by four earthquakes of M5 and greater within a 20 km radius of the epicenter in the past 50 years. In this study we will present the signals recorded by the different instrument types for the July 7th, 2010 event and will compare the coseismic displacements recorded by the GPS and strainmeters with the displacement field predicted by Okada [1992]. All data recorded as part of the PBO observatory are publicly available from UNAVCO, the IRIS Data Management Center, and the Northern California Earthquake Data Center.

  7. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  8. A graphical language for reliability model generation

    NASA Technical Reports Server (NTRS)

    Howell, Sandra V.; Bavuso, Salvatore J.; Haley, Pamela J.

    1990-01-01

    A graphical interface capability of the hybrid automated reliability predictor (HARP) is described. The graphics-oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault tree gates, including sequence dependency gates, or by a Markov chain. With this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the Graphical Kernel System (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing.

  9. Hypothetical Scenario Generator for Fault-Tolerant Diagnosis

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    The Hypothetical Scenario Generator for Fault-tolerant Diagnostics (HSG) is an algorithm being developed in conjunction with other components of artificial-intelligence systems for automated diagnosis and prognosis of faults in spacecraft, aircraft, and other complex engineering systems. By incorporating prognostic capabilities along with advanced diagnostic capabilities, these developments hold promise to increase the safety and affordability of the affected engineering systems by making it possible to obtain timely and accurate information on the statuses of the systems and to predict impending failures well in advance. The HSG is a specific instance of a hypothetical-scenario generator that implements an innovative approach for performing diagnostic reasoning when data are missing. The special purpose served by the HSG is to (1) look for all possible ways in which the present state of the engineering system can be mapped with respect to a given model and (2) generate a prioritized set of future possible states and the scenarios of which they are parts.

  10. Designing and validating the joint battlespace infosphere

    NASA Astrophysics Data System (ADS)

    Peterson, Gregory D.; Alexander, W. Perry; Birdwell, J. Douglas

    2001-08-01

    Fielding and managing the dynamic, complex information systems infrastructure necessary for defense operations presents significant opportunities for revolutionary improvements in capabilities. An example of this technology trend is the creation and validation of the Joint Battlespace Infosphere (JBI) being developed by the Air Force Research Lab. The JBI is a system of systems that integrates, aggregates, and distributes information to users at all echelons, from the command center to the battlefield. The JBI is a key enabler of meeting the Air Force's Joint Vision 2010 core competencies such as Information Superiority, by providing increased situational awareness, planning capabilities, and dynamic execution. At the same time, creating this new operational environment introduces significant risk due to an increased dependency on computational and communications infrastructure combined with more sophisticated and frequent threats. Hence, the challenge facing the nation is the most effective means to exploit new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.

  11. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG), has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show the good capability of the HOG feature extraction technique towards non-destructive quality inspection, with appreciably low false alarm rates as compared to the other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competitive industrial growth.

  12. Efficient Conduct of Individual Flights and Air Traffic or Optimum Utilization of Modern Technology for the Overall Benefit of Civil and Military Airspace Users. Conference Proceedings of the Symposium of the Guidance and Control Panel (42nd) Held in Brussels, Belgium on 10-13 June 1986.

    DTIC Science & Technology

    1986-12-01

    subjects such as: the need to have reliable systems which will be "fault-tolerant"; the man/machine relationship; compatibility between systems. 8. THE... be worked out and that acceptable solutions can be found as regards the man/machine relationship. It will also be necessary to resolve the problems... management functions of the system should be essentially ground-based. 9. Capacity for coping with demands. 10. ATIM capability and relationship with

  13. The need for artificial intelligence as an aid in controlling a manufacturing operation

    NASA Astrophysics Data System (ADS)

    Weyand, J.

    AI applications to industrial production and planning are discussed and illustrated with diagrams and drawings. Applications examined include flexible automation of manufacturing processes (robots with open manual control, robots programmable to meet product specifications, self-regulated robots, and robots capable of learning), flexible fault detection and diagnostics, production control, and overall planning and management (product strategies, marketing, determination of development capacity, site selection, project organization, and technology investment strategies). For the case of robots, problems in the design and operation of a state-of-the-art machine-tool cell (for hole boring, milling, and joining) are analyzed in detail.

  14. Detection of arcing location on photovoltaic systems using filters

    DOEpatents

    Johnson, Jay

    2018-02-20

    The present invention relates to photovoltaic systems capable of identifying the location of an arc-fault. In particular, such systems include a unique filter connected to each photovoltaic (PV) string, thereby providing a unique filtered noise profile associated with a particular PV string. Also described herein are methods for identifying and isolating such arc-faults.

  15. Planetary Gearbox Fault Detection Using Vibration Separation Techniques

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; LaBerge, Kelsen E.; Ehinger, Ryan T.; Fetty, Jason

    2011-01-01

    Studies were performed to demonstrate the capability to detect planetary gear and bearing faults in helicopter main-rotor transmissions. The work supported the Operations Support and Sustainment (OSST) program with the U.S. Army Aviation Applied Technology Directorate (AATD) and Bell Helicopter Textron. Vibration data from the OH-58C planetary system were collected on a healthy transmission as well as with various seeded-fault components. Planetary fault detection algorithms were used with the collected data to evaluate fault detection effectiveness. Planet gear tooth cracks and spalls were detectable using the vibration separation techniques. Sun gear tooth cracks were not discernibly detectable from the vibration separation process. Sun gear tooth spall defects were detectable. Ring gear tooth cracks were only clearly detectable by accelerometers located near the crack location or directly across from the crack. Enveloping provided an effective method for planet bearing inner- and outer-race spalling fault detection.
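
    The enveloping step that proved effective for planet-bearing faults can be sketched in a few lines: demodulate the resonance-band vibration with the Hilbert transform, then look for the defect rate in the envelope spectrum. The resonance frequency, defect rate, and signal model below are invented for illustration and are not taken from the OH-58C data.

        import numpy as np
        from scipy.signal import hilbert

        fs = 20_000.0
        t = np.arange(0, 1.0, 1 / fs)
        bpfo = 87.0                                  # assumed outer-race defect rate, Hz
        rng = np.random.default_rng(0)
        impacts = (np.sin(2 * np.pi * bpfo * t) > 0.9).astype(float)
        signal = np.sin(2 * np.pi * 3_200 * t) * (0.2 + impacts) \
                 + 0.05 * rng.standard_normal(t.size)   # resonance excited by impacts

        envelope = np.abs(hilbert(signal))           # demodulate the carrier
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
        print("dominant envelope line: %.1f Hz" % freqs[spectrum.argmax()])

    The defect rate (and its harmonics) appears in the envelope spectrum even when it is buried in the raw spectrum, which is dominated by gear-mesh and resonance content.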

  16. Fault-tolerant building-block computer study

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    Ultra-reliable core computers are required for improving the reliability of complex military systems. Such computers can provide reliable fault diagnosis, failure circumvention, and, in some cases, serve as an automated repairman for their host systems. A small set of building-block circuits, which can be implemented as single very large scale integration devices and used with off-the-shelf microprocessors and memories to build self-checking computer modules (SCCM), is described. Each SCCM is a microcomputer capable of detecting its own faults during normal operation and designed to communicate with other identical modules over one or more Mil Standard 1553A buses. Several SCCMs can be connected into a network with backup spares to provide fault-tolerant operation, i.e., automated recovery from faults. Alternative fault-tolerant SCCM configurations are discussed along with the cost and reliability associated with their implementation.

  17. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the software community. This paper discusses the findings and TR suite informing the FM domain in best practices for FM architectural design, visibility observations, and methods employed for IV&V and mission assurance.

  18. Avionic Air Data Sensors Fault Detection and Isolation by means of Singular Perturbation and Geometric Approach

    PubMed Central

    2017-01-01

    Singular Perturbations represent an advantageous theory to deal with systems characterized by a two-time scale separation, such as the longitudinal dynamics of aircraft which are called phugoid and short period. In this work, the combination of the NonLinear Geometric Approach and the Singular Perturbations leads to an innovative Fault Detection and Isolation system dedicated to the isolation of faults affecting the air data system of a general aviation aircraft. The isolation capabilities, obtained by means of the approach proposed in this work, allow for the solution of a fault isolation problem otherwise not solvable by means of standard geometric techniques. Extensive Monte-Carlo simulations, exploiting a high fidelity aircraft simulator, show the effectiveness of the proposed Fault Detection and Isolation system. PMID:28946673

  19. Knowledge representation requirements for model sharing between model-based reasoning and simulation in process flow domains

    NASA Technical Reports Server (NTRS)

    Throop, David R.

    1992-01-01

    The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
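
    The requirement can be made concrete with a small sketch: a component specification (here a hypothetical valve, with invented fault-mode names) enumerates the faults that can be instantiated into the model, so that a simulation can be run under each hypothesis and compared against observations.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Valve:
            commanded_open: bool = False
            fault: Optional[str] = None   # one of the anticipated fault modes, or None

            def flow(self, upstream: float) -> float:
                if self.fault == "stuck_closed":
                    return 0.0
                if self.fault == "stuck_open":
                    return upstream
                if self.fault == "leak_through":
                    return upstream if self.commanded_open else 0.05 * upstream
                return upstream if self.commanded_open else 0.0

        # Instantiate a fault hypothesis and simulate; MBR accepts or rejects it
        # by comparing the predicted flow against the sensed flow.
        v = Valve(commanded_open=False, fault="leak_through")
        print(v.flow(10.0))               # -> 0.5: nonzero flow past a closed valve

    A fault the component language cannot express (say, an intermittent leak) can be neither instantiated nor simulated, which is precisely the limitation the paper identifies.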

  20. Late Neogene slip transfer and extension within the curved Whisky Flat fault system central Walker Lane, west-central Nevada

    NASA Astrophysics Data System (ADS)

    Biholar, Alexander Kenneth Casian

    In Whisky Flat of west-central Nevada, northwest-striking faults in the Walker Lane curve to east-northeast orientations at the northern limits of the Mina deflection. This curve in strike results in the formation of a ~685-m-deep depression bounded by north-south-trending, convex-to-the-east range-front faults that, at the apex of fault curvature, are bisected at a high angle by a structural stepover. We use the vertical offset of a late Miocene erosional surface, mapped in the highlands and inferred from gravity depth inversion in the basin, to measure the magnitude of displacement on faults. A N65°W extensional axis determined through fault-slip inversion is used to constrain the direction in displacement models. Through the use of a forward rectilinear displacement model, we document that the complex array of faults is capable of developing with broadly contemporaneous displacements on all structures since the opening of the basin during the Pliocene.

  1. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.

    2012-01-01

    Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. State-of-the-art multi-core processors provide much promise in meeting such challenges, while introducing new fault tolerance problems when applied to space missions. This paper presents software-based schemes that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD) components. For mission- and time-critical applications such as Terrain Relative Navigation (TRN) for planetary or small-body navigation and landing, a range of fault tolerance methods can be adapted by the application. The software methods being investigated include Error Correction Code (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.
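
    Of the methods listed, TMR is the simplest to sketch: run the computation on three cores and take a majority vote, so a single corrupted result is masked. The sketch below simulates this with an injected single-event upset; it is illustrative and not the flight framework.

        from collections import Counter

        def vote(replicas):
            """Majority-vote independently computed results; mask a single fault."""
            winner, votes = Counter(replicas).most_common(1)[0]
            if votes < 2:
                raise RuntimeError("no majority: uncorrectable fault")
            return winner

        def checksum(data):
            return sum(data) & 0xFFFF

        replicas = [checksum(range(100)), checksum(range(100)), 0xDEAD]  # one core upset
        print(vote(replicas))             # the upset result is outvoted

    ABFT goes further by exploiting algorithm structure (for example, checksummed matrix operations) so that errors can be detected, and sometimes corrected, at far less than the 3x cost of TMR.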

  2. Operations management system advanced automation: Fault detection isolation and recovery prototyping

    NASA Technical Reports Server (NTRS)

    Hanson, Matt

    1990-01-01

    The purpose of this project is to address the global fault detection, isolation, and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies, in addition to traditional software methods, to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.

  3. ARGES: an Expert System for Fault Diagnosis Within Space-Based ECLS Systems

    NASA Technical Reports Server (NTRS)

    Pachura, David W.; Suleiman, Salem A.; Mendler, Andrew P.

    1988-01-01

    ARGES (Atmospheric Revitalization Group Expert System) is a demonstration prototype expert system for fault management for the Solid Amine, Water Desorbed (SAWD) CO2 removal assembly, associated with the Environmental Control and Life Support (ECLS) System. ARGES monitors and reduces data in real time from either the SAWD controller or a simulation of the SAWD assembly. It can detect gradual degradations or predict failures. This allows graceful shutdown and scheduled maintenance, which reduces crew maintenance overhead. Status and fault information is presented in a user interface that simulates what would be seen by a crewperson. The user interface employs animated color graphics and an object-oriented approach to provide detailed status information, fault identification, and explanation of reasoning in a rapidly assimilated manner. In addition, ARGES recommends possible courses of action for predicted and actual faults. ARGES is seen as a forerunner of AI-based fault management systems for manned space systems.

  4. Enhancing the LVRT Capability of PMSG-Based Wind Turbines Based on R-SFCL

    NASA Astrophysics Data System (ADS)

    Xu, Lin; Lin, Ruixing; Ding, Lijie; Huang, Chunjun

    2018-03-01

    A novel low voltage ride-through (LVRT) scheme for PMSG-based wind turbines, based on the Resistor Superconducting Fault Current Limiter (R-SFCL), is proposed in this paper. The LVRT scheme consists mainly of an R-SFCL placed in series between the transformer and the Grid Side Converter (GSC), and its basic modelling is discussed in detail. The proposed LVRT scheme is implemented to interact with the PMSG model in PSCAD/EMTDC under a three-phase short-circuit fault condition; the results show that the proposed R-SFCL-based scheme can improve transient performance and LVRT capability, strengthening the grid connection of wind turbines.

  5. Flight test of a full authority Digital Electronic Engine Control system in an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Barrett, W. J.; Rembold, J. P.; Burcham, F. W.; Myers, L.

    1981-01-01

    The Digital Electronic Engine Control (DEEC) system considered is a relatively low-cost, full-authority digital control system containing selectively redundant components and fault detection logic, with the capability to accommodate faults down to various levels of operational capability. The DEEC digital control system is built around a 16-bit CMOS microprocessor with a 1.2-microsecond cycle time and approximately 14K of available memory. Attention is given to the control mode, component bench testing, closed-loop bench testing, a failure mode and effects analysis, sea-level engine testing, simulated altitude engine testing, flight testing, the data system, the cockpit, and real-time display.

  6. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack, from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.

  7. Health Management Applications for International Space Station

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Duncavage, Dan

    2005-01-01

    Traditional mission and vehicle management involves teams of highly trained specialists monitoring vehicle status and crew activities, responding rapidly to any anomalies encountered during operations. These teams work from the Mission Control Center and have access to engineering support teams with specialized expertise in International Space Station (ISS) subsystems. Integrated System Health Management (ISHM) applications can significantly augment these capabilities by providing enhanced monitoring, prognostic and diagnostic tools for critical decision support and mission management. The Intelligent Systems Division of NASA Ames Research Center is developing many prototype applications using model-based reasoning, data mining and simulation, working with Mission Control through the ISHM Testbed and Prototypes Project. This paper will briefly describe information technology that supports current mission management practice, and will extend this to a vision for future mission control workflow incorporating new ISHM applications. It will describe ISHM applications currently under development at NASA and will define technical approaches for implementing our vision of future human exploration mission management incorporating artificial intelligence and distributed web service architectures using specific examples. Several prototypes are under development, each highlighting a different computational approach. The ISStrider application allows in-depth analysis of Caution and Warning (C&W) events by correlating real-time telemetry with the logical fault trees used to define off-nominal events. The application uses live telemetry data and the Livingstone diagnostic inference engine to display the specific parameters and fault trees that generated the C&W event, allowing a flight controller to identify the root cause of the event from thousands of possibilities by simply navigating animated fault tree models on their workstation. SimStation models the functional power flow for the ISS Electrical Power System and can predict power balance for nominal and off-nominal conditions. SimStation uses real-time telemetry data to keep detailed computational physics models synchronized with actual ISS power system state. In the event of failure, the application can then rapidly diagnose root cause, predict future resource levels, and even correlate technical documents relevant to the specific failure. These advanced computational models will allow better insight and more precise control of ISS subsystems, increasing safety margins by speeding up anomaly resolution and reducing engineering team effort and cost. This technology will make operating ISS more efficient and is directly applicable to next-generation exploration missions and Crew Exploration Vehicles.

  8. Formal design and verification of a reliable computing platform for real-time control. Phase 2: Results

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Divito, Ben L.

    1992-01-01

    The design and formal verification of the Reliable Computing Platform (RCP), a fault tolerant computing system for digital flight control applications is presented. The RCP uses N-Multiply Redundant (NMR) style redundancy to mask faults and internal majority voting to flush the effects of transient faults. The system is formally specified and verified using the Ehdm verification system. A major goal of this work is to provide the system with significant capability to withstand the effects of High Intensity Radiated Fields (HIRF).

  9. New methods for the condition monitoring of level crossings

    NASA Astrophysics Data System (ADS)

    García Márquez, Fausto Pedro; Pedregal, Diego J.; Roberts, Clive

    2015-04-01

    Level crossings represent a high risk for railway systems. This paper demonstrates the potential to improve maintenance management through the use of intelligent condition monitoring coupled with reliability centred maintenance (RCM). RCM combines advanced electronics, control, computing and communication technologies to address the multiple objectives of cost effectiveness, improved quality, reliability and services. RCM collects digital and analogue signals utilising distributed transducers connected to either point-to-point or digital bus communication links. Assets in many industries use data logging capable of providing post-failure diagnostic support, but to date little use has been made of combined qualitative and quantitative fault detection techniques. The research takes the hydraulic railway level crossing barrier (LCB) system as a case study and develops a generic strategy for failure analysis, data acquisition and incipient fault detection. For each barrier, the hydraulic characteristics, the motor's current and voltage, the hydraulic pressure, and the barrier's position are acquired. In order to acquire the data at a central point efficiently, without errors, a distributed single-cable Fieldbus is utilised. This allows the connection of all sensors through the project's proprietary communication nodes to a high-speed bus. The condition monitoring system developed in this paper detects faults by comparing what can be considered the 'normal' or 'expected' shape of a signal with the actual shape observed as new data become available. ARIMA (autoregressive integrated moving average) models were employed for detecting faults. The statistical tests known as Jarque-Bera and Ljung-Box were considered for testing the model.
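
    A minimal version of that residual test, assuming statsmodels and a synthetic signature in place of the recorded LCB data: fit an ARIMA model to a healthy cycle, then flag samples where a new observation departs from the model's expected shape by more than a threshold.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        n = 200
        healthy = 5 + np.sin(np.linspace(0, 3 * np.pi, n)) + 0.05 * rng.standard_normal(n)

        fit = ARIMA(healthy, order=(2, 0, 1)).fit()   # learns the 'expected' shape
        threshold = 4 * fit.resid.std()               # from healthy residual scatter

        observed = healthy + np.where(np.arange(n) > 150, 0.8, 0.0)  # incipient fault
        residual = observed - fit.fittedvalues        # expected vs. actual shape
        print("fault flagged at samples:", np.flatnonzero(np.abs(residual) > threshold)[:5])

    The Ljung-Box and Jarque-Bera tests mentioned above play the complementary role of confirming that the healthy-model residuals are uncorrelated and near-Gaussian, so that a fixed threshold on them is meaningful.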

  10. Geological indicators of a suspected seismic source from Peninsular India

    NASA Astrophysics Data System (ADS)

    Singh, Yogendra; John, Biju; P, Ganapathy G.; S, Divyalakshmi K.

    2014-05-01

    An increase in seismicity in Peninsular India during the last few decades has initiated various studies for identifying seismogenic structures and their behaviour. Even though a few earthquakes occurred on well-defined structures, many of them occurred at unexpected locations where no previous seismicity had been reported. However, studies subsequent to the 1993 Latur earthquake, as well as studies in different parts of Peninsular India, have led to the identification of pre-existing faults that have been activated in the past. Studies elsewhere in cratonic hinterlands also show that damaging earthquakes occur on pre-existing faults with recurrence periods of tens of thousands of years. Studies subsequent to the 1989 Wadakkancheri earthquake (M=4.3) identified the Desamangalam fault, which is capable of generating earthquakes. However, it is noted that a number of later events have occurred well south of the Desamangalam fault. We identified a set of NW-SE trending lineaments which influence the drainage pattern of the area. A network of paleochannels is also observed in the remote sensing analysis and field studies in this area. Regionally, these lineaments meet one of the major lineaments of central Kerala, the Periyar lineament, in the south. Charnockite constitutes the major rock type of the region. In places these rocks have developed strong foliation parallel to the lineament direction. Detailed field studies identified oblique movement (reverse and strike-slip components) along NW-SE trending faults which dip south-west. The studies also found NNE-SSW trending vertical faults showing strike-slip movement. The damage zones of each of these faults bear mineral precipitations and gouge injections of episodic nature. The presence of loose gouge may indicate that the faulting is a much later development in the brittle regime. The sense of movement of the observed faults may indicate that the various river/drainage abandonments observed in the area are due to movement on these faults. The correlation of the ongoing earthquake activity with these faults, their sense of movement consistent with the present stress condition of Peninsular India, their episodic nature, and their influence on the drainage network of the area may indicate that these faults are adjusting to the present tectonic regime and are capable of producing moderate events. Key words: Peninsular India, stress regime, lineaments, brittle deformation

  11. Simulating spontaneous aseismic and seismic slip events on evolving faults

    NASA Astrophysics Data System (ADS)

    Herrendörfer, Robert; van Dinther, Ylona; Pranger, Casper; Gerya, Taras

    2017-04-01

    Plate motion along tectonic boundaries is accommodated by different slip modes: steady creep, seismic slip and slow slip transients. Owing mainly to indirect observations and difficulties in scaling results from laboratory experiments to nature, it remains enigmatic which fault conditions favour certain slip modes. Therefore, we are developing a numerical modelling approach that is capable of simulating different slip modes together with the long-term fault evolution in a large-scale tectonic setting. We extend the 2D, continuum mechanics-based, visco-elasto-plastic thermo-mechanical model that was designed to simulate slip transients in large-scale geodynamic simulations (van Dinther et al., JGR, 2013). We improve the numerical approach to accurately treat the non-linear problem of plasticity (see also EGU 2017 abstract by Pranger et al.). To resolve a wide slip rate spectrum on evolving faults, we develop an invariant reformulation of the conventional rate-and-state dependent friction (RSF) and adapt the time step (Lapusta et al., JGR, 2000). A crucial part of this development is a conceptual ductile fault zone model that relates slip rates along discrete planes to the effective macroscopic plastic strain rates in the continuum. We test our implementation first in a simple 2D setup with a single fault zone that has a predefined initial thickness. Results show that, in the case of steady creep and very slow slip transients, deformation localizes to a bell-shaped strain rate profile across the fault zone, which suggests that a length scale across the fault zone may exist. This continuum length scale would overcome the common mesh-dependency in plasticity simulations and calls into question the conventional treatment of aseismic slip on infinitely thin fault zones. We test the introduction of a diffusion term (similar to the damage description in Lyakhovsky et al., JMPS, 2011) into the state evolution equation and its effect on (de-)localization during faster slip events. We compare the slip spectrum in our simulations to conventional RSF simulations (Liu and Rice, JGR, 2007). We further demonstrate the capability of simulating the evolution of a fault zone and the simultaneous occurrence of slip transients. From small random initial distributions of the state variable in an otherwise homogeneous medium, deformation localizes and forms curved zones of reduced state. These spontaneously formed fault zones host slip transients, which in turn contribute to the growth of the fault zone.
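
    For readers unfamiliar with the conventional RSF formulation the authors reformulate, the sketch below integrates the standard aging law through an imposed velocity step; this is a minimal illustration with laboratory-scale parameter values, not anything taken from the paper.

        import numpy as np

        # Rate-and-state friction parameters (illustrative laboratory-scale values)
        mu0, a, b = 0.6, 0.010, 0.015
        V0, Dc = 1e-6, 1e-5            # reference slip rate (m/s), characteristic slip (m)

        def velocity_step(V_before, V_after, t_end=200.0, dt=1e-3):
            """Integrate the aging-law state evolution through an imposed
            velocity step and return the friction history mu(t)."""
            theta = Dc / V_before                      # steady state before the step
            mus, V = [], V_after
            for _ in range(int(t_end / dt)):
                theta += dt * (1.0 - V * theta / Dc)   # aging law: d(theta)/dt = 1 - V*theta/Dc
                mus.append(mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc))
            return np.array(mus)

        mu = velocity_step(1e-6, 1e-5)                 # one-decade velocity step
        # Direct effect (peak) followed by evolution to a lower steady state,
        # i.e. velocity-weakening behaviour since b > a.
        print("peak:", mu.max().round(4), "-> new steady state:", mu[-1].round(4))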

  12. Technologies for unattended network operations

    NASA Technical Reports Server (NTRS)

    Jaworski, Allan; Odubiyi, Jide; Holdridge, Mark; Zuzek, John

    1991-01-01

    The necessary network management functions for a telecommunications, navigation and information management (TNIM) system are described in the framework of an extension of the ISO model for communications network management. Various technologies that could substantially reduce the need for TNIM network management, automate manpower-intensive functions, and deal with synchronization and control at interplanetary distances are presented. Specific technologies addressed include the use of the ISO Common Management Information Protocol, distributed artificial intelligence for network synchronization and fault management, and fault-tolerant systems engineering.

  13. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.

    PubMed

    Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto

    2017-09-29

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, a timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. In contrast to classic UAV sensor fault detection algorithms based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantage of this algorithm is that it combines real-time, model-free residual analysis of Kalman Filter (KF) estimates with the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
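
    A minimal sketch of the residual-analysis stage, assuming a scalar constant-value sensor model; a fixed innovation gate stands in for the trained ANFIS decision system, and all signals and thresholds below are hypothetical.

        import numpy as np

        def kf_innovation_monitor(measurements, q=1e-4, r=0.01, gate=3.0):
            """Scalar Kalman filter; flags samples whose normalized innovation
            (residual) exceeds the gate, a stand-in for the ANFIS stage."""
            x, p = measurements[0], 1.0
            alarms = []
            for z in measurements[1:]:
                p += q                         # predict
                s = p + r                      # innovation variance
                nu = z - x                     # innovation (residual)
                alarms.append(abs(nu) / np.sqrt(s) > gate)
                k = p / s                      # update
                x += k * nu
                p *= (1 - k)
            return np.array(alarms)

        z = 5.0 + 0.1 * np.random.randn(400)   # hypothetical sensor channel
        z[250] += 1.5                          # simulated single-sample sensor fault
        print("alarm indices:", np.flatnonzero(kf_innovation_monitor(z)) + 1)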

  14. Optimal Management of Redundant Control Authority for Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and the operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies with the use of a reliability criterion for redundancy management, and therefore leads to a high overall system reliability.

  15. Mobile Robot Lab Project to Introduce Engineering Students to Fault Diagnosis in Mechatronic Systems

    ERIC Educational Resources Information Center

    Gómez-de-Gabriel, Jesús Manuel; Mandow, Anthony; Fernández-Lozano, Jesús; García-Cerezo, Alfonso

    2015-01-01

    This paper proposes lab work for learning fault detection and diagnosis (FDD) in mechatronic systems. These skills are important for engineering education because FDD is a key capability of competitive processes and products. The intended outcome of the lab work is that students become aware of the importance of faulty conditions and learn to…

  16. Autonomous Satellite Command and Control Through the World Wide Web. Phase 3

    NASA Technical Reports Server (NTRS)

    Cantwell, Brian; Twiggs, Robert

    1998-01-01

    The Automated Space System Experimental Testbed (ASSET) system is a simple yet comprehensive real-world operations network being developed. Phase 3 of the ASSET Project ran January-December 1997 and is the subject of this report. This phase permitted SSDL and its project partners to expand the ASSET system in a variety of ways. These added capabilities included the advancement of ground station capabilities, the adaptation of spacecraft on-board software, and the expansion of capabilities of the ASSET management algorithms. Specific goals of Phase 3 were: (1) Extend Web-based goal-level commanding for both the payload PI and the spacecraft engineer. (2) Support prioritized handling of multiple Principal Investigators (PIs) as well as associated payload experimenters. (3) Expand the number and types of experiments supported by the ASSET system and its associated spacecraft. (4) Implement more advanced resource management, modeling and fault management capabilities that integrate the space and ground segments of the space system hardware. (5) Implement a beacon monitoring test. (6) Implement an experimental blackboard controller for space system management. (7) Further define typical ground station developments required for Internet-based remote control and for full system automation of the PI-to-spacecraft link. Each of these goals is examined. Significant sections of this report were also published as a conference paper. Several publications produced in support of this grant are included as attachments. Titles include: 1) Experimental Initiatives in Space System Operations; 2) The ASSET Client Interface: Balancing High Level Specification with Low Level Control; 3) Specifying Spacecraft Operations At The Product/Service Level; 4) The Design of a Highly Configurable, Reusable Operating System for Testbed Satellites; 5) Automated Health Operations For The Sapphire Spacecraft; 6) Engineering Data Summaries for Space Missions; and 7) Experiments In Automated Health Assessment And Notification For The Sapphire Microsatellite.

  17. A multidisciplinary approach to characterize the geometry of active faults: the example of Mt. Massico, Southern Italy

    NASA Astrophysics Data System (ADS)

    Luiso, P.; Paoletti, V.; Nappi, R.; La Manna, M.; Cella, F.; Gaudiosi, G.; Fedi, M.; Iorio, M.

    2018-06-01

    We present the results of a multidisciplinary and multiscale study at Mt. Massico, Southern Italy. Mt. Massico is a carbonate horst located along the Campanian-Latial margin of the Tyrrhenian basin, bordered by two main NE-SW systems of faults, and by NW-SE and N-S trending faults. Our analysis deals with the modelling of the main NE-SW faults. These faults were capable during the Plio-Pleistocene and are still active today, albeit with scarce, low-energy seismicity (maximum Mw = 4.8). We inferred the pattern of the fault planes through a combined interpretation of 2-D hypocentral sections, a multiscale analysis of the gravity field and geochemical data. This allowed us to characterize the geometry of these faults and infer their large depth extent. This region shows very striking gravimetric signatures, well-known Quaternary faults, moderate seismicity and a localized geothermal fluid rise. Thus, this analysis represents a valid case study for testing the effectiveness of a multidisciplinary approach, and for employing it in areas with buried and/or silent faults of potentially high hazard, such as in the Apennine chain.

  18. FAULT PROPAGATION AND EFFECTS ANALYSIS FOR DESIGNING AN ONLINE MONITORING SYSTEM FOR THE SECONDARY LOOP OF A NUCLEAR POWER PLANT PART OF A HYBRID ENERGY SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Huijuan; Diao, Xiaoxu; Li, Boyuan

    This paper studies the propagation and effects of faults of critical components in the secondary loop of a nuclear power plant within a Nuclear Hybrid Energy System (NHES). This information is used to design an online monitoring (OLM) system capable of detecting and forecasting faults that are likely to occur during NHES operation. In this research, the causes, features, and effects of possible faults are investigated by simulating the propagation of faults in the secondary loop. The simulation is accomplished using Integrated System Failure Analysis (ISFA). ISFA is used for analyzing hardware and software faults during the conceptual design phase. In this paper, the models of system components required by ISFA are initially constructed. Then, the fault propagation analysis is implemented, conducted under the bounds set by acceptance criteria derived from the design of an OLM system. The result of the fault simulation is utilized to build a database for fault detection and diagnosis, provide preventive measures, and propose an optimization plan for the OLM system.

  19. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with decisions in redundancy management due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that enhanced overall system reliability can be achieved with a control law of superior robustness, with an estimator of higher resolution, and with a control performance requirement of lesser stringency.
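
    The role of coverage can be made concrete with the textbook duplex example (a standard illustration, not the model of the paper): a two-channel system survives when both channels work, or when one fails and the failure is covered, i.e. correctly detected and isolated by redundancy management.

        def duplex_reliability(r_channel, coverage):
            """R_sys = R^2 + 2*c*R*(1-R): both channels survive, or one fails
            and redundancy management covers the failure."""
            return r_channel**2 + 2.0 * coverage * r_channel * (1.0 - r_channel)

        # Even small coverage shortfalls dominate overall system reliability.
        for c in (0.90, 0.99, 0.999):
            print(f"coverage={c}: R_sys={duplex_reliability(0.99, c):.6f}")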

  20. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft provides the measurements that are applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
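
    The system-level detection concept can be illustrated as follows, assuming hypothetical residuals and noise levels: under the no-fault hypothesis, the sum of squared normalized port residuals is chi-squared distributed, so a failure is declared when the statistic exceeds a quantile of that distribution.

        import numpy as np
        from scipy.stats import chi2

        def chi_square_fault_test(residuals, sigma, alpha=1e-3):
            """System-level failure detection from port residuals."""
            stat = np.sum((residuals / sigma) ** 2)
            threshold = chi2.ppf(1.0 - alpha, df=len(residuals))
            return stat, threshold, stat > threshold

        # 25 hypothetical port residuals (measured minus model pressure), in Pa
        sigma = 5.0                                  # assumed port noise std
        residuals = sigma * np.random.randn(25)
        residuals[7] += 60.0                         # simulated single-port failure
        stat, thr, failed = chi_square_fault_test(residuals, sigma)
        print(f"chi2={stat:.1f} threshold={thr:.1f} fault={failed}")
        # Isolation: recompute leaving one port out at a time; the port whose
        # removal drops the statistic below threshold is the failed one.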

  1. Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    2014-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on the development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features, based on technical examinations, that can be used to detect a specific fault type. At the most basic level, fault signatures are comprised of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.

  2. Comparative analysis of neural network and regression based condition monitoring approaches for wind turbine fault detection

    NASA Astrophysics Data System (ADS)

    Schlechtingen, Meik; Ferreira Santos, Ilmar

    2011-07-01

    This paper presents the research results of a comparison of three different model-based approaches for wind turbine fault detection in online SCADA data, applying the developed models to five real measured faults and anomalies. The regression-based model, as the simplest approach to building a normal behavior model, is compared to two artificial neural network based approaches: a full signal reconstruction and an autoregressive normal behavior model. Based on a real time series containing two generator bearing damages, the capability of identifying the incipient fault prior to the actual failure is investigated. The period after the first bearing damage is used to develop the three normal behavior models. The developed or trained models are used to investigate how the second damage manifests in the prediction error. Furthermore, the full signal reconstruction and the autoregressive approach are applied to further real time series containing gearbox bearing damages and stator temperature anomalies. The comparison revealed that all three models are capable of detecting incipient faults. However, they differ in the effort required for model development and in the remaining operational time after the first indication of damage. The general nonlinear neural network approaches outperform the regression model. The remaining seasonality in the regression model's prediction error makes it difficult to detect abnormality and leads to increased alarm levels and thus a shorter remaining operational period. For the bearing damages and the stator anomalies under investigation, the full signal reconstruction neural network gave the best fault visibility and thus led to the highest confidence level.
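
    A minimal sketch of the simplest of the three approaches, the regression-based normal behavior model, with hypothetical SCADA channels and thresholds; the neural network variants would replace the linear fit with a trained network.

        import numpy as np

        def train_nbm(X_healthy, y_healthy):
            """Regression-based normal behavior model: predict a component
            temperature from ambient conditions and power output."""
            A = np.column_stack([X_healthy, np.ones(len(X_healthy))])
            w, *_ = np.linalg.lstsq(A, y_healthy, rcond=None)
            return w

        def prediction_error(X, y, w):
            return y - np.column_stack([X, np.ones(len(X))]) @ w

        # Hypothetical SCADA channels: ambient temp, power -> bearing temp
        n = 1000
        X = np.column_stack([10 + 5 * np.random.randn(n),
                             1500 + 300 * np.random.randn(n)])
        y = 30 + 0.8 * X[:, 0] + 0.01 * X[:, 1] + 0.5 * np.random.randn(n)
        y[700:] += np.linspace(0, 5, 300)       # slowly developing bearing damage
        w = train_nbm(X[:500], y[:500])         # train on the healthy period only
        err = prediction_error(X, y, w)
        ewma = np.zeros(n)
        for i in range(1, n):                   # smooth the error before alarming
            ewma[i] = 0.98 * ewma[i - 1] + 0.02 * err[i]
        print("first alarm at sample", np.argmax(np.abs(ewma) > 1.0))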

  3. Hybrid automated reliability predictor integrated work station (HiREL)

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1991-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer-aided design (CAD) workstation capability. Since a reliability engineer must generally represent a reliability model graphically before it can be solved, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise, visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation gives company management, regulatory agencies, and customers a readily understandable means of expressing a complex model. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.

  4. Current Fault Management Trends in NASA's Planetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.

    2009-01-01

    The key product of this three-day workshop is a NASA White Paper that documents lessons learned from previous missions, recommended best practices, and future opportunities for investments in the fault management domain. This paper summarizes the findings and recommendations that are captured in the White Paper.

  5. Nonlinear dynamic failure process of tunnel-fault system in response to strong seismic event

    NASA Astrophysics Data System (ADS)

    Yang, Zhihua; Lan, Hengxing; Zhang, Yongshuang; Gao, Xing; Li, Langping

    2013-03-01

    Strong earthquakes and faults have significant effects on the stability of underground tunnel structures. This study used a 3-dimensional discrete element model and real ground motion records from the Wenchuan earthquake to investigate the dynamic response of a tunnel-fault system. The typical tunnel-fault system was composed of one planned railway tunnel and one seismically active fault. The discrete numerical model was prudently calibrated by comparison between the field survey and numerical results of ground motion. It was then used to examine detailed quantitative information on the dynamic response characteristics of the tunnel-fault system, including stress distribution, strain, vibration velocity and the tunnel failure process. The intensive tunnel-fault interaction during seismic loading induces dramatic stress redistribution and stress concentration at the intersection of the tunnel and fault. The tunnel-fault system behavior is characterized by a complicated nonlinear dynamic failure process in response to a real strong seismic event. It can be qualitatively divided into 5 main stages in terms of stress, strain and rupturing behaviors: (1) strain localization, (2) rupture initiation, (3) rupture acceleration, (4) spontaneous rupture growth and (5) stabilization. This study provides insight for the further stability estimation of underground tunnel structures under the combined effect of strong earthquakes and faults.

  6. Lessons Learned in the Livingstone 2 on Earth Observing One Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Shulman, Seth

    2005-01-01

    The Livingstone 2 (L2) model-based diagnosis software is a reusable diagnostic tool for monitoring complex systems. In 2004, L2 was integrated with the JPL Autonomous Sciencecraft Experiment (ASE) and deployed on-board Goddard's Earth Observing One (EO-1) remote sensing satellite, to monitor and diagnose the EO-1 space science instruments and imaging sequence. This paper reports on lessons learned from this flight experiment. The goals for this experiment, including validation of minimum success criteria and of a series of diagnostic scenarios, have all been successfully met. Long-term operations in space are on-going, as a test of the maturity of the system, with L2 performance remaining flawless. L2 has demonstrated the ability to track the state of the system during nominal operations, detect simulated abnormalities in operations and isolate failures to their root-cause fault. Specific advances demonstrated include diagnosis of ambiguity groups rather than a single fault candidate; hypothesis revision given new sensor evidence about the state of the system; and the capability to check for faults in a dynamic system without having to wait until the system is quiescent. The major benefits of this advanced health management technology are to increase mission duration and reliability through intelligent fault protection, and robust autonomous operations with reduced dependency on supervisory operations from Earth. The workload for operators will be reduced by telemetry of processed state-of-health information rather than raw data. The long-term vision is that of making diagnosis available to the onboard planner or executive, allowing autonomy software to re-plan in order to work around known component failures. For a system that is expected to evolve substantially over its lifetime, as for the International Space Station, the model-based approach has definite advantages over rule-based expert systems and limit-checking fault protection systems, as these do not scale well. The model-based approach facilitates reuse of the L2 diagnostic software; only the model of the system to be diagnosed and the telemetry monitoring software have to be rebuilt for a new system or expanded for a growing system. The hierarchical L2 model supports modularity and expandability, and as such is a suitable solution for integrated system health management as envisioned for systems-of-systems.

  7. Fleet-Wide Prognostic and Health Management Suite: Asset Fault Signature Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: (1) Diagnostic Advisor, (2) Asset Fault Signature (AFS) Database, (3) Remaining Useful Life Advisor, and (4) Remaining Useful Life Database. The paper focuses on the AFS Database of the FW-PHM Suite, which is used to catalog asset fault signatures. A fault signature is a structured representation of the information that an expert would use to first detect and then verify the occurrence of a specific type of fault. The fault signatures developed to assess the health status of generator step-up transformers are described in the paper. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.

  8. Coordinated Fault Tolerance for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack, from the application, math libraries, and programming language runtime to other common system software such as job schedulers, resource managers, and monitoring tools.

  9. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
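
    The flavor of the selection process can be sketched with a toy genetic algorithm over binary sensor masks; the signature matrix, cost vector, and merit function below are invented stand-ins for the paper's engine models and quality measure.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sensors, n_faults = 12, 6
        # Hypothetical signature matrix: expected response of each candidate
        # sensor to each targeted fault (rows: faults, columns: sensors).
        S = rng.random((n_faults, n_sensors))
        cost = rng.uniform(1.0, 3.0, n_sensors)

        def merit(mask):
            """Toy quality measure: reward fault observability of the selected
            suite, penalize its cost."""
            sel = mask.astype(bool)
            if not sel.any():
                return -1e9
            return np.log(S[:, sel].sum(axis=1)).sum() - 0.5 * cost[sel].sum()

        pop = rng.integers(0, 2, (40, n_sensors))
        for gen in range(200):
            fit = np.array([merit(m) for m in pop])
            parents = pop[np.argsort(fit)[-20:]]            # truncation selection
            kids = parents[rng.integers(0, 20, 40)].copy()
            mates = parents[rng.integers(0, 20, 40)]
            cut = rng.integers(1, n_sensors, 40)            # one-point crossover
            for i in range(40):
                kids[i, cut[i]:] = mates[i, cut[i]:]
            flip = rng.random((40, n_sensors)) < 0.05       # mutation
            pop = np.where(flip, 1 - kids, kids)
        best = pop[np.argmax([merit(m) for m in pop])]
        print("selected sensors:", np.flatnonzero(best))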

  10. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results

    NASA Technical Reports Server (NTRS)

    Glass, B. J. (Editor)

    1992-01-01

    The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  11. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 1. Overview

    NASA Technical Reports Server (NTRS)

    Glass, B. J. (Editor)

    1992-01-01

    The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  12. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results

    NASA Astrophysics Data System (ADS)

    Glass, B. J.

    1992-10-01

    The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  13. Intelligent fault-tolerant controllers

    NASA Technical Reports Server (NTRS)

    Huang, Chien Y.

    1987-01-01

    A system with fault tolerant controls is one that can detect, isolate, and estimate failures and perform necessary control reconfiguration based on this new information. Artificial intelligence (AI) is concerned with semantic processing, and it has evolved to include the topics of expert systems and machine learning. This research represents an attempt to apply AI to fault tolerant controls, hence the name intelligent fault tolerant control (IFTC). A generic solution to the problem is sought, providing a system based on logic in addition to analytical tools, and offering machine learning capabilities. The advantages are that redundant, system-specific algorithms are no longer needed, that reasonableness is used to quickly choose the correct control strategy, and that the system can adapt to new situations by learning about its effects on system dynamics.

  14. Hybrid routing technique for a fault-tolerant, integrated information network

    NASA Technical Reports Server (NTRS)

    Meredith, B. D.

    1986-01-01

    The evolutionary growth of the space station and the diverse activities onboard are expected to require a hierarchy of integrated, local area networks capable of supporting data, voice, and video communications. In addition, fault-tolerant network operation is necessary to protect communications between critical systems attached to the net and to relieve the valuable human resources onboard the space station of time-critical data system repair tasks. A key issue for the design of the fault-tolerant, integrated network is the development of a robust routing algorithm which dynamically selects the optimum communication paths through the net. A routing technique is described that adapts to topological changes in the network to support fault-tolerant operation and system evolvability.

  15. A Classification of Management Teachers

    ERIC Educational Resources Information Center

    Walker, Bob

    1974-01-01

    There are many classifications of management teachers today. Each has his style, successes, and faults. Some of the more prominent are: the company man, the management technician, the man of principle, the evangelist, and the entrepreneur. A mixture of these classifications would be ideal since each by itself has its faults. (DS)

  16. Sensor Data Qualification System (SDQS) Implementation Study

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Melcher, Kevin; Fulton, Christopher; Maul, William

    2009-01-01

    The Sensor Data Qualification System (SDQS) is being developed to provide a sensor fault detection capability for NASA's next-generation launch vehicles. In addition to traditional data qualification techniques (such as limit checks, rate-of-change checks and hardware redundancy checks), SDQS can provide augmented capability through additional techniques that exploit analytical redundancy relationships to enable faster and more sensitive sensor fault detection. This paper documents the results of a study that was conducted to determine the best approach for implementing a SDQS network configuration that spans multiple subsystems, similar to those that may be implemented on future vehicles. The best approach is defined as one that minimizes computational resource requirements without impacting the detection of sensor failures.
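
    A sketch of the two layers of qualification contrasted above, with hypothetical channels and limits: traditional limit and rate-of-change checks, plus an analytical-redundancy check that exploits an assumed physical relationship between two sensors and catches a fault the traditional checks miss.

        import numpy as np

        def limit_check(x, lo, hi):
            """Traditional range check."""
            return (x < lo) | (x > hi)

        def rate_check(x, max_step):
            """Traditional rate-of-change check on consecutive samples."""
            bad = np.zeros_like(x, dtype=bool)
            bad[1:] = np.abs(np.diff(x)) > max_step
            return bad

        def analytic_redundancy_check(p_up, p_down, k, tol):
            """Augmented check exploiting a known physical relationship,
            here a hypothetical fixed pressure ratio between two sensors."""
            return np.abs(p_down - k * p_up) > tol

        p1 = 100 + np.random.randn(200)              # upstream pressure channel
        p2 = 0.8 * p1 + 0.1 * np.random.randn(200)   # downstream channel
        p2[120:] += 3.0                              # bias fault: stays within limits
        flags = (limit_check(p2, 70, 90) | rate_check(p2, 5.0)
                 | analytic_redundancy_check(p1, p2, 0.8, 1.0))
        print("first flagged sample:", np.flatnonzero(flags)[0])   # caught at 120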

  17. Ground Software Maintenance Facility (GSMF) system manual

    NASA Technical Reports Server (NTRS)

    Derrig, D.; Griffith, G.

    1986-01-01

    The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of Spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF Overview; (2) Software Structure; and (3) Fault Isolation Capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The Software Structure section describes the modules of the software structure, including source files, link information, and database files. The Fault Isolation section describes the capabilities of the Ground Computer Interface Device, the Perkin Elmer host, and the MITRA ATE.

  18. Sedimentary evidence of historical and prehistorical earthquakes along the Venta de Bravo Fault System, Acambay Graben (Central Mexico)

    NASA Astrophysics Data System (ADS)

    Lacan, Pierre; Ortuño, María; Audin, Laurence; Perea, Hector; Baize, Stephane; Aguirre-Díaz, Gerardo; Zúñiga, F. Ramón

    2018-03-01

    The Venta de Bravo normal fault is one of the longest structures in the intra-arc fault system of the Trans-Mexican Volcanic Belt. It defines, together with the Pastores Fault, the 80 km long southern margin of the Acambay Graben. We focus on the westernmost segment of the Venta de Bravo Fault and provide new paleoseismological information, evaluate its earthquake history, and assess the related seismic hazard. We analyzed five trenches, distributed at three different sites, in which Holocene surface faulting offsets interbedded volcanoclastic, fluvio-lacustrine and colluvial deposits. Despite the lack of known historical destructive earthquakes along this fault, we found evidence of at least eight earthquakes during the late Quaternary. Our results indicate that this is one of the major seismic sources of the Acambay Graben, capable of producing by itself earthquakes with magnitudes (Mw) up to 6.9, with a slip rate of 0.22-0.24 mm/yr and a recurrence interval between 1940 and 2390 years. In addition, a possible multi-fault rupture of the Venta de Bravo Fault together with other faults of the Acambay Graben could result in an Mw > 7 earthquake. These new slip rates, earthquake recurrence rates, and estimations of slip per event help advance our understanding of the seismic hazard posed by the Venta de Bravo Fault and provide new parameters for further hazard assessment.

  19. Reconfigurable tree architectures using subtree oriented fault tolerance

    NASA Technical Reports Server (NTRS)

    Lowrie, Matthew B.

    1987-01-01

    An approach to the design of reconfigurable tree architecture is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures for both single chip and multichip tree implementations while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and fault tolerance through performance degradation.

  20. IDA/OSD(Institute for Defense Analyses/Office of the Secretary of Defense) Reliability and Maintainability Study. Volume 3. Case Study Analysis.

    DTIC Science & Technology

    1983-11-01

    Acronyms recoverable from this record: FIT (Fault Isolation Test); FMC (Fully Mission Capable); FMEA (Failure Modes and Effects Analysis); FMECA (Failure Modes, Effects and Criticality Analysis). [The remainder of the record is chart residue: percent of F-16 flights without radar failure at NATO and Hill bases, and reliability growth curves for the F-111, F-15, F-16 and F-5E.]

  1. SIRU utilization. Volume 2: Software description and program documentation

    NASA Technical Reports Server (NTRS)

    Oehrle, J.; Whittredge, R.

    1973-01-01

    A complete description of the additional analysis, development and evaluation provided for the SIRU system, as identified in the requirements for the SIRU utilization program, is presented. The SIRU configuration is a modular inertial subsystem with hardware and software features that achieve fault tolerant operational capabilities. The SIRU redundant hardware design is formulated about a six gyro and six accelerometer instrument module package. The modules are mounted in this package so that their measurement input axes form a unique symmetrical pattern that corresponds to the array of perpendiculars to the faces of a regular dodecahedron. This six-axis array provides redundant independent sensing, and the symmetry enables the formulation of an optimal software redundant data processing structure with self-contained fault detection and isolation (FDI) capabilities. Documentation of the additional software and software modifications required to implement the utilization capabilities includes assembly listings and flow charts.
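
    The FDI structure enabled by the dodecahedral symmetry can be sketched with a generic parity-space test (an illustration only, not the actual SIRU flight software): project the six measurements onto the subspace orthogonal to every possible true rate, detect a fault when the parity vector is large, and isolate it by matching against each instrument's signature.

        import numpy as np

        phi = (1 + np.sqrt(5)) / 2
        # Measurement axes along perpendiculars to the faces of a regular
        # dodecahedron (one axis per opposite-face pair).
        H = np.array([[0, 1, phi], [0, 1, -phi], [1, phi, 0],
                      [1, -phi, 0], [phi, 0, 1], [-phi, 0, 1]], float)
        H /= np.linalg.norm(H, axis=1, keepdims=True)

        # Parity projector: annihilates any true angular rate, leaving only
        # noise and instrument faults in the residual.
        P = np.eye(6) - H @ np.linalg.pinv(H)

        omega = np.array([0.02, -0.01, 0.03])        # true body rate, rad/s
        m = H @ omega + 1e-5 * np.random.randn(6)    # healthy measurements
        m[2] += 5e-3                                 # simulated gyro-2 fault
        parity = P @ m
        # Isolation: the faulty instrument is the one whose parity-space
        # signature (column of P) best aligns with the observed parity vector.
        scores = np.abs(P.T @ parity) / np.linalg.norm(P, axis=0)
        print("fault detected:", np.linalg.norm(parity) > 1e-4,
              "| isolated to gyro", int(np.argmax(scores)))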

  2. Design and Analysis of Linear Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Xu, Liang; Liu, Guohai; Du, Yi; Liu, Hu

    2014-01-01

    This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both PMs and windings of the proposed machine are on short mover, while the long stator is only manufactured from iron. Hence, the proposed machine is very suitable for long stroke system applications. The key of this machine is that the magnetizer splits the two movers with modular and complementary structures. Hence, the proposed machine offers improved symmetrical and sinusoidal back electromotive force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, differing from the existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density. This is because neither fault-tolerant teeth nor the flux-barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis. PMID:24982959

  3. MgB2-based superconductors for fault current limiters

    NASA Astrophysics Data System (ADS)

    Sokolovsky, V.; Prikhna, T.; Meerovich, V.; Eisterer, M.; Goldacker, W.; Kozyrev, A.; Weber, H. W.; Shapovalov, A.; Sverdun, V.; Moshchil, V.

    2017-02-01

    A promising solution to the fault current problem in power systems is the application of fast-operating nonlinear superconducting fault current limiters (SFCLs) with the capability of rapidly increasing their impedance, and thus limiting high fault currents. We report the results of experiments with models of inductive (transformer type) SFCLs based on ring-shaped bulk MgB2 prepared under high quasi-hydrostatic pressure (2 GPa) and by a hot pressing technique (30 MPa). It was shown that the SFCLs meet the main requirements for fault current limiters: they possess low impedance in the nominal regime of the protected circuit and can rapidly increase their impedance, limiting both the transient and the steady-state fault currents. The study of the quenching currents of the MgB2 rings (SFCL activation current) and of AC losses in the rings shows that the quenching current density and the critical current density determined from AC losses can be 10-20 times less than the critical current determined from magnetization experiments.

  4. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the structural complexity of hydraulic systems and the shortage of fault statistics for them, a multi-valued testability analysis method based on a simulation model is proposed. Using a simulation model built in AMESim, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions. A multi-valued fault-test dependency matrix is thus established. The fault detection rate (FDR) and fault isolation rate (FIR) are then calculated from the dependency matrix. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability performance of the system, the number and position of the test points are optimized. Results show the proposed test placement scheme can address the difficulty, inefficiency and high cost of maintaining the system.
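
    Both figures of merit follow directly from the dependency matrix; the sketch below uses an invented five-fault, three-test matrix (FDR is the fraction of faults with a nonzero test signature, FIR the fraction of detected faults whose signature is unique).

        import numpy as np

        def fdr_fir(D):
            """D[i, j] != 0 when test j responds to fault i; entries may be
            multi-valued, e.g. -1/0/+1 for low/normal/high readings."""
            detected = np.any(D != 0, axis=1)
            sigs = [tuple(row) for row in D]
            isolated = np.array([detected[i] and sigs.count(sigs[i]) == 1
                                 for i in range(len(D))])
            return detected.mean(), isolated.sum() / max(detected.sum(), 1)

        D = np.array([[ 1,  0,  0],
                      [ 1,  1,  0],
                      [ 1,  1,  0],     # same signature as fault 2: not isolable
                      [ 0, -1,  1],
                      [ 0,  0,  0]])    # no test responds: undetectable fault
        fdr, fir = fdr_fir(D)
        print(f"FDR={fdr:.0%} FIR={fir:.0%}")   # FDR=80% FIR=50%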

  5. Design and analysis of linear fault-tolerant permanent-magnet vernier machines.

    PubMed

    Xu, Liang; Ji, Jinghua; Liu, Guohai; Du, Yi; Liu, Hu

    2014-01-01

    This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both PMs and windings of the proposed machine are on short mover, while the long stator is only manufactured from iron. Hence, the proposed machine is very suitable for long stroke system applications. The key of this machine is that the magnetizer splits the two movers with modular and complementary structures. Hence, the proposed machine offers improved symmetrical and sinusoidal back electromotive force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, differing from the existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density. This is because neither fault-tolerant teeth nor the flux-barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis.

  6. Fault tolerance in computational grids: perspectives, challenges, and issues.

    PubMed

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related faults, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination is also performed pertaining to fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  7. SPANNER: A Self-Repairing Spiking Neural Network Hardware Architecture.

    PubMed

    Liu, Junxiu; Harkin, Jim; Maguire, Liam P; McDaid, Liam J; Wade, John J

    2018-04-01

    Recent research has shown that astrocytes, a type of glial cell, underpin a self-repair mechanism in the human brain, where spiking neurons provide direct and indirect feedback to presynaptic terminals. These feedbacks modulate the synaptic transmission probability of release (PR). When synaptic faults occur, the neuron becomes silent or near silent due to the low PR of the synapses, whereby the PRs of the remaining healthy synapses are then increased by the indirect feedback from the astrocyte cell. In this paper, a novel hardware architecture of a Self-rePAiring spiking Neural NEtwoRk (SPANNER) is proposed, which mimics this self-repairing capability of the human brain. This paper demonstrates that the hardware can self-detect and self-repair synaptic faults without the conventional components for fault detection and fault repair. Experimental results show that SPANNER can maintain system performance with fault densities of up to 40%, and more importantly SPANNER has only a 20% performance degradation when the self-repairing architecture is significantly damaged at a fault density of 80%.
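
    The indirect-feedback repair rule reduces to redistributing the probability of release; the toy sketch below uses invented numbers and a direct sum in place of the spiking dynamics implemented by the hardware.

        import numpy as np

        def astrocyte_repair(pr, healthy, target=5.0):
            """When faulty synapses silence the neuron, scale up the PR of the
            remaining healthy synapses until the total expected transmission
            returns to the target level."""
            pr = pr.copy()
            pr[~healthy] = 0.0                       # faulty synapses transmit nothing
            deficit = target - pr.sum()
            if deficit > 0 and healthy.any():
                pr[healthy] += deficit / healthy.sum()
            return np.clip(pr, 0.0, 1.0)

        pr = np.full(10, 0.5)                        # baseline probability of release
        healthy = np.ones(10, dtype=bool)
        healthy[:4] = False                          # 40% synaptic fault density
        print(astrocyte_repair(pr, healthy))         # healthy PRs rise to ~0.83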

  8. Investigating an API for resilient exascale computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stearley, Jon R.; Tomkins, James; VanDyke, John P.

    2013-05-01

    Increased HPC capability comes with increased complexity, part counts, and fault occurrences. Increasing the resilience of systems and applications to faults is a critical requirement facing the viability of exascale systems, as the overhead of traditional checkpoint/restart is projected to outweigh its benefits due to fault rates outpacing I/O bandwidths. As faults occur and propagate throughout hardware and software layers, pervasive notification and handling mechanisms are necessary. This report describes an initial investigation of fault types and programming interfaces to mitigate them. Proof-of-concept APIs are presented for the frequent and important cases of memory errors and node failures, and a strategy is proposed for filesystem failures. These involve changes to the operating system, runtime, I/O library, and application layers. While a single API for fault handling among hardware, OS, and application system-wide remains elusive, the effort increased our understanding of both the mountainous challenges and the promising trailheads.

  9. A study of the relationship between the performance and dependability of a fault-tolerant computer

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.

    1994-01-01

    This thesis studies this relationship by creating a tool (FTAPE) that integrates a high-stress workload generator with fault injection, and by using the tool to evaluate system performance under error conditions. The workloads are composed of processes which are formed from atomic components that represent CPU, memory, and I/O activity. The fault injector is software-implemented and is capable of injecting faults into any memory-addressable location, including special registers and caches. This tool has been used to study a Tandem Integrity S2 Computer. Workloads with varying numbers of processes and varying compositions of CPU, memory, and I/O activity are first characterized in terms of performance. Then faults are injected into these workloads. The results show that as the number of concurrent processes increases, the mean fault latency initially increases due to increased contention for the CPU. However, for even higher numbers of processes (more than 3 processes), the mean latency decreases because long-latency faults are paged out before they can be activated.

  10. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  11. A mechanical model of the San Andreas fault and SAFOD Pilot Hole stress measurements

    USGS Publications Warehouse

    Chery, J.; Zoback, M.D.; Hickman, S.

    2004-01-01

    Stress measurements made in the SAFOD pilot hole provide an opportunity to study the relation between crustal stress outside the fault zone and the stress state within it using an integrated mechanical model of a transform fault loaded in transpression. The results of this modeling indicate that only a fault model in which the effective friction is very low (<0.1) through the seismogenic thickness of the crust is capable of matching stress measurements made in both the far field and in the SAFOD pilot hole. The stress rotation measured with depth in the SAFOD pilot hole (~28°) appears to be a typical feature of a weak fault embedded in a strong crust and a weak upper mantle with laterally variable heat flow, although our best model predicts less rotation (15°) than observed. Stress magnitudes predicted by our model within the fault zone indicate low shear stress on planes parallel to the fault but a very anomalous mean stress, approximately twice the lithostatic stress. Copyright 2004 by the American Geophysical Union.

  12. Upstream vertical cavity surface-emitting lasers for fault monitoring and localization in WDM passive optical networks

    NASA Astrophysics Data System (ADS)

    Wong, Elaine; Zhao, Xiaoxue; Chang-Hasnain, Connie J.

    2008-04-01

    As wavelength division multiplexed passive optical networks (WDM-PONs) are expected to be first deployed to transport high capacity services to business customers, real-time knowledge of fiber/device faults and the location of such faults will be a necessity to guarantee reliability. Nonetheless, the added benefit of implementing fault monitoring capability should only incur minimal cost associated with upgrades to the network. In this work, we propose and experimentally demonstrate a fault monitoring and localization scheme based on a highly sensitive and potentially low-cost monitor in conjunction with vertical cavity surface-emitting lasers (VCSELs). The VCSELs are used as upstream transmitters in the WDM-PON. The proposed scheme benefits from the high reflectivity of the top distributed Bragg reflector (DBR) mirror of optical injection-locked (OIL) VCSELs to reflect monitoring channels back to the central office for monitoring. Characterization of the fault monitor demonstrates high sensitivity, low bandwidth requirements, and potentially low output power. The proposed fault monitoring scheme incurs only a 0.5 dB penalty on upstream transmissions over the existing infrastructure.

  13. Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments

    PubMed Central

    Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel

    2011-01-01

    There are different schemes based on observers to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI) there are different diagnosis schemes based on the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, uses all the inputs and only one output, and detects faults in one detector; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output, but this time there is a bank of observers capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of unique faults. This work proposes a new scheme, named Simplified Interval Observer SIOS-FDI, which does not require the measurement of any input and, with just one output, allows the detection of unique faults in sensors. Because it does not require any input, it greatly simplifies the diagnosis of faults in processes in which it is difficult to measure all the inputs, as in the case of biologic reactors. PMID:22346593
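
    For readers unfamiliar with how observer-based FDI produces its fault indicators, here is a minimal sketch of a discrete-time Luenberger observer generating an output residual; the toy system matrices, observer gain, and injected sensor bias are invented for illustration and do not reproduce the SIOS formulation (which, notably, avoids measuring the inputs).

    ```python
    import numpy as np

    # Toy plant: x[k+1] = A x[k] + B u[k],  y[k] = C x[k] (+ sensor fault)
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
    B = np.array([[0.0],
                  [0.1]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.5],
                  [0.2]])   # observer gain, chosen so A - L C is stable

    def observer_step(x_hat, u, y):
        """One Luenberger observer step; the residual y - C x_hat is the
        fault indicator that schemes like SOS/DOS/GOS threshold."""
        residual = float(y - C @ x_hat)
        x_hat = A @ x_hat + B @ u + L * residual
        return x_hat, residual

    x, x_hat, u = np.array([[1.0], [0.5]]), np.zeros((2, 1)), np.array([[1.0]])
    residuals = []
    for k in range(30):
        y = float(C @ x) + (0.5 if k >= 20 else 0.0)  # sensor bias fault at k = 20
        x_hat, r = observer_step(x_hat, u, y)
        x = A @ x + B @ u
        residuals.append(abs(r))
    print(f"peak residual after fault onset: {max(residuals[20:]):.3f}")
    ```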

  14. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.

    PubMed

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods.

  15. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

    PubMed Central

    Yuan, Xianfeng; Song, Mumin; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework not only is capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with the traditional methods. PMID:26229526
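
    The D-S fusion step in both records above combines the classifiers' basic probability assignments. A minimal sketch of Dempster's rule of combination for two hypothetical BPAs over a two-fault frame of discernment (the fault labels and mass values are invented):

    ```python
    from itertools import product

    def dempster_combine(m1: dict, m2: dict) -> dict:
        """Combine two BPAs (keyed by frozenset hypotheses) with Dempster's rule."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb   # mass assigned to contradictory hypotheses
        if conflict >= 1.0:
            raise ValueError("total conflict; BPAs cannot be combined")
        return {h: m / (1.0 - conflict) for h, m in combined.items()}

    F1, F2 = frozenset({"motor fault"}), frozenset({"encoder fault"})
    BOTH = F1 | F2   # ignorance: mass assigned to the whole frame
    m_svm1 = {F1: 0.6, F2: 0.1, BOTH: 0.3}   # BPA from one PSVM classifier
    m_svm2 = {F1: 0.5, F2: 0.2, BOTH: 0.3}   # BPA from another
    print(dempster_combine(m_svm1, m_svm2))
    ```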

  16. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant, fiber optic interconnect, or backplane, which serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing the backplane into a read bus and a write bus and placing a redundancy management unit (RMU) between the read bus and the write bus so that all data transmitted by the write bus is subjected to the fault tolerance algorithms before the data is passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.

  17. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.

  18. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to a user-specified number of digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
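
    A minimal sketch of the kind of computation both Fault Tree Compiler records describe, assuming statistically independent basic events; the gate set mirrors the five gate types listed above, but the program's input language and actual solution technique are not reproduced here.

    ```python
    from itertools import combinations
    from math import prod

    def p_and(ps):       # all inputs fail
        return prod(ps)

    def p_or(ps):        # at least one input fails
        return 1.0 - prod(1.0 - p for p in ps)

    def p_xor(p1, p2):   # exactly one of two inputs fails
        return p1 * (1.0 - p2) + (1.0 - p1) * p2

    def p_invert(p):     # complement of an input
        return 1.0 - p

    def p_m_of_n(ps, m): # at least m of the n inputs fail
        n, total = len(ps), 0.0
        for k in range(m, n + 1):
            for failed in combinations(range(n), k):
                total += prod(ps[i] if i in failed else 1.0 - ps[i]
                              for i in range(n))
        return total

    # Hypothetical tree: TOP = OR( AND(e1, e2), 2-of-3(e3, e4, e5) )
    e = [1e-3, 2e-3, 5e-4, 5e-4, 1e-3]
    top = p_or([p_and(e[0:2]), p_m_of_n(e[2:5], 2)])
    print(f"top event probability ~ {top:.3e}")
    ```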

  19. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Results of a Space Station Data System Analysis/Architecture Study for the Goddard Space Flight Center are presented. This study, which emphasized a system engineering design for a complete, end-to-end data system, was divided into six tasks: (1) Functional requirements definition; (2) Options development; (3) Trade studies; (4) System definitions; (5) Program plan; and (6) Study maintenance. The task inter-relationships and documentation flow are described. Information in volume 2 is devoted to Task 3: Trade Studies. Trade studies were carried out in the following areas: (1) software development test and integration capability; (2) fault tolerant computing; (3) space qualified computers; (4) distributed data base management system; (5) system integration test and verification; (6) crew workstations; (7) mass storage; (8) command and resource management; and (9) space communications. Results are presented for each trade study.

  20. NASA Electric Aircraft Test Bed (NEAT) Development Plan - Design, Fabrication, Installation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2016-01-01

    As large airline companies compete to reduce emissions, fuel, noise, and maintenance costs, it is expected that more of their aircraft systems will shift from using turbofan propulsion, pneumatic bleed power, and hydraulic actuation, to instead using electrical motor propulsion, generator power, and electrical actuation. This requires new flight-weight and flight-efficient powertrain components, fault tolerant power management, and electromagnetic interference mitigation technologies. Moreover, initial studies indicate some combination of ambient and cryogenic thermal management and relatively high bus voltages when compared to state of practice will be required to achieve a net system benefit. Developing all these powertrain technologies within a realistic aircraft architectural geometry and under realistic operational conditions requires a unique electric aircraft testbed. This report summarizes existing testbed capabilities located in the U.S. and details the development of a unique complementary testbed that industry and government can utilize to further mature electric aircraft technologies.

  1. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    NASA Technical Reports Server (NTRS)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  2. Plate Margin Deformation and Active Tectonics Along the Northern Edge of the Yakutat Terrane in the Saint Elias Orogen, Alaska and Yukon, Canada

    NASA Technical Reports Server (NTRS)

    Bruhn, Ronald L.; Sauber, Jeanne; Cotton, Michele M.; Pavlis, Terry L.; Burgess, Evan; Ruppert, Natalia; Forster, Richard R.

    2012-01-01

    The northwest directed motion of the Pacific plate is accompanied by migration and collision of the Yakutat terrane into the cusp of southern Alaska. The nature and magnitude of accretion and translation on upper crustal faults and folds is poorly constrained, however, due to pervasive glaciation. In this study we used high-resolution topography, geodetic imaging, seismic, and geologic data to advance understanding of the transition from strike-slip motion on the Fairweather fault to plate margin deformation on the Bagley fault, which cuts through the upper plate of the collisional suture above the subduction megathrust. The Fairweather fault terminates by oblique-extensional splay faulting within a structural syntaxis, allowing rapid tectonic upwelling of rocks driven by thrust faulting and crustal contraction. Plate motion is partly transferred from the Fairweather to the Bagley fault, which extends 125 km farther west as a dextral shear zone that is partly reactivated by reverse faulting. The Bagley fault dips steeply through the upper plate to intersect the subduction megathrust at depth, forming a narrow fault-bounded crustal sliver in the obliquely convergent plate margin. Since ~20 Ma the Bagley fault has accommodated more than 50 km of dextral shearing and several kilometers of reverse motion along its southern flank during terrane accretion. The fault is considered capable of generating earthquakes because it is linked to faults that generated large historic earthquakes, suitably oriented for reactivation in the contemporary stress field, and locally marked by seismicity. The fault may generate earthquakes of Mw ≤ 7.5.

  3. Interactive Retro-Deformation of Terrain for Reconstructing 3D Fault Displacements.

    PubMed

    Westerteiger, R; Compton, T; Bernadin, T; Cowgill, E; Gwinner, K; Hamann, B; Gerndt, A; Hagen, H

    2012-12-01

    Planetary topography is the result of complex interactions between geological processes, of which faulting is a prominent component. Surface-rupturing earthquakes cut and move landforms which develop across active faults, producing characteristic surface displacements across the fault. Geometric models of faults and their associated surface displacements are commonly applied to reconstruct these offsets to enable interpretation of the observed topography. However, current 2D techniques are limited in their capability to convey both the three-dimensional kinematics of faulting and the incremental sequence of events required by a given reconstruction. Here we present a real-time system for interactive retro-deformation of faulted topography to enable reconstruction of fault displacement within a high-resolution (sub-1 m/pixel) 3D terrain visualization. We employ geometry shaders on the GPU to intersect the surface mesh with fault segments interactively specified by the user and transform the resulting surface blocks in real time according to a kinematic model of fault motion. Our method facilitates a human-in-the-loop approach to reconstruction of fault displacements by providing instant visual feedback while exploring the parameter space. Thus, scientists can evaluate the validity of traditional point-to-point reconstructions by visually examining a smooth interpolation of the displacement in 3D. We show the efficacy of our approach by using it to reconstruct segments of the San Andreas fault, California as well as a graben structure in the Noctis Labyrinthus region on Mars.

  4. Model-Based Diagnostics for Propellant Loading Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.

    2011-01-01

    The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.

  5. Fiber Bragg grating sensor for fault detection in high voltage overhead transmission lines

    NASA Astrophysics Data System (ADS)

    Moghadas, Amin

    2011-12-01

    A fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material which is then detected by fiber Bragg grating (FBG) sensors. The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signals. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, the fault location can be precisely determined with an artificial neural network (ANN) algorithm. This algorithm can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG sensors and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.
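
    The measurement principle rests on the Bragg condition lambda_B = 2 * n_eff * Lambda and the approximately linear strain response of the reflected wavelength. A small sketch with typical textbook values (the effective index, grating period, and photo-elastic coefficient below are illustrative, not taken from this paper):

    ```python
    def bragg_wavelength(n_eff: float, period_nm: float) -> float:
        """Bragg condition: lambda_B = 2 * n_eff * grating period."""
        return 2.0 * n_eff * period_nm

    def strain_shift_nm(lambda_b_nm: float, strain: float, p_e: float = 0.22) -> float:
        """Approximate strain response: d(lambda) = lambda_B * (1 - p_e) * strain,
        where p_e is the effective photo-elastic coefficient of the fiber."""
        return lambda_b_nm * (1.0 - p_e) * strain

    lam = bragg_wavelength(n_eff=1.45, period_nm=534.5)   # ~1550 nm
    shift = strain_shift_nm(lam, strain=100e-6)           # 100 microstrain
    print(f"lambda_B = {lam:.1f} nm, shift = {shift * 1000:.1f} pm")
    ```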

  6. Sensor Fault Detection and Diagnosis Simulation of a Helicopter Engine in an Intelligent Control Framework

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet

    1994-01-01

    This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
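
    A minimal sketch of the residual-thresholding idea at the core of such a scheme: compare measured engine outputs against the outputs expected from a healthy-sensor model and flag any sensor whose residual exceeds its trip level. The output names, nominal values, and thresholds below are invented, not T700 data.

    ```python
    import numpy as np

    def detect_sensor_faults(measured: np.ndarray, expected: np.ndarray,
                             thresholds: np.ndarray) -> np.ndarray:
        """Return a boolean flag per sensor: True where the residual between
        the measured output and the model-expected output exceeds its threshold."""
        residuals = np.abs(measured - expected)
        return residuals > thresholds

    # Hypothetical outputs: [torque, gas-generator speed, turbine temperature]
    expected   = np.array([1200.0, 44_000.0, 850.0])   # healthy-model prediction
    measured   = np.array([1195.0, 44_050.0, 905.0])   # sensed values
    thresholds = np.array([25.0, 250.0, 20.0])         # per-sensor trip levels

    print("fault flags:", detect_sensor_faults(measured, expected, thresholds))
    # -> [False False  True]  (the temperature sensor is flagged)
    ```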

  7. Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines

    PubMed Central

    Moghadas, Amin A.; Shadaram, Mehdi

    2010-01-01

    In this paper, a fiber optic based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. Bragg wavelength shift is used to measure the fault current and detect fault in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in magnetostrictive material which is then detected by Fiber Bragg Grating (FBG). The Fiber Bragg interrogator senses the reflected FBG signals, and the Bragg wavelength shift is calculated and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field relates to an increased fault current at a certain location. Also, fault location can be precisely defined with an artificial neural network (ANN) algorithm. This algorithm can be easily coordinated with other protective devices. It is shown that the faults in the overhead transmission line cause a detectable wavelength shift on the reflected signal of FBG and can be used to detect and classify different kind of faults. The proposed method has been extensively tested by simulation and results confirm that the proposed scheme is able to detect different kinds of fault in both radial and network system. PMID:22163416

  8. The 2015 Mw 6.0 Mt. Kinabalu earthquake: an infrequent fault rupture within the Crocker fault system of East Malaysia

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Wei, Shengji; Wang, Xin; Lindsey, Eric O.; Tongkul, Felix; Tapponnier, Paul; Bradley, Kyle; Chan, Chung-Han; Hill, Emma M.; Sieh, Kerry

    2017-12-01

    The Mw 6.0 Mt. Kinabalu earthquake of 2015 was a complete (and deadly) surprise, because it occurred well away from the nearest plate boundary in a region of very low historical seismicity. Our seismological, space geodetic, geomorphological, and field investigations show that the earthquake resulted from rupture of a northwest-dipping normal fault that did not reach the surface. Its unilateral rupture was almost directly beneath 4000-m-high Mt. Kinabalu and triggered widespread slope failures on steep mountainous slopes, which included rockfalls that killed 18 hikers. Our seismological and morphotectonic analyses suggest that the rupture occurred on a normal fault that splays upwards off of the previously identified normal Marakau fault. Our mapping of tectonic landforms reveals that these faults are part of a 200-km-long system of normal faults that traverse the eastern side of the Crocker Range, parallel to Sabah's northwestern coastline. Although the tectonic reason for this active normal fault system remains unclear, the lengths of the longest fault segments suggest that they are capable of generating magnitude 7 earthquakes. Such large earthquakes must occur very rarely, though, given the hitherto undetectable geodetic rates of active tectonic deformation across the region.

  9. A systematic risk management approach employed on the CloudSat project

    NASA Technical Reports Server (NTRS)

    Basilio, R. R.; Plourde, K. S.; Lam, T.

    2000-01-01

    The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.

  10. Automatic Fault Characterization via Abnormality-Enhanced Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  11. Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1

    DTIC Science & Technology

    2016-03-01

    Master's thesis. The Configurable Fault Tolerant System experiment on NPSAT-1 includes an open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and XILINX's soft error mitigation softcore. …

  12. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    NASA Astrophysics Data System (ADS)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow, and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.

  13. Smart built-in test

    NASA Technical Reports Server (NTRS)

    Richards, Dale W.

    1990-01-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  14. Smart built-in test

    NASA Astrophysics Data System (ADS)

    Richards, Dale W.

    1990-03-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  15. Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)

    NASA Technical Reports Server (NTRS)

    Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.

    2004-01-01

    I. Design: a) S/C designed to be largely single fault tolerant; b) Operate in flight demonstrated envelope, with margin; and c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first time events. III. Failure Analysis: a) Critical event driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; and 3) Risk Management Process facilitator.

  16. Comprehensive Fault Tolerance and Science-Optimal Attitude Planning for Spacecraft Applications

    NASA Astrophysics Data System (ADS)

    Nasir, Ali

    Spacecraft operate in a harsh environment, are costly to launch, and experience unavoidable communication delay and bandwidth constraints. These factors motivate the need for effective onboard mission and fault management. This dissertation presents an integrated framework to optimize science goal achievement while identifying and managing encountered faults. Goal-related tasks are defined by pointing the spacecraft instrumentation toward distant targets of scientific interest. The relative value of science data collection is traded with risk of failures to determine an optimal policy for mission execution. Our major innovation in fault detection and reconfiguration is to incorporate fault information obtained from two types of spacecraft models: one based on the dynamics of the spacecraft and the second based on the internal composition of the spacecraft. For fault reconfiguration, we consider possible changes in both dynamics-based control law configuration and the composition-based switching configuration. We formulate our problem as a stochastic sequential decision problem or Markov Decision Process (MDP). To avoid the computational complexity involved in a fully-integrated MDP, we decompose our problem into multiple MDPs. These MDPs include planning MDPs for different fault scenarios; a fault detection MDP based on a logic-based model of spacecraft component and system functionality; an MDP for resolving conflicts between fault information from the logic-based model and the dynamics-based spacecraft models; and the reconfiguration MDP that generates a policy optimized over the relative importance of the mission objectives versus spacecraft safety. Approximate Dynamic Programming (ADP) methods for the decomposition of the planning and fault detection MDPs are applied. To show the performance of the MDP-based frameworks and ADP methods, a suite of spacecraft attitude planning case studies is described. These case studies are used to analyze the content and behavior of computed policies in response to the changes in design parameters. A primary case study is built from the Far Ultraviolet Spectroscopic Explorer (FUSE) mission for which component models and their probabilities of failure are based on realistic mission data. A comparison of our approach with an alternative framework for spacecraft task planning and fault management is presented in the context of the FUSE mission.

  17. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    PubMed

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.

  18. Fault Diagnosis for the Heat Exchanger of the Aircraft Environmental Control System Based on the Strong Tracking Filter

    PubMed Central

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system’s efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger. PMID:25823010

  19. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    PubMed

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of the heat exchanger fault diagnosis in practice. First, the actual measured parameter of the heat exchanger cannot effectively reflect the fault occurrence, whereas the heat exchanger faults are usually depicted by utilizing the corresponding fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in selection of initialization values. To solve the aforementioned problems, this paper presents a fault-related parameter adaptive estimation method based on strong tracking filter (STF) and Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. Heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.
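
    The strong tracking filter extends Kalman filtering with a time-varying fading factor that inflates the predicted covariance so the filter keeps tracking after abrupt, fault-induced changes. Below is a deliberately simplified scalar sketch with a fixed fading factor; the real STF computes the factor online from the innovation sequence.

    ```python
    def fading_kalman_step(x, P, z, F=1.0, H=1.0, Q=1e-4, R=1e-2, lam=1.05):
        """One scalar Kalman step with fading factor lam >= 1 applied to the
        predicted covariance, boosting responsiveness to sudden changes."""
        x_pred = F * x
        P_pred = lam * (F * P * F) + Q        # fading factor inflates P
        K = P_pred * H / (H * P_pred * H + R) # Kalman gain
        x_new = x_pred + K * (z - H * x_pred) # innovation update
        P_new = (1.0 - K * H) * P_pred
        return x_new, P_new

    x, P = 0.0, 1.0
    for z in [0.02, -0.01, 0.03, 1.00, 1.02, 0.98]:  # step change mid-stream
        x, P = fading_kalman_step(x, P, z)
        print(f"estimate = {x:+.3f}")
    ```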

  20. Advanced cloud fault tolerance system

    NASA Astrophysics Data System (ADS)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.

  1. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing the seismic hazard assessments for nuclear facilities described in SSG-9, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show some tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method, which combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  2. Intelligent fault isolation and diagnosis for communication satellite systems

    NASA Technical Reports Server (NTRS)

    Tallo, Donald P.; Durkin, John; Petrik, Edward J.

    1992-01-01

    Discussed here is a prototype diagnosis expert system to provide the Advanced Communication Technology Satellite (ACTS) System with autonomous diagnosis capability. The system, the Fault Isolation and Diagnosis EXpert (FIDEX) system, is a frame-based system that uses hierarchical structures to represent such items as the satellite's subsystems, components, sensors, and fault states. This overall frame architecture integrates the hierarchical structures into a lattice that provides a flexible representation scheme and facilitates system maintenance. FIDEX uses an inexact reasoning technique based on the incrementally acquired evidence approach developed by Shortliffe. The system is designed with a primitive learning ability through which it maintains a record of past diagnosis studies.

  3. Influence of slot number and pole number in fault-tolerant brushless dc motors having unequal tooth widths

    NASA Astrophysics Data System (ADS)

    Ishak, D.; Zhu, Z. Q.; Howe, D.

    2005-05-01

    The electromagnetic performance of fault-tolerant three-phase permanent magnet brushless dc motors, in which the wound teeth are wider than the unwound teeth and their tooth tips span approximately one pole pitch and which have similar numbers of slots and poles, is investigated. It is shown that they have a more trapezoidal phase back-emf wave form, a higher torque capability, and a lower torque ripple than similar fault-tolerant machines with equal tooth widths. However, these benefits gradually diminish as the pole number is increased, due to the effect of interpole leakage flux.

  4. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    PubMed Central

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing with a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789

  5. Simple random sampling-based probe station selection for fault detection in wireless sensor networks.

    PubMed

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing with a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
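
    A minimal sketch of the two mechanisms described above: simple random sampling of probe stations and a dynamic probing-frequency rule. The node names, sample size, and halving/doubling rule are hypothetical, not the paper's exact algorithm.

    ```python
    import random

    def choose_probe_stations(cluster_heads: list, sample_size: int) -> list:
        """Simple-random-sampling selection: each cluster head is equally
        likely to serve as a probe station for this round."""
        return random.sample(cluster_heads, sample_size)

    def next_probe_interval(current: float, faults_found: int,
                            lo: float = 5.0, hi: float = 120.0) -> float:
        """Hypothetical adjusting rule: probe more often when faults were
        found last round, back off when the network looks healthy."""
        interval = current / 2 if faults_found else current * 2
        return min(max(interval, lo), hi)

    heads = [f"node-{i}" for i in range(20)]
    stations = choose_probe_stations(heads, sample_size=4)
    interval = next_probe_interval(current=30.0, faults_found=0)
    print(stations, f"next probe in {interval:.0f}s")
    ```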

  6. Product Support Manager Guidebook

    DTIC Science & Technology

    2011-04-01

    …package is being developed using supportability analysis concepts such as Failure Mode, Effects, and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Condition Based Maintenance Plus (CBM+), Maintenance Task Analysis (MTA), and Failure Reporting and Corrective Action System (FRACAS).

  7. V&V of Fault Management: Challenges and Successes

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Costello, Ken; Ohi, Don; Lu, Tiffany; Newhouse, Marilyn

    2013-01-01

    This paper describes the results of a special breakout session of the NASA Independent Verification and Validation (IV&V) Workshop held in the fall of 2012 entitled "V&V of Fault Management: Challenges and Successes." The NASA IV&V Program is in a unique position to interact with projects across all of the NASA development domains. Using this unique opportunity, the IV&V program convened a breakout session to enable IV&V teams to share their challenges and successes with respect to the V&V of Fault Management (FM) architectures and software. The presentations and discussions provided practical examples of pitfalls encountered while performing V&V of FM, including the lack of consistent designs for implementing fault monitors and the fact that FM information is not centralized but scattered among many diverse project artifacts. The discussions also solidified the need for an early commitment to developing FM in parallel with the spacecraft systems as well as clearly defining FM terminology within a project.

  8. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  9. Portable Automated Test Station: Using Engineering-Design Partnerships to Replace Obsolete Test Systems

    DTIC Science & Technology

    2015-04-01

    …troubleshooting avionics system faults while the aircraft is on the ground. The core component of the PATS-30, the ruggedized laptop, is no longer sustainable. The PATS-70 utilizes up-to-date, sustainable technology for Operational Flight Program (OFP) software loading and diagnostic avionics system testing, and includes additional TPSs to enhance its capability.

  10. Digital electronic engine control fault detection and accommodation flight evaluation

    NASA Technical Reports Server (NTRS)

    Baer-Ruedhart, J. L.

    1984-01-01

    The capabilities and performance of various fault detection and accommodation (FDA) schemes in existing and projected engine control systems were investigated. Flight tests of the digital electronic engine control (DEEC) in an F-15 aircraft show discrepancies between flight results and predictions based on simulation and altitude testing. The FDA methodology and logic in the DEEC system are described, along with the results of the failures that occurred in flight to date.

  11. Mid-crustal detachment and ramp faulting in the Markham Valley, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Stevens, C.; McCaffrey, R.; Silver, E. A.; Sombo, Z.; English, P.; van der Kevie, J.

    1998-09-01

    Earthquakes and geodetic evidence reveal the presence of a low-angle, mid-crustal detachment fault beneath the Finisterre Range that connects to a steep ramp surfacing near the Ramu-Markham Valley of Papua New Guinea. Waveforms of three large (Mw 6.3 to 6.9) thrust earthquakes that occurred in October 1993 beneath the Finisterre Range 10 to 30 km north of the valley reveal 15° north-dipping thrusts at about 20 km depth. Global Positioning System measurements show up to 20 cm of coseismic slip occurred across the valley, requiring that the active fault extend to within a few hundred meters of the Earth's surface beneath the Markham Valley. Together, these data imply that a gently north-dipping thrust fault in the middle or lower crust beneath the Finisterre Range steepens and shallows southward, forming a ramp fault beneath the north side of the Markham Valley. Waveforms indicate that both the ramp and detachment fault were active during at least one of the earthquakes. While the seismic potential of mid-crustal detachments elsewhere is debated, in Papua New Guinea the detachment fault shows the capability of producing large earthquakes.

  12. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    PubMed

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and they have been widely used for modeling of complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false-alarm and missed-alarm rates.

  13. Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks

    PubMed Central

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and they have been widely used for modeling of complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false-alarm and missed-alarm rates. PMID:24744774
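
    The adaptive threshold is what gives such an FDS its robustness. A minimal sketch of one common realization, a moving mean-plus-k-sigma bound on the residual (the window length, multiplier, and injected fault are invented; the paper's RNN residual generator is not reproduced):

    ```python
    import numpy as np

    def adaptive_threshold(residuals: np.ndarray, window: int = 50,
                           k: float = 5.0) -> np.ndarray:
        """Hypothetical adaptive threshold: moving residual mean plus k
        standard deviations, so the trip level tracks operating-point
        changes instead of staying fixed."""
        thr = np.empty_like(residuals)
        for t in range(len(residuals)):
            w = residuals[max(0, t - window + 1): t + 1]
            thr[t] = w.mean() + k * w.std()
        return thr

    rng = np.random.default_rng(0)
    res = np.abs(rng.normal(0.0, 0.1, 500))
    res[400:] += 1.0                       # injected sensor fault at t = 400
    alarms = res > adaptive_threshold(res)
    print("first alarm index:", int(np.argmax(alarms)))  # expected near t = 400
    ```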

  14. Impact of sea-level rise on earthquake and landslide triggering offshore the Alentejo margin (SW Iberia)

    NASA Astrophysics Data System (ADS)

    Neves, M. C.; Roque, C.; Luttrell, K. M.; Vázquez, J. T.; Alonso, B.

    2016-12-01

    Earthquakes and submarine landslides are recurrent and widespread manifestations of fault activity offshore SW Iberia. The present work tests the effects of sea-level rise on offshore fault systems using Coulomb stress change calculations across the Alentejo margin. Large-scale faults capable of generating large earthquakes and tsunamis in the region, especially NE-SW trending thrusts and WNW-ESE trending dextral strike-slip faults imaged at basement depths, are either blocked or unaffected by flexural effects related to sea-level changes. Large-magnitude earthquakes occurring along these structures may, therefore, be less frequent during periods of sea-level rise. In contrast, sea-level rise promotes shallow fault ruptures within the sedimentary sequence along the continental slope and upper rise within distances of <100 km from the coast. The results suggest that the occurrence of continental slope failures may either increase (if triggered by shallow fault ruptures) or decrease (if triggered by deep fault ruptures) as a result of sea-level rise. Moreover, observations of slope failures affecting the area of the Sines contourite drift highlight the role of sediment properties as preconditioning factors in this region.

  15. A wideband magnetoresistive sensor for monitoring dynamic fault slip in laboratory fault friction experiments

    USGS Publications Warehouse

    Kilgore, Brian D.

    2017-01-01

    A non-contact, wideband method of sensing dynamic fault slip in laboratory geophysical experiments employs an inexpensive magnetoresistive sensor, a small neodymium rare earth magnet, and user-built application-specific wideband signal conditioning. The magnetoresistive sensor generates a voltage proportional to the changing angles of magnetic flux lines, generated by differential motion or rotation of the nearby magnet, through the sensor. The performance of an array of these sensors compares favorably to other conventional position sensing methods employed at multiple locations along a 2 m long × 0.4 m deep laboratory strike-slip fault. For these magnetoresistive sensors, the lack of resonance signals commonly encountered with cantilever-type position sensor mounting, the wide band response (DC to ≈ 100 kHz) that exceeds the capabilities of many traditional position sensors, and the small space required on the sample, make them attractive options for capturing high speed fault slip measurements in these laboratory experiments. An unanticipated observation of this study is the apparent sensitivity of this sensor to high frequency electromagnetic signals associated with fault rupture and (or) rupture propagation, which may offer new insights into the physics of earthquake faulting.

  16. Havens: Explicit Reliable Memory Regions for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Engelmann, Christian

    2016-01-01

    Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
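
    A hedged sketch of the underlying idea, software parity protection over a managed region, might look like the following; the class shape, block size, and detection-only behavior are invented for illustration, and real havens operate on raw memory rather than Python objects:

```python
class Haven:
    """Toy region with XOR-parity protection for fixed-size blocks.

    On write, a parity byte per block is updated; on read, the parity
    is rechecked so a corrupted block can be detected (detection only;
    recovery would need, e.g., an extra parity block across the region).
    """
    def __init__(self, num_blocks, block_size=64):
        self.block_size = block_size
        self.blocks = [bytearray(block_size) for _ in range(num_blocks)]
        self.parity = [0] * num_blocks

    def _xor(self, block):
        p = 0
        for b in block:
            p ^= b
        return p

    def write(self, i, data):
        self.blocks[i][:len(data)] = data
        self.parity[i] = self._xor(self.blocks[i])

    def read(self, i):
        if self._xor(self.blocks[i]) != self.parity[i]:
            raise RuntimeError(f"parity mismatch in block {i}: memory fault")
        return bytes(self.blocks[i])
```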

  17. Reducing a Knowledge-Base Search Space When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
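
    The 1 + 2^n growth is easy to make concrete; the sketch below computes the search-space size and shows one generic pruning heuristic (a priority-and-size cutoff, which is an assumption here, not the paper's actual heuristics):

```python
from itertools import combinations

def search_space_size(n_rules):
    # Baseline scenario plus every subset of rules that may fire when
    # data are missing: 1 + 2**n, i.e. 65,537 for n = 16.
    return 1 + 2 ** n_rules

def pruned_scenarios(rule_priorities, max_rules=3):
    """Enumerate only small combinations of the highest-priority rules,
    a crude stand-in for heuristic search-space reduction."""
    ranked = sorted(rule_priorities, key=rule_priorities.get, reverse=True)
    for r in range(1, max_rules + 1):
        # consider only the eight highest-priority rules
        for combo in combinations(ranked[:8], r):
            yield combo

print(search_space_size(16))  # 65537
```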

  18. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper, a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF), and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of abnormal acoustic signals, exploiting the waveform similarity of the faulty components. The method uses the sample points at which the abnormal sound first appears as the starting position of each segment. In this study, the abnormal sound was of an impact (shock) fault type; thus, a starting-position search based on gradient variance was adopted. A similarity coefficient between two equal-sized signals is presented; by comparing against this similarity measure, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and that the extracted component can be used to identify the fault type.
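
    A minimal sketch of the pipeline as described (onset search by gradient variance, linear superposition, correlation-based similarity); all window sizes and thresholds are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def find_onsets(signal, win=128, factor=5.0):
    """Locate abnormal-sound onsets where the variance of the local
    gradient jumps above `factor` times the running median variance."""
    grad = np.gradient(signal)
    var = np.array([grad[i:i + win].var()
                    for i in range(0, len(grad) - win, win)])
    jumps = np.where(var > factor * np.median(var))[0]
    return jumps * win

def superpose(signal, onsets, length=1024):
    """Linearly superpose (average) segments starting at each onset,
    reinforcing the repeated fault component and averaging out noise."""
    segs = [signal[o:o + length] for o in onsets if o + length <= len(signal)]
    return np.mean(segs, axis=0)

def similarity(a, b):
    """Normalized correlation between two equal-length signals."""
    return float(np.corrcoef(a, b)[0, 1])
```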

  19. VLSI Implementation of Fault Tolerance Multiplier based on Reversible Logic Gate

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Hakimi Mokhtar, Ahmad; Othman, Nurmiza binti; Fhong Soon, Chin; Rahman, Ab Al Hadi Ab

    2017-08-01

    The multiplier is one of the essential components in the digital world, used in digital signal processing, microprocessors, quantum computing, and arithmetic units generally. Due to the complexity of the multiplier, the tendency for errors is very high. This paper aimed to design a 2×2 bit fault tolerance multiplier based on reversible logic gates with low power consumption and high performance. The design has been implemented using 90 nm Complementary Metal Oxide Semiconductor (CMOS) technology in Synopsys Electronic Design Automation (EDA) tools. The multiplier architecture is implemented using reversible logic gates: a combination of three gates, the Double Feynman gate (F2G), the New Fault Tolerance (NFT) gate, and the Islam Gate (IG), with an area of 160 μm × 420.3 μm (approximately 0.067 mm²). The design achieved a low power consumption of 122.85 μW and a propagation delay of 16.99 ns. The proposed fault tolerance multiplier thus achieves low power consumption and high performance, making it suitable for modern computing applications, as it provides fault tolerance capability.
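
    The Double Feynman gate (F2G) used in this design maps (A, B, C) to (A, A⊕B, A⊕C); the short Python check below demonstrates the reversibility and parity-preservation properties that fault-tolerant reversible designs rely on (the NFT and Islam gates are omitted for brevity):

```python
def f2g(a, b, c):
    """Double Feynman gate: (A, B, C) -> (A, A xor B, A xor C).
    Bijective on 3 bits (reversible) and parity-preserving."""
    return a, a ^ b, a ^ c

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            # Reversibility: F2G is its own inverse.
            assert f2g(*f2g(a, b, c)) == (a, b, c)
            # Fault tolerance property: output parity equals input parity.
            x, y, z = f2g(a, b, c)
            assert x ^ y ^ z == a ^ b ^ c
```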

  20. Advanced power system protection and incipient fault detection and protection of spaceborne power systems

    NASA Technical Reports Server (NTRS)

    Russell, B. Don

    1989-01-01

    This research concentrated on the application of advanced signal processing, expert system, and digital technologies for the detection and control of low grade, incipient faults on spaceborne power systems. The researchers have considerable experience in the application of advanced digital technologies and the protection of terrestrial power systems. This experience was used in the current contracts to develop new approaches for protecting the electrical distribution system in spaceborne applications. The project was divided into three distinct areas: (1) investigate the applicability of fault detection algorithms developed for terrestrial power systems to the detection of faults in spaceborne systems; (2) investigate the digital hardware and architectures required to monitor and control spaceborne power systems with full capability to implement new detection and diagnostic algorithms; and (3) develop a real-time expert operating system for implementing diagnostic and protection algorithms. Significant progress has been made in each of the above areas. Several terrestrial fault detection algorithms were modified to better adapt to spaceborne power system environments. Several digital architectures were developed and evaluated in light of the fault detection algorithms.

  1. Modeling Crustal Deformation Due to the Landers, Hector Mine Earthquakes Using the SCEC Community Fault Model

    NASA Astrophysics Data System (ADS)

    Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.

    2006-12-01

    More realistic models of crustal deformation are possible due to advances in measurements and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC), Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step the methodology of incorporating all these data is described. Results of the time history of the stress and strain transfer between 1992 and 1999 are analyzed as well as the time history of deformation from 1999 to the present.

  2. Tools for Evaluating Fault Detection and Diagnostic Methods for HVAC Secondary Systems

    NASA Astrophysics Data System (ADS)

    Pourarian, Shokouh

    Although modern buildings are using increasingly sophisticated energy management and control systems that have tremendous control and monitoring capabilities, building systems routinely fail to perform as designed. More advanced building control, operation, and automated fault detection and diagnosis (AFDD) technologies are needed to achieve the goal of net-zero energy commercial buildings. Much effort has been devoted to developing such technologies for primary heating, ventilating, and air conditioning (HVAC) systems and some secondary systems. However, secondary systems such as fan coil units and dual duct systems, although widely used in commercial, industrial, and multifamily residential buildings, have received very little attention. This research study aims at developing tools that provide simulation capabilities to develop and evaluate advanced control, operation, and AFDD technologies for these less studied secondary systems. In this study, HVACSIM+ is selected as the simulation environment. Besides developing dynamic models for the above-mentioned secondary systems, two other issues related to the HVACSIM+ environment are also investigated. One issue is the nonlinear equation solver used in HVACSIM+ (Powell's Hybrid method in subroutine SNSQ): several previous research projects (ASHRAE RP-825 and RP-1312) found that SNSQ is especially unstable at the beginning of a simulation and sometimes unable to converge to a solution. The other issue concerns the zone model in the HVACSIM+ library of components. Dynamic simulation of secondary HVAC systems unavoidably requires a zone model that interacts dynamically with the building's surroundings; the accuracy and reliability of the zone model therefore affects the operational data that the developed dynamic tool generates for predicting secondary HVAC system behavior. The available model does not simulate the impact of direct solar radiation entering a zone through glazing, so the existing zone model is modified in this direction. In this research project, the following tasks are completed and summarized in this report:
    1. Develop dynamic simulation models in the HVACSIM+ environment for common fan coil unit and dual duct system configurations, able to produce both fault-free and faulty operational data under a wide variety of faults and severity levels for advanced control, operation, and AFDD technology development and evaluation;
    2. Develop a model structure (grouping of blocks and superblocks, treatment of state variables, initial and boundary conditions, and selection of the equation solver) that can simulate a dual duct system efficiently with satisfactory stability;
    3. Design and conduct a comprehensive and systematic validation procedure using collected experimental data to validate the developed simulation models under both fault-free and faulty operational conditions;
    4. Conduct a numerical study comparing two solution techniques, Powell's Hybrid (PH) and Levenberg-Marquardt (LM), in terms of robustness and accuracy (see the sketch after this abstract);
    5. Modify the thermal state of the existing building zone model in the HVACSIM+ library of components, revising it to consider the heat transmitted through glazing as a heat source for transient building zone load prediction.
    In this report, the literature on existing HVAC dynamic modeling environments and models, HVAC model validation methodologies, and fault modeling and validation methodologies is reviewed. The overall methodologies used for fault-free and fault model development and validation are introduced, and detailed model development and validation results for the two secondary systems, i.e., the fan coil unit and the dual duct system, are summarized. Experimental data, mostly from the Iowa Energy Center Energy Resource Station, are used to validate the models developed in this project. Satisfactory model performance in both fault-free and fault simulation studies is observed for all studied systems.
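
    Task 4's solver comparison can be illustrated with SciPy, which exposes both MINPACK solvers through scipy.optimize.root; the toy two-equation system below merely stands in for coupled component balance equations and is not from the report:

```python
import numpy as np
from scipy.optimize import root

def balance(x):
    """Toy nonlinear system standing in for coupled HVAC component
    equations (mass/energy balances); purely illustrative."""
    f1 = x[0] ** 2 + x[1] - 4.0
    f2 = np.exp(-x[0]) + x[1] ** 2 - 1.0
    return [f1, f2]

x0 = [1.0, 1.0]
for method in ("hybr", "lm"):  # Powell's Hybrid vs Levenberg-Marquardt
    sol = root(balance, x0, method=method)
    print(method, sol.success, sol.x, np.abs(balance(sol.x)).max())
```

    In practice such a comparison would sweep many starting points and component configurations, since the report's concern is SNSQ's instability at the start of a simulation.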

  3. Message Efficient Checkpointing and Rollback Recovery in Heterogeneous Mobile Networks

    NASA Astrophysics Data System (ADS)

    Jaggi, Parmeet Kaur; Singh, Awadhesh Kumar

    2016-06-01

    Heterogeneous networks provide an appealing way of expanding the computing capability of mobile networks by combining infrastructure-less mobile ad-hoc networks with infrastructure-based cellular mobile networks. The nodes in such a network range from low-power nodes to macro base stations and thus vary greatly in capabilities such as computation power and battery power. The nodes are susceptible to different types of transient and permanent failures, and therefore the algorithms designed for such networks need to be fault-tolerant. The article presents a checkpointing algorithm for the rollback recovery of mobile hosts in a heterogeneous mobile network. Checkpointing is a well-established approach to provide fault tolerance in static and cellular mobile distributed systems. However, the use of checkpointing for fault tolerance in a heterogeneous environment remains to be explored. The proposed protocol is based on Netzer and Xu's results on zigzag paths and zigzag cycles. Considering the heterogeneity prevalent in the network, an uncoordinated checkpointing technique is employed; yet, useless checkpoints are avoided without causing a high message overhead.

  4. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
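
    The isolation logic described, where the estimator matched to the actual fault stays below threshold while the others fire, can be sketched as follows; the array shapes and names are assumptions for illustration, not the paper's estimator design:

```python
import numpy as np

def isolate_fault(residuals, thresholds, fault_names):
    """Isolate the fault whose matched estimator keeps *all* residual
    components below threshold while every other estimator fires.

    residuals  : (n_estimators, n_components) array of |residuals|
    thresholds : same shape, adaptive thresholds per component
    """
    below = np.all(residuals < thresholds, axis=1)  # quiet estimators
    candidates = np.flatnonzero(below)
    if len(candidates) == 1:
        return fault_names[candidates[0]]
    return None  # no fault present, or faults not yet distinguishable
```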

  5. Multiple fault separation and detection by joint subspace learning for the health assessment of wind turbine gearboxes

    NASA Astrophysics Data System (ADS)

    Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Zi, Yanyang; Yan, Ruqiang

    2017-09-01

    The gearbox of a wind turbine (WT) has the dominant failure rate and the highest downtime loss among all WT subsystems. Thus, gearbox health assessment for maintenance cost reduction is of paramount importance. The concurrence of multiple faults in gearbox components is a common phenomenon due to fault induction mechanisms. This problem should be considered before planning to replace the components of the WT gearbox. Therefore, the key fault patterns should be reliably identified from noisy observation data for the development of an effective maintenance strategy. However, most existing studies on multiple fault diagnosis suffer from inappropriate division of fault information in order to satisfy rigorous decomposition principles or statistical assumptions, such as the smooth envelope principle of ensemble empirical mode decomposition or the mutual independence assumption of independent component analysis. Thus, this paper presents a joint subspace learning-based multiple fault detection (JSL-MFD) technique to construct different subspaces adaptively for different fault patterns. Its main advantage is its capability to learn multiple fault subspaces directly from the observation signal itself. It can also sparsely concentrate the feature information into a few dominant subspace coefficients, and it can eliminate noise by simply performing coefficient shrinkage operations. Consequently, multiple fault patterns are reliably identified by utilizing the maximum fault information criterion. The superiority of JSL-MFD in multiple fault separation and detection is comprehensively investigated and verified by the analysis of a data set from a 750 kW WT gearbox. Results show that JSL-MFD is superior to a state-of-the-art technique in detecting hidden fault patterns and enhancing detection accuracy.
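
    The coefficient shrinkage step mentioned above is commonly realized as soft-thresholding; a minimal sketch of that generic operator (not necessarily the authors' exact shrinkage rule):

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Soft-threshold shrinkage: coefficients with magnitude below lam
    are zeroed and the rest shrink toward zero, concentrating energy in
    the few dominant subspace coefficients and suppressing noise."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

# Hypothetical usage: project the observed signal onto each learned
# subspace basis, shrink the coefficients, and keep the subspace that
# retains the most energy as the dominant fault pattern.
```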

  6. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  7. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
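
    The feasibility check for linear polynomial constraints can be posed as a zero-objective linear program, mirroring the feasibility-checking idea borrowed from linear programming; a small sketch with scipy.optimize.linprog, where the constraint encoding over symbolic event times is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import linprog

def feasible(A_ub, b_ub):
    """Check feasibility of A_ub @ t <= b_ub over symbolic time
    variables t by solving an LP with a zero objective."""
    n = A_ub.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n)  # event times are nonnegative
    return res.status == 0  # 0 = terminated successfully (feasible)

# Example: t1 + 2 <= t2 and t2 <= t1 + 5 is a consistent constraint set.
A = np.array([[1.0, -1.0],   # t1 - t2 <= -2
              [-1.0, 1.0]])  # t2 - t1 <= 5
b = np.array([-2.0, 5.0])
print(feasible(A, b))  # True
```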

  8. Use of spectral gamma ray as a lithology guide for fault rocks: A case study from the Wenchuan Earthquake Fault Scientific Drilling project Borehole 4 (WFSD-4).

    PubMed

    Amara Konaté, Ahmed; Pan, Heping; Ma, Huolin; Qin, Zhen; Guo, Bo; Yevenyo Ziggah, Yao; Kounga, Claude Ernest Moussounda; Khan, Nasir; Tounkara, Fodé

    2017-10-01

    The main purpose of the Wenchuan Earthquake Fault Scientific Drilling project (WFSD) was to drill deep boreholes into the Yingxiu-Beichuan (YBF) and Anxian-Guanxian faults in order to gain a much better understanding of the physical and chemical properties, as well as the mechanics, of the faulting involved. Five boreholes, namely WFSD-1, WFSD-2, WFSD-3P, WFSD-3, and WFSD-4, were drilled over the course of the project. This study presents first-hand WFSD-4 data on the lithology (original rocks) and fault rocks obtained from the WFSD project. In an attempt to determine the physical properties and the clay minerals of the lithology and fault rocks, this study analyzed the spectral gamma ray logs (total gamma ray, potassium, thorium, and uranium) recorded in the WFSD-4 borehole on the northern segment of the YBF. The results are presented as cross-plots and a statistical multi-log analysis. Both lithology and fault rocks show variability in spectral gamma ray (SGR) log responses and clay minerals. This study demonstrates the capabilities of SGR logs for well-logging of earthquake faults and shows that SGR logs, together with other logs and drill-hole core descriptions, provide a useful method for characterizing lithology and fault rocks.

  9. A remote sensing study of active folding and faulting in southern Kerman province, S.E. Iran

    NASA Astrophysics Data System (ADS)

    Walker, Richard Thomas

    2006-04-01

    Geomorphological observations reveal a major oblique fold-and-thrust belt in Kerman province, S.E. Iran. The active faults appear to link the Sabzevaran right-lateral strike-slip fault in southeast Iran to other strike-slip faults within the interior of the country and may provide the means of distributing right-lateral shear between the Zagros and Makran mountains over a wider region of central Iran. The Rafsanjan fault is manifest at the Earth's surface as right-lateral strike-slip fault scarps and folding in alluvial sediments. Height changes across the anticlines, and widespread incision of rivers, are likely to result from hanging-wall uplift above thrust faults at depth. Scarps in recent alluvium along the northern margins of the folds suggest that the thrusts reach the surface and are active at the present day. The observations from Rafsanjan are used to identify similar late Quaternary faulting elsewhere in Kerman province near the towns of Mahan and Rayen. No instrumentally recorded destructive earthquakes have occurred in the study region, and only one historical earthquake (Lalehzar, 1923) is recorded. In addition, GPS studies show that present-day rates of deformation are low. However, fault structures in southern Kerman province do appear to be active in the late Quaternary and may be capable of producing destructive earthquakes in the future. This study shows how widely available remote sensing data can be used to provide information on the distribution of active faulting across large areas of deformation.

  10. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis

    PubMed Central

    Castro, Alfonso; Sedano, Andrés A.; García, Fco. Javier; Villoslada, Eduardo

    2017-01-01

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking fault diagnosis system and Business Processes for Telefónica’s global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other different service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam. PMID:29283398

  11. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis.

    PubMed

    Castro, Alfonso; Sedano, Andrés A; García, Fco Javier; Villoslada, Eduardo; Villagrá, Víctor A

    2017-12-28

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking fault diagnosis system and Business Processes for Telefónica's global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other different service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam.

  12. Modeling and characterization of partially inserted electrical connector faults

    NASA Astrophysics Data System (ADS)

    Tokgöz, Çağatay; Dardona, Sameh; Soldner, Nicholas C.; Wheeler, Kevin R.

    2016-03-01

    Faults within electrical connectors are prominent in avionics systems due to improper installation, corrosion, aging, and strained harnesses. These faults usually start off as undetectable with existing inspection techniques and increase in magnitude during the component lifetime. Detection and modeling of these faults are significantly more challenging than hard failures such as open and short circuits. Hence, enabling the capability to locate and characterize the precursors of these faults is critical for timely preventive maintenance and mitigation well before hard failures occur. In this paper, an electrical connector model based on a two-level nonlinear least squares approach is proposed. The connector is first characterized as a transmission line, broken into key components such as the pin, socket, and connector halves. Then, the fact that the resonance frequencies of the connector shift as insertion depth changes from a fully inserted to a barely touching contact is exploited. The model precisely captures these shifts by varying only two length parameters. It is demonstrated that the model accurately characterizes a partially inserted connector.
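
    The abstract does not give the model equations, so the following is only a hedged, one-parameter illustration of recovering an insertion length from resonance-frequency shifts by nonlinear least squares; the quarter-wave model and all constants are invented stand-ins for the paper's two-level transmission-line model:

```python
import numpy as np
from scipy.optimize import least_squares

V = 2.0e8     # assumed propagation speed in the line (m/s)
L_LINE = 0.5  # assumed fixed harness length (m)

def resonances(depth, n_modes=5):
    """Quarter-wave-style resonances of the line plus contact overlap;
    shrinking `depth` (pin insertion) shortens the electrical length
    and shifts the resonances upward."""
    k = np.arange(1, n_modes + 1)
    return (2 * k - 1) * V / (4.0 * (L_LINE + depth))

def fit_depth(measured):
    """Recover the insertion-depth parameter from measured resonances."""
    res = least_squares(lambda d: resonances(d[0]) - measured,
                        x0=[0.01], bounds=([0.0], [0.05]))
    return res.x[0]

print(fit_depth(resonances(0.003)))  # recovers ~0.003 m
```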

  13. Diagnosis of Misalignment in Overhung Rotor using the K-S Statistic and A2 Test

    NASA Astrophysics Data System (ADS)

    Garikapati, Diwakar; Pacharu, RaviKumar; Munukurthi, Rama Satya Satyanarayana

    2018-02-01

    Vibration measurement at the bearings of rotating machinery has become a useful technique for diagnosing incipient fault conditions. In particular, vibration measurement can be used to detect unbalance in rotor, bearing failure, gear problems or misalignment between a motor shaft and coupled shaft. This is a particular problem encountered in turbines, ID fans and FD fans used for power generation. For successful fault diagnosis, it is important to adopt motor current signature analysis (MCSA) techniques capable of identifying the faults. It is also useful to develop techniques for inferring information such as the severity of fault. It is proposed that modeling the cumulative distribution function of motor current signals with respect to appropriate theoretical distributions, and quantifying the goodness of fit with the Kolmogorov-Smirnov (KS) statistic and A2 test offers a suitable signal feature for diagnosis. This paper demonstrates the successful comparison of the K-S feature and A2 test for discriminating the misalignment fault from normal function.
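
    Both goodness-of-fit statistics are available in SciPy; a minimal sketch on synthetic stand-in "motor current" samples (comparison against a healthy baseline, as assumed here, is the diagnosis step; note the K-S p-value is only approximate when the distribution parameters are estimated from the same data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
current = rng.normal(10.0, 1.0, 2048)  # stand-in motor current samples

# K-S statistic: max distance between the empirical CDF and a fitted normal.
mu, sigma = current.mean(), current.std(ddof=1)
ks_stat, p = stats.kstest(current, "norm", args=(mu, sigma))

# Anderson-Darling A2: weights the CDF tails more heavily than K-S.
a2 = stats.anderson(current, dist="norm").statistic

print(f"KS = {ks_stat:.4f} (p = {p:.3f}), A2 = {a2:.4f}")
# Misalignment distorts the current distribution, inflating both
# statistics relative to a healthy baseline.
```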

  14. Fuzzy-Wavelet Based Double Line Transmission System Protection Scheme in the Presence of SVC

    NASA Astrophysics Data System (ADS)

    Goli, Ravikumar; Shaik, Abdul Gafoor; Tulasi Ram, Sankara S.

    2015-06-01

    The needs to increase power transfer capability, utilize available transmission lines efficiently, improve power system controllability and stability, damp power oscillations, and compensate voltage have driven the development of Flexible AC Transmission System (FACTS) devices in recent decades. Shunt FACTS devices can have adverse effects on distance protection in both steady-state and transient periods. Severe underreaching, caused by current injection at the point of connection to the system, is the most important relay problem; current absorption by the compensator, conversely, leads to relay overreach. This work presents an efficient wavelet-transform-based method for fault detection, classification, and location using a fuzzy logic technique, which is almost independent of fault impedance, fault distance, and fault inception angle. The proposed protection scheme is found to be fast, reliable, and accurate for various types of faults on transmission lines, with and without a Static Var Compensator, at different locations and with various inception angles.
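
    A hedged sketch of the kind of wavelet-energy features such a scheme might feed to a fuzzy stage; the wavelet choice, decomposition level, and membership bounds are illustrative assumptions, not the authors' design:

```python
import numpy as np
import pywt

def fault_features(current, wavelet="db4", level=3):
    """Energy of the detail coefficients at each scale; fault-induced
    transients raise the detail-band energies."""
    coeffs = pywt.wavedec(current, wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]  # skip approximation

def fuzzy_fault_grade(energy, lo=1.0, hi=10.0):
    """Ramp membership function: 0 below lo, 1 above hi, linear between;
    the grade feeds the detection/classification rules."""
    return float(np.clip((energy - lo) / (hi - lo), 0.0, 1.0))
```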

  15. Geophysical setting of the 2000 ML 5.2 Yountville, California, earthquake: Implications for seismic Hazard in Napa Valley, California

    USGS Publications Warehouse

    Langenheim, V.E.; Graymer, R.W.; Jachens, R.C.

    2006-01-01

    The epicenter of the 2000 ML 5.2 Yountville earthquake was located 5 km west of the surface trace of the West Napa fault, as defined by Helley and Herd (1977). On the basis of the re-examination of geologic data and the analysis of potential field data, the earthquake occurred on a strand of the West Napa fault, the main basin-bounding fault along the west side of Napa Valley. Linear aeromagnetic anomalies and a prominent gravity gradient extend the length of the fault to the latitude of Calistoga, suggesting that this fault may be capable of larger-magnitude earthquakes. Gravity data indicate an approximately 2-km-deep basin centered on the town of Napa, where damage was concentrated during the Yountville earthquake. It most likely played a minor role in enhancing shaking during this event but may lead to enhanced shaking caused by wave trapping during a larger-magnitude earthquake.

  16. Fault detection and accommodation testing on an F100 engine in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Myers, L. P.; Baer-Riedhart, J. L.; Maxwell, M. D.

    1985-01-01

    The fault detection and accommodation (FDA) methodology for digital engine-control systems may range from simple comparisons of redundant parameters to the more complex and sophisticated observer models of the entire engine system. Evaluations of the various FDA schemes are done using analytical methods, simulation, and limited-altitude-facility testing. Flight testing of the FDA logic has been minimal because of the difficulty of inducing realistic faults in flight. A flight program was conducted to evaluate the fault detection and accommodation capability of a digital electronic engine control in an F-15 aircraft. The objective of the flight program was to induce selected faults and evaluate the resulting actions of the digital engine controller. Comparisons were made between the flight results and predictions. Several anomalies were found in flight and during the ground test. Simulation results showed that the inducement of dual pressure failures was not feasible since the FDA logic was not designed to accommodate these types of failures.

  17. Design Guidelines for Shielding Effectiveness, Current Carrying Capability, and the Enhancement of Conductivity of Composite Materials

    NASA Technical Reports Server (NTRS)

    Evans, R. W.

    1997-01-01

    These guidelines address the electrical properties of composite materials which may have an effect on electromagnetic compatibility (EMC). The main topics of the guidelines include the electrical shielding, fault current return, and lightning protection capabilities of graphite reinforced polymers, since they are somewhat conductive but may require enhancement to be adequate for EMC purposes. Shielding effectiveness depends heavily upon the conductivity of the material. Graphite epoxy can provide useful shielding against RF signals, but it is approximately 1,000 times more resistive than good conductive metals. The reduced shielding effectiveness is significant but is still useful in many cases. The primary concern is with gaps and seams in the material, just as it is with metal. The current carrying capability of graphite epoxy is adequate for dissipating static charges, but fault currents through graphite epoxy may cause fire at the shorting contact and at joints. The effect of lightning on selected graphite epoxy material and mating surfaces is described, and protection methods are reviewed.

  18. A numerical investigation of pumping-test responses from contiguous aquifers

    NASA Astrophysics Data System (ADS)

    Rafini, Silvain; Chesnaux, Romain; Ferroud, Anouck

    2017-05-01

    Adequate groundwater management requires models capable of representing the heterogeneous nature of aquifers. A key point is the theoretical knowledge of flow behaviour in various heterogeneous archetypal conditions, using analytically or numerically based models. This study numerically investigates transient pressure transfers between linearly contiguous homogeneous domains with non-equal hydraulic properties, optionally separated by a conductive fault. Responses to pumping are analysed in terms of the time-variant flow dimension, n. Two radial stages are predicted (n: 2-2), with a positive or negative vertical offset depending on the transmissivity ratio between domains. A transitional n = 4 segment occurs when the non-pumped domain is more transmissive (n: 2-4-2), and a fractional flow segment occurs when the interface is a fault (n: 2-4-1.5-2). The hydrodynamics are generally governed by the transmissivity ratio; the impact of the storativity ratio is limited. The late stabilization of the drawdown log-derivative, recorded at any well, does not tend to reflect the local transmissivity but rather that of the higher-transmissivity region, possibly distant and blind, as it predominantly supplies groundwater to the well. This study provides insights into the behaviour of non-uniform aquifers and into theoretical responses that can help practitioners detect such conditions in nature.
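
    One common way to estimate the time-variant flow dimension from test data uses the Generalized Radial Flow relation n = 2(1 - m), where m is the log-log slope of the drawdown log-derivative; a minimal sketch, assuming smooth, monotonically increasing drawdown:

```python
import numpy as np

def flow_dimension(t, s):
    """Flow dimension n(t) from pumping-test drawdown s(t) via the
    Generalized Radial Flow relation n = 2(1 - m), where m is the
    log-log slope of the drawdown log-derivative ds/d(ln t)."""
    ds_dlnt = np.gradient(s, np.log(t))          # drawdown log-derivative
    m = np.gradient(np.log(ds_dlnt), np.log(t))  # its log-log slope
    return 2.0 * (1.0 - m)

# Radial (Theis-like) behavior has a flat derivative (m = 0), so
# n -> 2, matching the "n: 2-2" stages described in the abstract.
```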

  19. The numerical modelling and process simulation for the fault diagnosis of rotary kiln incinerator.

    PubMed

    Roh, S D; Kim, S W; Cho, W S

    2001-10-01

    Numerical modelling and process simulation for the fault diagnosis of a rotary kiln incinerator were accomplished. In the numerical modelling, the two models applied within the kiln are a combustion chamber model, including the mass and energy balance equations for the two combustion chambers, and a 3D thermal model. The combustion chamber model predicts the temperature within the kiln, the flue gas composition, the flux, and the heat of combustion. Using the combustion chamber model and the 3D thermal model, production rules for the process simulation can be obtained through an interrelation analysis between control and operation variables. The process simulation of the kiln runs on these production rules for automatic operation. The process simulation aims to provide fundamental solutions to the problems in the incineration process by introducing an online expert control system that brings integrity to process control and management. Knowledge-based expert control systems use symbolic logic and heuristic rules to find solutions for various types of problems. The system was implemented as a hybrid intelligent expert control system, mutually connected with the process control systems, with the capability of process diagnosis, analysis, and control.
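
    A toy production-rule fragment in the spirit of the rule-based operation described; every variable name, threshold, and rule here is an invented placeholder, not the paper's rule base:

```python
def diagnose(obs):
    """Tiny production-rule sketch: each rule is a condition over
    observed process variables plus an operational recommendation."""
    rules = [
        (lambda o: o["kiln_temp"] < 850.0 and o["o2"] > 10.0,
         "under-firing: raise fuel feed"),
        (lambda o: o["kiln_temp"] > 1100.0,
         "overheating: reduce fuel or increase excess air"),
        (lambda o: o["co"] > 100.0,
         "incomplete combustion: increase air flux"),
    ]
    return [advice for cond, advice in rules if cond(obs)]

print(diagnose({"kiln_temp": 820.0, "o2": 12.5, "co": 150.0}))
```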

  20. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon

    2009-01-01

    Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
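
    The four simulated fault modes map naturally onto a small injection routine; the sketch below mirrors bias, drift, scaling, and dropout in a generic way (parameter values and the freeze-style dropout are assumptions, not the paper's fault models):

```python
import numpy as np

def inject_fault(signal, kind, t0, magnitude=1.0, rate=0.001):
    """Overlay a common sensor-fault mode onto healthy data from
    sample t0 onward."""
    out = np.asarray(signal, dtype=float).copy()
    n = len(out)
    if kind == "bias":
        out[t0:] += magnitude                 # constant offset
    elif kind == "drift":
        out[t0:] += rate * np.arange(n - t0)  # slowly growing error
    elif kind == "scaling":
        out[t0:] *= magnitude                 # gain error
    elif kind == "dropout":
        out[t0:] = out[t0]  # sensor freezes at its last value
    else:
        raise ValueError(f"unknown fault kind: {kind}")
    return out
```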

  1. Solar Photovoltaic (PV) Distributed Generation Systems - Control and Protection

    NASA Astrophysics Data System (ADS)

    Yi, Zhehan

    This dissertation proposes a comprehensive control, power management, and fault detection strategy for solar photovoltaic (PV) distributed generation. Battery storage is typically employed in PV systems to mitigate the power fluctuation caused by unstable solar irradiance. With AC and DC loads, a PV-battery system can be treated as a hybrid microgrid which contains both DC and AC power resources and buses. In this thesis, a control and power management system (CAPMS) for PV-battery hybrid microgrids is proposed, which provides 1) a DC and AC bus voltage and AC frequency regulating scheme and controllers designed to track set points; 2) a power flow management strategy in the hybrid microgrid to achieve system generation and demand balance in both grid-connected and islanded modes; and 3) smooth transition control during grid reconnection by frequency and phase synchronization between the main grid and the microgrid. Due to the increasing demand for PV power, the scale of PV systems is getting larger and fault detection in PV arrays becomes challenging. High-impedance faults, low-mismatch faults, and faults occurring in low irradiance conditions tend to be hidden due to low fault currents, particularly when a PV maximum power point tracking (MPPT) algorithm is in service. If they remain undetected, these faults can considerably lower the output energy of solar systems, damage the panels, and potentially cause fire hazards. In this dissertation, fault detection challenges in PV arrays are analyzed in depth, considering the interrelations among the characteristics of PV, interactions with MPPT algorithms, and the nature of solar irradiance. Two fault detection schemes are then designed as attempts to address these technical issues, which detect faults inside PV arrays accurately even under challenging circumstances, e.g., faults in low irradiance conditions or high-impedance faults. Taking advantage of multi-resolution signal decomposition (MSD), a powerful signal processing technique based on discrete wavelet transformation (DWT), the first attempt extracts the features of both line-to-line (L-L) and line-to-ground (L-G) faults and employs a fuzzy inference system (FIS) for the decision-making stage of fault detection. This scheme is then improved as the second attempt by further studying the system's behavior during L-L faults, extracting more efficient fault features, and devising a more advanced decision-making stage: the two-stage support vector machine (SVM). For the first time, the two-stage SVM method is proposed in this dissertation to detect L-L faults in PV systems with satisfactory accuracy. Numerous simulation and experimental case studies are carried out to verify the proposed control and protection strategies. The simulation environment is set up using the PSCAD/EMTDC and Matlab/Simulink software packages. Experimental case studies are conducted in a PV-battery hybrid microgrid using the dSPACE real-time controller to demonstrate the ease of hardware implementation and the controller performance. Another small-scale grid-connected PV system is set up to verify both fault detection algorithms, which demonstrate promising performance and fault detection accuracy.
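
    A hedged sketch of the shape of the MSD-feature and two-stage SVM pipeline (wavelet energies feeding two cascaded classifiers); the feature definition, kernels, and label conventions are assumptions for illustration, not the dissertation's exact design:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def msd_features(i_pv, wavelet="db4", level=4):
    """Multi-resolution decomposition of a PV string current: energy
    per detail band serves as the fault feature vector."""
    coeffs = pywt.wavedec(i_pv, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])

# Stage 1 separates faulty from normal; stage 2 classifies the fault
# type among faulty samples only. X (numpy array), y_fault, and y_type
# are placeholders for labeled training data.
stage1 = SVC(kernel="rbf")  # normal (0) vs faulty (1)
stage2 = SVC(kernel="rbf")  # e.g. L-L vs L-G, trained on faulty subset

def train(X, y_fault, y_type):
    stage1.fit(X, y_fault)
    faulty = y_fault == 1
    stage2.fit(X[faulty], y_type[faulty])

def predict(x):
    if stage1.predict([x])[0] == 0:
        return "normal"
    return stage2.predict([x])[0]
```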

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riesen, Rolf E.; Bridges, Patrick G.; Stearley, Jon R.

    Next-generation exascale systems, those capable of performing a quintillion (10{sup 18}) operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults, like checkpoint/restart, the dominant fault tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches over a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, different failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.
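
    Hash-based incremental checkpointing reduces commit time by writing only the blocks whose digest has changed since the last checkpoint; a minimal CPU-side sketch of that idea (the work evaluated a GPU-accelerated, probabilistic variant whose details are not reproduced here):

```python
import hashlib

class IncrementalCheckpointer:
    """Write only blocks whose digest changed since the last checkpoint,
    shrinking commit time at the cost of a small probabilistic risk of
    an undetected change from a hash collision."""
    def __init__(self):
        self.digests = {}

    def checkpoint(self, blocks, store):
        """blocks: dict block_id -> bytes of application state;
        store: dict standing in for the checkpoint file."""
        written = 0
        for bid, data in blocks.items():
            d = hashlib.sha256(data).digest()
            if self.digests.get(bid) != d:
                store[bid] = data
                self.digests[bid] = d
                written += 1
        return written  # number of blocks actually committed
```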

  3. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were:
    * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that a potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist.
    * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults.
    * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake.
    * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it.
    * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  4. Fault creep and strain partitioning in Trinidad-Tobago: Geodetic measurements, models, and origin of creep

    NASA Astrophysics Data System (ADS)

    La Femina, P.; Weber, J. C.; Geirsson, H.; Latchman, J. L.; Robertson, R. E. A.; Higgins, M.; Miller, K.; Churches, C.; Shaw, K.

    2017-12-01

    We studied active faults in Trinidad and Tobago in the Caribbean-South American (CA-SA) transform plate boundary zone using episodic GPS (eGPS) data from 19 sites and continuous GPS (cGPS) data from 8 sites, and then modeled these data using a series of simple screw dislocation models. Our best-fit model for interseismic (between major earthquakes) fault slip requires: 12-15 mm/yr of right-lateral movement and very shallow locking (0.2 ± 0.2 km; essentially creep) across the Central Range Fault (CRF); 3.4 +0.3/-0.2 mm/yr across the Soldado Fault in south Trinidad; and 3.5 +0.3/-0.2 mm/yr of dextral shear on fault(s) between Trinidad and Tobago. The upper-crustal faults in Trinidad show very little seismicity (1954-present, from the local network) and do not appear to have generated significant historic earthquakes. However, paleoseismic studies indicate that the CRF ruptured between 2710 and 500 yr B.P. and thus was recently capable of storing elastic strain. Together, these data suggest spatial and/or temporal fault segmentation on the CRF. The CRF marks a physical boundary between rocks associated with thermogenically generated petroleum and over-pressured fluids in south and central Trinidad and rocks containing only biogenic gas to the north, and a long string of active mud volcanoes aligns with the trace of the Soldado Fault along Trinidad's south coast. Fluid (oil and gas) overpressure, as an alternative or in addition to weak mineral phases in the fault zone, may thus cause the creep on the CRF and the lack of seismicity that we observe.

  5. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type (deep space or low Earth orbit, robotic or human spaceflight), Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of the supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the immaturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, through the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  6. The south San Fernando Valley fault, Los Angeles California: Myth or reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slosson, J.E.; Phipps, M.B.; Werner, S.L.

    1993-04-01

    Based on related geomorphic and hydrogeologic evidence, the authors have identified the probable existence of a fault system and related Riedel faults along the southerly side of the San Fernando Valley (SFV), Los Angeles, CA. This fault system, which appears to be aligned along a series of pressure ridges, artesian springs, and warm water wells, is termed the South SFV Fault for the purpose of this study. The trace of this fault is believed to roughly follow the southern extent of the SFV near the northern base of the east-west trending Santa Monica Mountains. The SFV is a fault-affected synclinal structure bounded on the north, east, and west by well-recognized and documented fault systems. The southern boundary of the SFV is defined by the complexly faulted anticlinal structure of the bordering Santa Monica Mountains. This presentation will suggest that the southern boundary of the SFV (syncline) is controlled by faulting similar to the fault-controlled north, east, and west boundaries. The authors believe that the trace of the fault system in the southeastern portion of the SFV has been somewhat modified and concealed by the erosion and deposition of coarse-grained sediments derived from the vast granitic-metamorphic complex of the San Gabriel Mountains to the north, the major watershed, and in part by sediment derived from similar rock types to the east and southeast. The western half of the SFV has been largely filled with fine-grained sediments derived from erosion of the surrounding sedimentary uplands. Further modification has occurred due to urbanization of the area. With reference to the fault-affected boundaries on the west, north, and east sides of the SFV, these structures are all considered youthful and capable of producing earthquakes, as the SFF did in 1971. The south-bounding fault may fall within a similar category. Accordingly, the authors believe that the proposed South SFV Fault has been a tectonic feature since the Pliocene epoch.

  7. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.; Johnson, Stephen B.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASAHDBK- 1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provide hints to a modeling approach to provide formal connectivity between the nominal (SE), and off-nominal (SHM and FM) aspects of functions and designs. This paper describes a formal modeling approach to the initial phases of the development process that integrates the nominal and off-nominal perspectives in a model that unites SE goals and functions of with the failure to achieve goals and functions (SHM/FM). This methodology and corresponding model, known as a Goal-Function Tree (GFT), provides a means to represent, decompose, and elaborate system goals and functions in a rigorous manner that connects directly to design through use of state variables that translate natural language requirements and goals into logical-physical state language. The state variable-based approach also provides the means to directly connect FM to the design, by specifying the range in which state variables must be controlled to achieve goals, and conversely, the failures that exist if system behavior go out-of-range. This in turn allows for the systems engineers and SHM/FM engineers to determine which state variables to monitor, and what action(s) to take should the system fail to achieve that goal. In sum, the GFT representation provides a unified approach to early-phase SE and FM development. This representation and methodology has been successfully developed and implemented using Systems Modeling Language (SysML) on the NASA Space Launch System (SLS) Program. 
It enabled early design trade studies of failure detection coverage to ensure that all crew-threatening failures are detectable. The representation maps directly both to FM algorithm designs and to the failure scenario definitions needed for design analysis and testing. The GFT representation also provided the basis for mapping abort triggers into scenarios, both of which were needed for initial, successful quantitative analyses of abort effectiveness (detection of and response to crew-threatening events).
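
    As a toy illustration of the state-variable idea, the Python sketch below (all names invented, and far simpler than the SysML implementation described above) represents each goal as an allowable range on a state variable and reports a failure whenever the monitored value, or any supporting sub-goal, falls out of range:

        # A minimal GFT-style goal check (illustrative sketch; class and field
        # names are invented here, not taken from the paper or from SysML).
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Goal:
            name: str
            variable: str      # state variable this goal constrains
            low: float         # allowable range for that variable
            high: float
            children: List["Goal"] = field(default_factory=list)

            def evaluate(self, state: dict) -> bool:
                """A goal is achieved if its state variable is in range and all
                supporting child goals are achieved; a missing value fails."""
                value = state.get(self.variable, float("nan"))
                own_ok = self.low <= value <= self.high
                return own_ok and all(c.evaluate(state) for c in self.children)

        # Example: a tank-pressure goal supported by a valve-position goal.
        pressure = Goal("maintain_pressure", "tank_pressure_kPa", 180.0, 220.0,
                        children=[Goal("valve_open", "valve_position", 0.4, 0.6)])
        print(pressure.evaluate({"tank_pressure_kPa": 200.0, "valve_position": 0.5}))  # True
        print(pressure.evaluate({"tank_pressure_kPa": 250.0, "valve_position": 0.5}))  # False: failure detected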

  8. Lacustrine Paleoseismology Reveals Earthquake Segmentation of the Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Howarth, J. D.; Fitzsimons, S.; Norris, R.; Langridge, R. M.

    2013-12-01

    Transform plate boundary faults accommodate high rates of strain and are capable of producing large (Mw>7.0) to great (Mw>8.0) earthquakes that pose significant seismic hazard. The Alpine Fault in New Zealand is one of the longest, straightest and fastest slipping plate boundary transform faults on Earth and produces earthquakes at quasi-periodic intervals. Theoretically, the fault's linearity, isolation from other faults and quasi-periodicity should promote the generation of earthquakes that have similar magnitudes over multiple seismic cycles. We test the hypothesis that the Alpine Fault produces quasi-regular earthquakes that contiguously rupture the southern and central fault segments, using a novel lacustrine paleoseismic proxy to reconstruct spatial and temporal patterns of fault rupture over the last 2000 years. In three lakes located close to the Alpine Fault the last nine earthquakes are recorded as megaturbidites formed by co-seismic subaqueous slope failures, which occur when shaking exceeds Modified Mercalli (MM) VII. When the fault ruptures adjacent to a lake the co-seismic megaturbidites are overlain by stacks of turbidites produced by enhanced fluvial sediment fluxes from earthquake-induced landslides. The turbidite stacks record shaking intensities of MM>IX in the lake catchments and can be used to map the spatial location of fault rupture. The lake records can be dated precisely, facilitating meaningful along strike correlations, and the continuous records allow earthquakes closely spaced in time on adjacent fault segments to be distinguished. The results show that while multi-segment ruptures of the Alpine Fault occurred during most seismic cycles, sequential earthquakes on adjacent segments and single segment ruptures have also occurred. The complexity of the fault rupture pattern suggests that the subtle variations in fault geometry, sense of motion and slip rate that have been used to distinguish the central and southern segments of the Alpine Fault can inhibit rupture propagation, producing a soft earthquake segment boundary. The study demonstrates the utility of lakes as paleoseismometers that can be used to reconstruct the spatial and temporal patterns of earthquakes on a fault.

  9. Seismic sources in El Salvador. A geological and geodetic contribution

    NASA Astrophysics Data System (ADS)

    Alonso-Henar, J.; Martínez-Díaz, J. J.; Benito, B.; Alvarez-Gomez, J. A.; Canora, C.; Capote, R.; Staller, A.; Tectónica Activa, Paleosismicidad y. Riesgos Asociados UCM-910368

    2013-05-01

    The El Salvador Fault Zone (ESFZ) is a deformation band 150 km long and 20 km wide within the Salvadorian volcanic arc. This shear band distributes deformation between main strike-slip faults trending N90°-100°E and around 30 km long, and secondary normal faults trending between N120°E and N170°E. Westward, the ESFZ is relieved by the Jalpatagua Fault; eastward, it becomes less distinct, disappearing at the Golfo de Fonseca. The ESFZ deforms and offsets Quaternary deposits with right-lateral movement on its main segments. Five segments have been proposed for the whole fault zone, from the Jalpatagua Fault to the Golfo de Fonseca. Paleoseismic studies in the Berlin and San Vicente segments reveal a substantial amount of Quaternary deformation. In fact, the San Vicente segment was the source of the destructive earthquake of February 13, 2001. In this work we propose 18 capable seismic sources within El Salvador. The slip rate of each source has been obtained through the combination of GPS and paleoseismic data where possible. We have also calculated the maximum theoretical intensities produced by the maximum earthquakes associated with each fault, taking into account several scenarios with possible surface rupture lengths of up to 50 km and Mw 7.6 on some of the strike-slip faults within the ESFZ.

  10. Ste. Genevieve Fault Zone, Missouri and Illinois. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.J.; Lumm, D.K.

    1985-07-01

    The Ste. Genevieve Fault Zone is a major structural feature which strikes NW-SE for about 190 km on the NE flank of the Ozark Dome. There is up to 900 m of vertical displacement on high-angle normal and reverse faults in the fault zone. At both ends the Ste. Genevieve Fault Zone dies out into a monocline. Two periods of faulting occurred: the first in late Middle Devonian time and the second from latest Mississippian through early Pennsylvanian time, with possible minor post-Pennsylvanian movement. No evidence was found to support the hypothesis that the Ste. Genevieve Fault Zone is part of a northwestward extension of the late Precambrian-early Cambrian Reelfoot Rift. The magnetic and gravity anomalies cited in support of the ''St. Louis arm'' of the Reelfoot Rift possibly reflect deep crustal features underlying, and older than, the volcanic terrain of the St. Francois Mountains (1.2 to 1.5 billion years old). In regard to neotectonics, no displacements of Quaternary sediments have been detected, but small earthquakes occur from time to time along the Ste. Genevieve Fault Zone. Many faults in the zone appear capable of slipping under the current stress regime of east-northeast to west-southwest horizontal compression. We conclude that the zone may continue to experience small earth movements, but catastrophic quakes similar to those at New Madrid in 1811-12 are unlikely. 32 figs., 1 tab.

  11. The Design of a Fault-Tolerant COTS-Based Bus Architecture

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Burt, John B.; Tai, Ann T.

    1999-01-01

    In this paper, we report our experiences and findings on the design of a fault-tolerant bus architecture comprised of two COTS buses, the IEEE 1394 and the I2C. This fault-tolerant bus is the backbone system bus for the avionics architecture of the X2000 program at the Jet Propulsion Laboratory. COTS buses are attractive because of the availability of low-cost commercial products. However, they are not specifically designed for highly reliable applications such as long-life deep-space missions. The X2000 design team has devised a multi-level fault tolerance approach to compensate for this shortcoming of COTS buses. First, the approach enhances the fault tolerance capabilities of the IEEE 1394 and I2C buses by adding a layer of fault handling hardware and software. Second, algorithms are developed to enable the IEEE 1394 and I2C buses to assist each other in isolating and recovering from faults. Third, the set of IEEE 1394 and I2C buses is duplicated to further enhance system reliability. The X2000 design team has paid special attention to guaranteeing that the fault tolerance provisions do not cause the bus design to deviate from the commercial standard specifications; otherwise, the economic attractiveness of using COTS would be diminished. The hardware and software design of the X2000 fault-tolerant bus is being implemented, and flight hardware will be delivered to the ST4 and Europa Orbiter missions.

  12. Directivity models produced for the Next Generation Attenuation West 2 (NGA-West 2) project

    USGS Publications Warehouse

    Spudich, Paul A.; Watson-Lamprey, Jennie; Somerville, Paul G.; Bayless, Jeff; Shahi, Shrey; Baker, Jack W.; Rowshandel, Badie; Chiou, Brian

    2012-01-01

    Five new directivity models are being developed for the NGA-West 2 project. All are based on the NGA-West 2 database, which is considerably expanded from the original NGA-West database, containing about 3,000 more records from earthquakes having finite-fault rupture models. All of the new directivity models have parameters based on fault dimension in km, not normalized fault dimension. This feature removes a peculiarity of previous models that made them inappropriate for modeling large-magnitude events on long strike-slip faults. Two models are explicitly, and one is implicitly, 'narrowband' models, in which the effect of directivity does not monotonically increase with spectral period but instead peaks at a specific period that is a function of earthquake magnitude. These narrowband functional forms are capable of simulating directivity over a wider range of earthquake magnitude than previous models. The functional forms of the five models are presented.

  13. Dual-quaternion based fault-tolerant control for spacecraft formation flying with finite-time convergence.

    PubMed

    Dong, Hongyang; Hu, Qinglei; Ma, Guangfu

    2016-03-01

    Results are presented from the development of a control system for spacecraft formation proximity operations between a target and a chaser. In particular, a coupled model using dual quaternions is employed to describe the proximity problem of spacecraft formation, and a nonlinear adaptive fault-tolerant feedback control law is developed to enable the chaser spacecraft to track the position and attitude of the target even in the presence of actuator faults. The multiple-task capability of the proposed control system is further demonstrated in the presence of disturbances and parametric uncertainties as well. In addition, the practical finite-time stability of the closed-loop system is guaranteed theoretically under the designed control law. Numerical simulation of the proposed method is presented to demonstrate its advantages with respect to interference suppression, fast tracking, fault tolerance, and practical finite-time stability.
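
    For readers unfamiliar with the representation, a dual quaternion packs rotation and translation into a pair (real, dual) of ordinary quaternions, and poses compose by dual-quaternion multiplication. The Python sketch below illustrates only this representation (function names are ours; it is not the paper's controller):

        # Dual-quaternion pose representation: q = q_rot + eps * q_dual with
        # q_dual = 0.5 * (0, t) * q_rot. Sketch only, not the paper's control law.
        import numpy as np

        def qmul(a, b):
            """Hamilton product of quaternions stored as [w, x, y, z]."""
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def dq_from_pose(q_rot, t):
            """Build a dual quaternion (real, dual) from rotation and translation."""
            return q_rot, 0.5 * qmul(np.array([0.0, *t]), q_rot)

        def dq_mul(a, b):
            """(ar + eps ad)(br + eps bd) = ar br + eps (ar bd + ad br)."""
            ar, ad = a
            br, bd = b
            return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

        # Composing two pure translations sums them in the dual part.
        identity = np.array([1.0, 0.0, 0.0, 0.0])
        p1 = dq_from_pose(identity, [1.0, 0.0, 0.0])
        p2 = dq_from_pose(identity, [0.0, 2.0, 0.0])
        print(dq_mul(p1, p2))  # dual part 0.5*[0, 1, 2, 0] -> translation (1, 2, 0)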

  14. Increase in fault ride through capability of direct drive permanent magnet based wind farm using VSC-HVDC

    NASA Astrophysics Data System (ADS)

    Maleki, Hesamaldin; Ramachandaramurthy, V. K.; Lak, Moein

    2013-06-01

    The burning of fossil fuels and the resulting greenhouse gas emissions contribute to global warming. This has led governments to explore the use of green energy instead of fossil fuels. The availability of wind has made wind technology a viable alternative for generating electrical power, and many parts of the world, especially Europe, are experiencing a growth in wind farms. However, as the number of grid-connected wind farms increases, the power quality and voltage stability of the grid become a matter of concern. In this paper, a VSC-HVDC control strategy is proposed that enables the wind farm to ride through faults and regulate voltage for the relevant fault types. The results show that the wind turbine output voltage fulfills the E.ON grid code requirements when subjected to a three-phase-to-ground fault. Hence, continuous operation of the wind farm is achieved.

  15. Slip-pulse rupture behavior on a 2 meter granite fault

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Beeler, Nicholas M.

    2015-01-01

    We describe observations of dynamic rupture events that spontaneously arise in meter-scale laboratory earthquake experiments. While low-frequency slip of the granite sample occurs in a relatively uniform and crack-like manner, instruments capable of detecting high-frequency motions show that some parts of the fault slip abruptly (velocity >100 mm/s, acceleration >20 km/s^2) while the majority of the fault slips more slowly. Abruptly slipping regions propagate along the fault at nearly the shear wave speed. We propose that the dramatic reduction in frictional strength implied by this pulse-like rupture behavior shares a common mechanism with the weakening reported in high-velocity friction experiments performed on rotary machines. The slip pulses can also be identified as migrating sources of high-frequency seismic waves. As observations from large earthquakes show similar propagating high-frequency sources, the pulses described here may have relevance to the mechanics of larger earthquakes.

  16. Multiple Embedded Processors for Fault-Tolerant Computing

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
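
    The detect-versus-correct distinction the abstract draws maps to a classic comparison: two replicas can only flag a disagreement, while three can out-vote a single faulty result. A conceptual Python sketch (not the FPGA logic itself):

        # Duplex compare vs. triplex majority vote, the idea behind the
        # two-core detector and the planned error-correcting version.
        from collections import Counter

        def duplex_check(a, b):
            """Two processors: detect an upset, but cannot tell which is bad."""
            return ("ok", a) if a == b else ("fault_detected", None)

        def triplex_vote(a, b, c):
            """Three processors: majority vote masks one faulty result."""
            winner, count = Counter([a, b, c]).most_common(1)[0]
            if count >= 2:
                return ("ok" if count == 3 else "corrected", winner)
            return ("fault_detected", None)

        print(duplex_check(42, 42))      # ('ok', 42)
        print(duplex_check(42, 43))      # ('fault_detected', None) -> retry/rollback
        print(triplex_vote(42, 42, 99))  # ('corrected', 42) -> error masked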

  17. An evaluation of a real-time fault diagnosis expert system for aircraft applications

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Abbott, Kathy H.; Palmer, Michael T.; Ricks, Wendell R.

    1987-01-01

    A fault monitoring and diagnosis expert system called Faultfinder was conceived and developed to detect and diagnose in-flight failures in an aircraft. Faultfinder is an automated intelligent aid whose purpose is to assist the flight crew in fault monitoring, fault diagnosis, and recovery planning. The present implementation of this concept performs monitoring and diagnosis for a generic aircraft's propulsion and hydraulic subsystems. This implementation is capable of detecting and diagnosing failures of known and unknown (i.e., unforeseeable) type in a real-time environment. Faultfinder uses both rule-based and model-based reasoning strategies which operate on causal, temporal, and qualitative information. A preliminary evaluation is made of the diagnostic concepts implemented in Faultfinder. The evaluation used actual aircraft accident and incident cases which were simulated to assess the effectiveness of Faultfinder in detecting and diagnosing failures. Results of this evaluation, together with the description of the current Faultfinder implementation, are presented.

  18. Automation in the Space Station module power management and distribution Breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Lollar, Louis F.

    1990-01-01

    The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.

  19. Integration of National Guard Medical Capabilities During Domestic Disasters: Developing the Synergies

    DTIC Science & Technology

    2009-03-20

    A major earthquake along the New Madrid fault or the urban detonation of a nuclear weapon would be devastating not only in its injuries and loss... injuries and deaths, and destroy the very infrastructure needed to respond. For example, a major earthquake on the New Madrid fault in the Mississippi... Earthquake Potential of the New Madrid Seismic Zone. Bulletin of the Seismological Society of America. 2002;92(6):2080-2089. 27 38. Bonnett CJ, Schock

  20. Seismicity and Crustal Anisotropy Beneath the Western Segment of the North Anatolian Fault: Results from a Dense Seismic Array

    NASA Astrophysics Data System (ADS)

    Turkelli, N.; Teoman, U.; Altuncu Poyraz, S.; Cambaz, D.; Mutlu, A. K.; Kahraman, M.; Houseman, G. A.; Rost, S.; Thompson, D. A.; Cornwell, D. G.; Utkucu, M.; Gülen, L.

    2013-12-01

    The North Anatolian Fault (NAF) is one of the major strike-slip fault systems on Earth, comparable in some respects to the San Andreas Fault. Devastating earthquakes have occurred along this system, causing major damage and casualties. In order to comprehensively investigate the shallow and deep crustal structure beneath the western segment of the NAF, a temporary dense seismic network for North Anatolia (DANA) consisting of 73 broadband sensors was deployed in early May 2012, covering a rectangular grid roughly 70 km across with a nominal station spacing of 7 km, with the aim of further enhancing the detection capability of this dense seismic array. This joint project involves researchers from the University of Leeds, UK, Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), and the University of Sakarya, and primarily focuses on upper crustal studies such as earthquake locations (especially micro-seismic activity), receiver functions, moment tensor inversions, shear wave splitting, and ambient noise correlations. To begin with, we obtained the hypocenter locations of local earthquakes that occurred within the DANA network. The dense 2-D grid geometry considerably enhanced the earthquake detection capability, which allowed us to precisely locate events with local magnitudes (Ml) less than 1.0. Accurate earthquake locations will eventually lead to high-resolution images of the upper crustal structure beneath the northern and southern branches of the NAF in the Sakarya region. In order to put additional constraints on the active tectonics of the western part of the NAF, we also determined fault plane solutions using Regional Moment Tensor Inversion (RMT) and P-wave first-motion methods. For the analysis of high-quality fault plane solutions, data from KOERI and the DANA project were merged. Furthermore, with the aim of providing insights into crustal anisotropy, shear wave splitting parameters such as lag time and fast polarization direction were obtained for local events recorded within the seismic network with magnitudes larger than 2.5.

  1. Elastic rebound following the Kocaeli earthquake, Turkey, recorded using synthetic aperture radar interferometry

    USGS Publications Warehouse

    Mayer, Larry; Lu, Zhong

    2001-01-01

    A basic model incorporating satellite synthetic aperture radar (SAR) interferometry of the fault rupture zone that formed during the Kocaeli earthquake of August 17, 1999, documents the elastic rebound that resulted from the concomitant elastic strain release along the North Anatolian fault. For pure strike-slip faults, the elastic rebound function derived from SAR interferometry is directly invertible from the distribution of elastic strain on the fault at criticality, just before the critical shear stress was exceeded and the fault ruptured. The Kocaeli earthquake, which was accompanied by as much as ∼5 m of surface displacement, distributed strain over a zone ∼110 km wide around the fault prior to faulting, although most of it was concentrated in a narrower, asymmetric 10-km-wide zone on either side of the fault. The use of SAR interferometry to document the distribution of elastic strain at the critical condition for faulting is clearly a valuable tool, both for scientific investigation and for the effective management of earthquake hazard.

  2. ISHM-oriented adaptive fault diagnostics for avionics based on a distributed intelligent agent system

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei

    2015-10-01

    In this paper, an integrated-system-health-management-oriented adaptive fault diagnostic system and model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnosis has become an extremely complicated task. In the proposed fault diagnostic system, specific approaches, such as artificial immune systems, intelligent agent systems, and the Dempster-Shafer evidence theory, are used to conduct deep avionics fault diagnosis. Through this proposed fault diagnostic system, efficient and accurate diagnosis can be achieved. A numerical example applies the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and illustrates that the proposed system and model can achieve efficient and accurate fault diagnosis. An analysis of the diagnostic system's feasibility and practicality demonstrates the advantages of this system.
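
    Of the approaches named above, Dempster-Shafer evidence combination is the most compact to illustrate. The Python sketch below fuses two evidence sources over a two-fault frame of discernment using Dempster's rule; the mass values are invented for illustration, not taken from the paper:

        # Dempster's rule of combination: m(A) is proportional to the sum of
        # m1(B)*m2(C) over B intersect C = A, renormalized by 1-K, where K is
        # the total mass assigned to conflicting (empty-intersection) pairs.
        from itertools import product

        def combine(m1, m2):
            combined, conflict = {}, 0.0
            for (b, p), (c, q) in product(m1.items(), m2.items()):
                inter = b & c
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + p * q
                else:
                    conflict += p * q
            return {a: v / (1.0 - conflict) for a, v in combined.items()}

        # Frame of discernment: transmitter fault T, power-supply fault P.
        m_sensor = {frozenset("T"): 0.6, frozenset("TP"): 0.4}
        m_agent = {frozenset("T"): 0.5, frozenset("P"): 0.3, frozenset("TP"): 0.2}
        print(combine(m_sensor, m_agent))  # mass concentrates on the transmitter fault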

  3. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

    The desire for low-cost reliable computing is increasing. Most current fault-tolerant computing solutions are not very flexible, i.e., they cannot adapt to the reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant services. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information was gained about fault scenarios from which Chameleon cannot recover. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple-failure scenarios.

  4. Remote Structural Health Monitoring and Advanced Prognostics of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas Brown; Bernard Laskowski

    Wind energy generation represents a substantial capital investment. In order to maximize the life-cycle of wind turbines and their associated rotors, gears, and structural towers, a capability to detect and predict (prognostics) the onset of mechanical faults at a sufficiently early stage for maintenance actions to be planned would significantly reduce both maintenance and operational costs. Advancement towards this goal has been made through the development of anomaly detection, fault detection, and fault diagnosis routines that identify selected fault modes of a wind turbine based on available sensor data preceding an unscheduled emergency shutdown. The anomaly detection approach employs spectral techniques to find an approximation of the data using a combination of attributes that capture the bulk of the variability in the data. Fault detection and diagnosis (FDD) is performed using a neural-network-based classifier trained from baseline data and fault data recorded during known failure conditions. The approach has been evaluated for known baseline conditions and three selected failure modes: pitch rate failure, low oil pressure failure, and a gearbox gear-tooth failure. Experimental results demonstrate that the approach can distinguish between these failure modes and normal baseline behavior within a specified statistical accuracy.

  5. Machine fault feature extraction based on intrinsic mode functions

    NASA Astrophysics Data System (ADS)

    Fan, Xianfeng; Zuo, Ming J.

    2008-04-01

    This work employs empirical mode decomposition (EMD) to decompose raw vibration signals into intrinsic mode functions (IMFs) that represent the oscillatory modes generated by the components that make up the mechanical systems generating the vibration signals. The motivation here is to develop vibration signal analysis programs that are self-adaptive and that can detect machine faults at the earliest onset of deterioration. The rate of change of the amplitude of some IMFs over a particular unit time will increase when the vibration is stimulated by a component fault. Therefore, the amplitude acceleration energy in the intrinsic mode functions is proposed as an indicator of the impulsive features that are often associated with mechanical component faults. The periodicity of the amplitude acceleration energy for each IMF is extracted by spectrum analysis. A spectrum amplitude index is introduced as a method to select the optimal result. A comparison study of the method proposed here and some well-established techniques for detecting machinery faults is conducted through the analysis of both gear and bearing vibration signals. The results indicate that the proposed method has superior capability to extract machine fault features from vibration signals.
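
    As a rough illustration of the indicator's flavor, the sketch below computes an envelope-acceleration energy for one IMF using the Hilbert transform (a simplified reading of the paper's amplitude acceleration energy, with the EMD decomposition itself omitted and the IMF assumed given):

        # Energy of the second time-derivative ("acceleration") of an IMF's
        # amplitude envelope. Impulsive faults modulate the envelope sharply,
        # which raises this indicator. Simplified sketch, not the exact metric.
        import numpy as np
        from scipy.signal import hilbert

        def amplitude_acceleration_energy(imf, fs):
            envelope = np.abs(hilbert(imf))                   # instantaneous amplitude
            accel = np.gradient(np.gradient(envelope, 1/fs), 1/fs)
            return np.sum(accel**2) / len(imf)

        fs = 1000
        t = np.arange(0, 1, 1/fs)
        healthy = np.sin(2*np.pi*50*t)
        faulty = healthy * (1 + 0.5*(np.sin(2*np.pi*5*t) > 0.99))  # periodic impulses
        print(amplitude_acceleration_energy(healthy, fs) <
              amplitude_acceleration_energy(faulty, fs))           # True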

  6. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide an additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  7. Petrophysical, Geochemical, and Hydrological Evidence for Extensive Fracture-Mediated Fluid and Heat Transport in the Alpine Fault's Hanging-Wall Damage Zone

    NASA Astrophysics Data System (ADS)

    Townend, John; Sutherland, Rupert; Toy, Virginia G.; Doan, Mai-Linh; Célérier, Bernard; Massiot, Cécile; Coussens, Jamie; Jeppson, Tamara; Janku-Capova, Lucie; Remaud, Léa.; Upton, Phaedra; Schmitt, Douglas R.; Pezard, Philippe; Williams, Jack; Allen, Michael John; Baratin, Laura-May; Barth, Nicolas; Becroft, Leeza; Boese, Carolin M.; Boulton, Carolyn; Broderick, Neil; Carpenter, Brett; Chamberlain, Calum J.; Cooper, Alan; Coutts, Ashley; Cox, Simon C.; Craw, Lisa; Eccles, Jennifer D.; Faulkner, Dan; Grieve, Jason; Grochowski, Julia; Gulley, Anton; Hartog, Arthur; Henry, Gilles; Howarth, Jamie; Jacobs, Katrina; Kato, Naoki; Keys, Steven; Kirilova, Martina; Kometani, Yusuke; Langridge, Rob; Lin, Weiren; Little, Tim; Lukacs, Adrienn; Mallyon, Deirdre; Mariani, Elisabetta; Mathewson, Loren; Melosh, Ben; Menzies, Catriona; Moore, Jo; Morales, Luis; Mori, Hiroshi; Niemeijer, André; Nishikawa, Osamu; Nitsch, Olivier; Paris, Jehanne; Prior, David J.; Sauer, Katrina; Savage, Martha K.; Schleicher, Anja; Shigematsu, Norio; Taylor-Offord, Sam; Teagle, Damon; Tobin, Harold; Valdez, Robert; Weaver, Konrad; Wiersberg, Thomas; Zimmer, Martin

    2017-12-01

    Fault rock assemblages reflect interaction between deformation, stress, temperature, fluid, and chemical regimes on distinct spatial and temporal scales at various positions in the crust. Here we interpret measurements made in the hanging-wall of the Alpine Fault during the second stage of the Deep Fault Drilling Project (DFDP-2). We present observational evidence for extensive fracturing and high hanging-wall hydraulic conductivity (~10^-9 to 10^-7 m/s, corresponding to permeability of ~10^-16 to 10^-14 m^2) extending several hundred meters from the fault's principal slip zone. Mud losses, gas chemistry anomalies, and petrophysical data indicate that a subset of fractures intersected by the borehole are capable of transmitting fluid volumes of several cubic meters on time scales of hours. DFDP-2 observations and other data suggest that this hydrogeologically active portion of the fault zone in the hanging-wall is several kilometers wide in the uppermost crust. This finding is consistent with numerical models of earthquake rupture and off-fault damage. We conclude that the mechanically and hydrogeologically active part of the Alpine Fault is a more dynamic and extensive feature than commonly described in models based on exhumed faults. We propose that the hydrogeologically active damage zone of the Alpine Fault and other large active faults in areas of high topographic relief can be subdivided into an inner zone in which damage is controlled principally by earthquake rupture processes and an outer zone in which damage reflects coseismic shaking, strain accumulation and release on interseismic timescales, and inherited fracturing related to exhumation.

  8. An Assessment Methodology to Evaluate In-Flight Engine Health Management Effectiveness

    NASA Astrophysics Data System (ADS)

    Maggio, Gaspare; Belyeu, Rebecca; Pelaccio, Dennis G.

    2002-01-01

    An assessment methodology is presented for evaluating the in-flight effectiveness of candidate engine health management system concepts. A next-generation engine health management system will be required to be both reliable and robust in terms of anomaly detection capability. The system must be able to operate successfully in the hostile, high-stress engine system environment. This implies that its system components, such as the instrumentation, process and control, and vehicle interface and support subsystems, must be highly reliable. Additionally, the system must be able to address a vast range of possible engine operation anomalies through a host of different types of measurements, supported by a fast algorithm/architecture processing capability that can identify "true" (real) engine operation anomalies. False anomaly condition reports from such a system must be essentially eliminated; the accuracy of identifying only real anomaly conditions has been an issue with the Space Shuttle Main Engine (SSME) in the past, and much improvement in many of the supporting technologies is required. The objectives of this study were to identify and demonstrate a consistent assessment methodology that can evaluate the capability of next-generation engine health management system concepts to respond in a correct, timely manner to alleviate an operational engine anomaly condition during flight. Science Applications International Corporation (SAIC), with support from NASA Marshall Space Flight Center, identified a probabilistic modeling approach, built on deterministic anomaly-time event assessment, that can be applied in the engine preliminary design stage to assess engine health management system concept effectiveness. Much of the discussion in this paper focuses on the formulation and application of this assessment, including key modeling assumptions, the overall assessment methodology, and the supporting engine health management system concept design/operation and fault mode information required to utilize the methodology. The paper concludes with a demonstration benchmark study that applied this methodology to the current SSME health management system, a summary of study results and lessons learned, and recommendations for future work in this area.

  9. Intrinsic Hardware Evolution for the Design and Reconfiguration of Analog Speed Controllers for a DC Motor

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Ferguson, Michael I.

    2003-01-01

    Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a second generation Field Programmable Transistor Array (FPTA2). The performance of an evolved controller is compared to that of a conventional proportional-integral (PI) controller. It is shown that hardware evolution is able to create a compact design that provides good performance, while using considerably less functional electronic components than the conventional design. Additionally, the use of hardware evolution to provide fault tolerance by reconfiguring the design is explored. Experimental results are presented showing that significant recovery of capability can be made in the face of damaging induced faults.

  10. A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing the Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and the Graphical User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  11. Motion-Based System Identification and Fault Detection and Isolation Technologies for Thruster Controlled Spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Sutter, David W.; Berkovitz, Dustin; Betts, Bradley J.; Kong, Edmund; delMundo, Rommel; Lages, Christopher R.; Mah, Robert W.; Papasin, Richard

    2003-01-01

    By analyzing the motions of a thruster-controlled spacecraft, it is possible to provide on-line (1) thruster fault detection and isolation (FDI), and (2) vehicle mass- and thruster-property identification (ID). Technologies developed recently at NASA Ames have significantly improved the speed and accuracy of these ID and FDI capabilities, making them feasible for application to a broad class of spacecraft. Since these technologies use existing sensors, the improved system robustness and performance that comes with the thruster fault tolerance and system ID can be achieved through a software-only implementation. This contrasts with the added cost, mass, and hardware complexity commonly required by FDI. Originally developed in partnership with NASA - Johnson Space Center to provide thruster FDI capability for the X-38 during re-entry, these technologies are most recently being applied to the MIT SPHERES experimental spacecraft to fly on the International Space Station in 2004. The model-based FDI uses a maximum-likelihood calculation at its core, while the ID is based upon recursive least squares estimation. Flight test results from the SPHERES implementation, as flown aboard the NASA KC-135A 0-g simulator aircraft in November 2003, are presented.
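
    The recursive least squares estimation underlying the ID function follows the standard RLS recursion; a generic numpy sketch (not the flight implementation) is shown below:

        # One RLS step for the linear model y ~ phi @ theta, with forgetting
        # factor lam; theta is the parameter estimate, P its covariance.
        import numpy as np

        def rls_update(theta, P, phi, y, lam=1.0):
            phi = phi.reshape(-1, 1)
            k = P @ phi / (lam + phi.T @ P @ phi)      # gain vector
            theta = theta + (k * (y - phi.T @ theta)).ravel()
            P = (P - k @ phi.T @ P) / lam              # covariance update
            return theta, P

        # Identify a 2-parameter model from noisy measurements.
        rng = np.random.default_rng(0)
        true_theta = np.array([2.0, -1.0])
        theta, P = np.zeros(2), np.eye(2) * 100.0
        for _ in range(200):
            phi = rng.normal(size=2)
            y = phi @ true_theta + rng.normal(scale=0.01)
            theta, P = rls_update(theta, P, phi, y)
        print(theta)  # converges to approximately [2.0, -1.0]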

  12. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for the business. Current methods to avoid server outages rely on monitoring and forecasting. Thermal cameras can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. First, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features, and a modified entropy feature are extracted from the segmented regions. These characteristics are used to analyze and classify thermal faults and then make efficient, energy-saving thermal management decisions such as job migration. For the larger feature space, principal component analysis is employed to reduce the feature dimensions, guaranteeing high processing speed without losing fault feature information. Finally, the different feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management; it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
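
    The reduce-then-classify stage described above maps naturally onto a short scikit-learn pipeline. In the sketch below, random vectors stand in for the paper's texture, Hu-moment, and entropy features:

        # PCA for feature reduction followed by an SVM fault classifier.
        # Feature vectors and labels here are synthetic stand-ins.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 32))    # 200 samples, 32 image features
        y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # fault label

        clf = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))
        clf.fit(X[:150], y[:150])
        print(clf.score(X[150:], y[150:]))  # held-out diagnosis accuracy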

  13. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    DOE PAGES

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...

    2017-02-11

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. However, the use of complete redundancy incurs significant overhead to the application performance.
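
    The core RMT idea, replicated execution streams compared within a single process, can be illustrated in a few lines of Python (a conceptual sketch only, not the RedThreads interface):

        # Run the same computation in N threads of one process and compare the
        # replica outputs; a mismatch signals a possible soft error.
        import threading

        def redundant_call(fn, *args, replicas=2):
            results = [None] * replicas

            def worker(i):
                results[i] = fn(*args)

            threads = [threading.Thread(target=worker, args=(i,)) for i in range(replicas)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            if len(set(results)) != 1:
                raise RuntimeError("replica mismatch: possible soft error")
            return results[0]

        print(redundant_call(sum, range(1000)))  # 499500, checked by two replicas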

  14. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. However, the use of complete redundancy incurs significant overhead to the application performance.

  15. On the design of fault-tolerant robotic manipulator systems

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert

    1993-01-01

    Robotic systems are finding increasing use in space applications. Many of these devices are going to be operational on board the Space Station Freedom. Fault tolerance has been deemed necessary because of the criticality of the tasks and the inaccessibility of the systems to maintenance and repair. Design for fault tolerance in manipulator systems is an area within robotics that is without precedent in the literature. In this paper, we will attempt to lay down the foundations for such a technology. Design for fault tolerance demands new and special approaches to design, often at considerable variance from established design practices. These design aspects, together with reliability evaluation and modeling tools, are presented. Mechanical architectures that employ protective redundancies at many levels and have a modular architecture are then studied in detail. Once a mechanical architecture for fault tolerance has been derived, the chronological stages of operational fault tolerance are investigated. Failure detection, isolation, and estimation methods are surveyed, and such methods for robot sensors and actuators are derived. Failure recovery methods are also presented for each of the protective layers of redundancy. Failure recovery tactics often span all of the layers of a control hierarchy. Thus, a unified framework for decision-making and control, which orchestrates both the nominal redundancy management tasks and the failure management tasks, has been derived. The well-developed field of fault-tolerant computers is studied next, and some design principles relevant to the design of fault-tolerant robot controllers are abstracted. Conclusions are drawn, and a road map for the design of fault-tolerant manipulator systems is laid out with recommendations for a 10 DOF arm with dual actuators at each joint.

  16. The southern Whidbey Island fault: An active structure in the Puget Lowland, Washington

    USGS Publications Warehouse

    Johnson, S.Y.; Potter, C.J.; Armentrout, J.M.; Miller, J.J.; Finn, C.; Weaver, C.S.

    1996-01-01

    Information from seismic-reflection profiles, outcrops, boreholes, and potential field surveys is used to interpret the structure and history of the southern Whidbey Island fault in the Puget Lowland of western Washington. This northwest-trending fault comprises a broad (as wide as 6-11 km), steep, northeast-dipping zone that includes several splays with inferred strike-slip, reverse, and thrust displacement. Transpressional deformation along the southern Whidbey Island fault is indicated by along-strike variations in structural style and geometry, positive flower structure, local unconformities, out-of-plane displacements, and juxtaposition of correlative sedimentary units with different histories. The southern Whidbey Island fault represents a segment of a boundary between two major crustal blocks. The Cascade block to the northeast is floored by diverse assemblages of pre-Tertiary rocks; the Coast Range block to the southwest is floored by lower Eocene marine basaltic rocks of the Crescent Formation. The fault probably originated during the early Eocene as a dextral strike-slip fault along the eastern side of a continental-margin rift. Bending of the fault and transpressional deformation began during the late middle Eocene and continue to the present. Oblique convergence and clockwise rotation along the continental margin are the inferred driving forces for ongoing deformation. Evidence for Quaternary movement on the southern Whidbey Island fault includes (1) offset and disrupted upper Quaternary strata imaged on seismic-reflection profiles; (2) borehole data that suggest as much as 420 m of structural relief on the Tertiary-Quaternary boundary in the fault zone; (3) several meters of displacement along exposed faults in upper Quaternary sediments; (4) late Quaternary folds with limb dips of as much as ~9°; (5) large-scale liquefaction features in upper Quaternary sediments within the fault zone; and (6) minor historical seismicity. The southern Whidbey Island fault should be considered capable of generating large earthquakes (Ms ~7) and represents a potential seismic hazard to residents of the Puget Lowland.

  17. Regional Tectonic Control of Tertiary Mineralization and Recent Faulting in the Southern Basin-Range Province, an Application of ERTS-1 Data

    NASA Technical Reports Server (NTRS)

    Bechtold, I. C.; Liggett, M. A.; Childs, J. F.

    1973-01-01

    Research based on ERTS-1 MSS imagery and field work in the southern Basin-Range Province of California, Nevada and Arizona has shown regional tectonic control of volcanism, plutonism, mineralization and faulting. This paper covers an area centered on the Colorado River between 34°15' N and 36°45' N. During the mid-Tertiary, the area was the site of plutonism and genetically related volcanism fed by fissure systems now exposed as dike swarms. Dikes, elongate plutons, and coeval normal faults trend generally northward and are believed to have resulted from east-west crustal extension. In the extensional province, gold-silver mineralization is closely related to Tertiary igneous activity. Similarities in ore, structural setting, and rock types define a metallogenic district of high potential for exploration. The ERTS imagery also provides a basis for regional inventory of small faults which cut alluvium. This capability for efficient regional surveys of Recent faulting should be considered in land use planning, geologic hazards study, civil engineering and hydrology.

  18. Satellite operations support expert system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Satellite Operations Support Expert System is an effort to identify aspects of satellite ground support activity which could profitably be automated with artificial intelligence (AI) and to develop a feasibility demonstration for the automation of one such area. The hydrazine propulsion subsystems (HPS) of the International Sun Earth Explorer (ISEE) and the International Ultraviolet Explorer (IUE) were used as applications domains. A demonstration fault handling system was built. The system was written in Franz Lisp and is currently hosted on a VAX 11/750-11/780 family machine. The system allows the user to select which HPS (either from ISEE or IUE) is used. Then the user chooses the fault desired for the run. The demonstration system generates telemetry corresponding to the particular fault. The completely separate fault handling module then uses this telemetry to determine what and where the fault is and how to work around it. Graphics are used to depict the structure of the HPS, and the telemetry values displayed on the screen are continually updated. The capabilities of this system and its development cycle are described.

  19. A Review of Transmission Diagnostics Research at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Zakajsek, James J.

    1994-01-01

    This paper presents a summary of the transmission diagnostics research work conducted at NASA Lewis Research Center over the last four years. In 1990, the Transmission Health and Usage Monitoring Research Team at NASA Lewis conducted a survey to determine the critical needs of the diagnostics community. Survey results indicated that experimental verification of gear and bearing fault detection methods, improved fault detection in planetary systems, and damage magnitude assessment and prognostics research were all critical to a highly reliable health and usage monitoring system. In response to this, a variety of transmission fault detection methods were applied to experimentally obtained fatigue data. Failure modes of the fatigue data include a variety of gear pitting failures, tooth wear, tooth fracture, and bearing spalling failures. Overall results indicate that, of the gear fault detection techniques, no one method can successfully detect all possible failure modes. The more successful methods need to be integrated into a single more reliable detection technique. A recently developed method, NA4, in addition to being one of the more successful gear fault detection methods, was also found to exhibit damage magnitude estimation capabilities.

  20. An improved PCA method with application to boiler leak detection.

    PubMed

    Sun, Xi; Marquez, Horacio J; Chen, Tongwen; Riaz, Muhammad

    2005-07-01

    Principal component analysis (PCA) is a popular fault detection technique. It has been widely used in process industries, especially in the chemical industry. In industrial applications, achieving a sensitive system capable of detecting incipient faults while keeping the false alarm rate to a minimum is a crucial issue. Although much research has focused on these issues for PCA-based fault detection and diagnosis methods, the sensitivity of the fault detection scheme versus the false alarm rate continues to be an important problem. In this paper, an improved PCA method is proposed to address it. In this method, a new data preprocessing scheme and a new fault detection scheme designed for Hotelling's T2 as well as the squared prediction error are developed. A dynamic PCA model is also developed for boiler leak detection. This new method is applied to boiler water/steam leak detection with real data from Syncrude Canada's utility plant in Fort McMurray, Canada. Our results demonstrate that the proposed method can effectively reduce the false alarm rate, provide effective and correct leak alarms, and give early warning to operators.
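
    For reference, the two statistics named above, Hotelling's T2 and the squared prediction error (SPE, or Q statistic), can be computed from a PCA model in a few lines of numpy; this is a generic sketch of conventional PCA monitoring, not the paper's improved scheme:

        # Fit PCA on standardized normal-operation data, then score new samples
        # with T2 (within the retained subspace) and SPE (residual subspace).
        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 6))        # training data, normal operation
        X = (X - X.mean(0)) / X.std(0)       # standardize
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        k = 3                                # retained principal components
        P, var = Vt[:k].T, (s[:k]**2) / (len(X) - 1)

        def t2_and_spe(x):
            t = P.T @ x                      # scores in the PCA subspace
            t2 = np.sum(t**2 / var)          # Hotelling's T2
            residual = x - P @ t
            return t2, residual @ residual   # SPE (Q statistic)

        print(t2_and_spe(np.zeros(6)))                      # nominal: both small
        print(t2_and_spe(np.array([0, 0, 0, 0, 0, 8.0])))   # leak-like deviation: large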

  1. Combination of process and vibration data for improved condition monitoring of industrial systems working under variable operating conditions

    NASA Astrophysics Data System (ADS)

    Ruiz-Cárcel, C.; Jaramillo, V. H.; Mba, D.; Ottewill, J. R.; Cao, Y.

    2016-01-01

    The detection and diagnosis of faults in industrial processes is a very active field of research due to the reduction in maintenance costs achieved by the implementation of process monitoring algorithms such as Principal Component Analysis, Partial Least Squares or more recently Canonical Variate Analysis (CVA). Typically the condition of rotating machinery is monitored separately using vibration analysis or other specific techniques. Conventional vibration-based condition monitoring techniques are based on the tracking of key features observed in the measured signal. Typically steady-state loading conditions are required to ensure consistency between measurements. In this paper, a technique based on merging process and vibration data is proposed with the objective of improving the detection of mechanical faults in industrial systems working under variable operating conditions. The capabilities of CVA for detection and diagnosis of faults were tested using experimental data acquired from a compressor test rig where different process faults were introduced. Results suggest that the combination of process and vibration data can effectively improve the detectability of mechanical faults in systems working under variable operating conditions.

  2. Friction falls towards zero in quartz rock as slip velocity approaches seismic rates.

    PubMed

    Di Toro, Giulio; Goldsby, David L; Tullis, Terry E

    2004-01-29

    An important unsolved problem in earthquake mechanics is to determine the resistance to slip on faults in the Earth's crust during earthquakes. Knowledge of coseismic slip resistance is critical for understanding the magnitude of shear-stress reduction and hence the near-fault acceleration that can occur during earthquakes, which affects the amount of damage that earthquakes are capable of causing. In particular, a long-unresolved problem is the apparently low strength of major faults, which may be caused by low coseismic frictional resistance. The frictional properties of rocks at slip velocities up to 3 mm/s and for slip displacements characteristic of large earthquakes have recently been simulated under laboratory conditions. Here we report data on quartz rocks that indicate an extraordinary progressive decrease in frictional resistance with increasing slip velocity above 1 mm/s. This reduction extrapolates to zero friction at seismic slip rates of approximately 1 m/s, and appears to be due to the formation of a thin layer of silica gel on the fault surface: it may explain the low strength of major faults during earthquakes.

  3. Using Performance Tools to Support Experiments in HPC Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian

    2014-01-01

    The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when errors (failures) occur. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.

  4. A Genuine TEAM Player

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.

  5. An architecture for automated fault diagnosis. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry R.

    1989-01-01

    A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.

  6. Modeling Off-Nominal Behavior in SysML

    NASA Technical Reports Server (NTRS)

    Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan

    2012-01-01

    Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.

  7. The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Tai, Ann T.

    2000-01-01

    The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.

  8. Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.

    PubMed

    Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C

    2015-06-01

    An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, in this case ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers in developing more refined and adequate control activities against FMD outbreaks and in finding optimum strategies for rapid control. Further application of the current tree can serve as one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
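
    To make the fault tree idea concrete, here is a minimal sketch of top-event evaluation with AND/OR gates over independent basic events; the tree structure and probabilities are invented for illustration, not taken from the article:

      # Toy fault tree: P(top event "ineffective FMD control"),
      # assuming independent basic events.
      def gate_or(*p):   # at least one input event occurs
          q = 1.0
          for x in p:
              q *= (1.0 - x)
          return 1.0 - q

      def gate_and(*p):  # all input events occur
          q = 1.0
          for x in p:
              q *= x
          return q

      late_detection = gate_or(0.05, 0.10)   # surveillance fails OR reporting delayed
      failed_culling = gate_and(0.20, 0.30)  # resources short AND logistics delayed
      print(f"P(ineffective control) = {gate_or(late_detection, failed_culling):.3f}")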

  9. Demonstration of artificial intelligence technology for transit railcar diagnostics

    DOT National Transportation Integrated Search

    1999-01-01

    This report will be of interest to railcar maintenance professionals concerned with improving railcar maintenance fault-diagnostic capabilities through the use of artificial intelligence (AI) technologies. It documents the results of a demonstration ...

  10. Advanced Protection & Service Restoration for FREEDM Systems

    NASA Astrophysics Data System (ADS)

    Singh, Urvir

    A smart electric power distribution system (FREEDM system) that incorporates DERs (Distributed Energy Resources), SSTs (Solid State Transformers, which can limit the fault current to twice the rated current) & RSC (Reliable & Secure Communication) capabilities has been studied in this work in order to develop appropriate protection & service restoration techniques for it. First, a solution is proposed that enables conventional protective devices to provide effective protection for FREEDM systems. Results show that although this scheme provides the required protection, it can be quite slow. Using the FREEDM system's communication capabilities, a communication-assisted overcurrent (O/C) protection scheme is proposed, & results show that by using communication (blocking signals) very fast operating times are achieved, thereby mitigating the problem of the conventional O/C scheme. Using the FREEDM system's DGI (Distributed Grid Intelligence) capability, an automated FLISR (Fault Location, Isolation & Service Restoration) scheme is proposed that is based on the concept of 'software agents' & uses less data than conventional centralized approaches. Test results illustrated that this scheme is able to provide a globally optimal system reconfiguration for service restoration.
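
    A hedged sketch of the blocking-signal idea: a downstream relay that also sees the fault blocks the upstream relay, so the upstream relay can trip fast for faults in its own zone without time grading. The relay layout and the 2.0 per-unit pickup (matching the SST fault-current limit) are illustrative assumptions:

      # Toy communication-assisted overcurrent logic, not the paper's scheme.
      def relay_decisions(i_upstream, i_downstream, pickup=2.0):
          up_sees = i_upstream >= pickup       # currents in per-unit of rating
          down_sees = i_downstream >= pickup
          trip_down = down_sees
          trip_up = up_sees and not down_sees  # blocked if downstream also sees it
          return trip_up, trip_down

      print(relay_decisions(3.0, 3.0))  # fault below downstream relay -> (False, True)
      print(relay_decisions(3.0, 0.8))  # fault between relays -> (True, False)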

  11. Megathrust splay faults at the focus of the Prince William Sound asperity, Alaska

    USGS Publications Warehouse

    Liberty, Lee M.; Finn, Shaun P.; Haeussler, Peter J.; Pratt, Thomas L.; Peterson, Andrew

    2013-01-01

    High-resolution sparker and crustal-scale air gun seismic reflection data, coupled with repeat bathymetric surveys, document a region of repeated coseismic uplift on the portion of the Alaska subduction zone that ruptured in 1964. This area defines the western limit of Prince William Sound. Differencing of vintage and modern bathymetric surveys shows that the region of greatest uplift related to the 1964 Great Alaska earthquake was focused along a series of subparallel faults beneath Prince William Sound and the adjacent Gulf of Alaska shelf. Bathymetric differencing indicates that 12 m of coseismic uplift occurred along two faults that reached the seafloor as submarine terraces on the Cape Cleare bank southwest of Montague Island. Sparker seismic reflection data provide cumulative Holocene slip estimates as high as 9 mm/yr along a series of splay thrust faults within both the inner wedge and transition zone of the accretionary prism. Crustal seismic data show that these megathrust splay faults root separately into the subduction zone décollement. Splay fault divergence from this megathrust correlates with changes in midcrustal seismic velocity and magnetic susceptibility values, best explained by duplexing of the subducted Yakutat terrane rocks above Pacific plate rocks along the trailing edge of the Yakutat terrane. Although each splay fault is capable of independent motion, we conclude that the identified splay faults rupture in a similar pattern during successive megathrust earthquakes and that the region of greatest seismic coupling has remained consistent throughout the Holocene.

  12. Megathrust splay faults at the focus of the Prince William Sound asperity, Alaska

    NASA Astrophysics Data System (ADS)

    Liberty, Lee M.; Finn, Shaun P.; Haeussler, Peter J.; Pratt, Thomas L.; Peterson, Andrew

    2013-10-01

    High-resolution sparker and crustal-scale air gun seismic reflection data, coupled with repeat bathymetric surveys, document a region of repeated coseismic uplift on the portion of the Alaska subduction zone that ruptured in 1964. This area defines the western limit of Prince William Sound. Differencing of vintage and modern bathymetric surveys shows that the region of greatest uplift related to the 1964 Great Alaska earthquake was focused along a series of subparallel faults beneath Prince William Sound and the adjacent Gulf of Alaska shelf. Bathymetric differencing indicates that 12 m of coseismic uplift occurred along two faults that reached the seafloor as submarine terraces on the Cape Cleare bank southwest of Montague Island. Sparker seismic reflection data provide cumulative Holocene slip estimates as high as 9 mm/yr along a series of splay thrust faults within both the inner wedge and transition zone of the accretionary prism. Crustal seismic data show that these megathrust splay faults root separately into the subduction zone décollement. Splay fault divergence from this megathrust correlates with changes in midcrustal seismic velocity and magnetic susceptibility values, best explained by duplexing of the subducted Yakutat terrane rocks above Pacific plate rocks along the trailing edge of the Yakutat terrane. Although each splay fault is capable of independent motion, we conclude that the identified splay faults rupture in a similar pattern during successive megathrust earthquakes and that the region of greatest seismic coupling has remained consistent throughout the Holocene.

  13. Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan

    2017-09-01

    It is a challenging problem to design excellent dictionaries to sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem which is solved by alternately implementing a hard thresholding operation and a singular value decomposition. Secondly, the noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitive indexes, latent fault feature subspaces can be adaptively recognized and multiple faults are diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through performing multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates the physical prior with the data-driven strategy, so multiple fault features with similar oscillation morphology can be naturally and adaptively decoupled. Secondly, the tight frame dictionary directly learned from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal-space description property is guaranteed, and thus the weak-feature leakage problem of typical learning methods is avoided.
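
    A hedged numpy sketch of the alternating scheme named in the abstract, in the spirit of data-driven tight frame learning: hard-threshold the analysis coefficients, then update the frame by an SVD (orthogonal Procrustes step) so that the tight frame property is preserved. The paper's filter structure and index design are omitted:

      import numpy as np

      def hard_threshold(C, lam):
          return np.where(np.abs(C) >= lam, C, 0.0)

      def learn_tight_frame(Y, lam=0.1, iters=20, seed=0):
          """Y: d x n matrix of signal patches (one patch per column)."""
          d = Y.shape[0]
          rng = np.random.default_rng(seed)
          W, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random Parseval frame
          for _ in range(iters):
              C = hard_threshold(W.T @ Y, lam)   # sparse coding step
              U, _, Vt = np.linalg.svd(Y @ C.T)  # frame update step
              W = U @ Vt                         # keeps W^T W = I
          return W                               # sparse coefficients: W.T @ Y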

  14. Sixty-four-Channel Inline Cable Tester

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Faults in wiring are a serious concern for the aerospace and aeronautics (commercial, military, and civil) industries. A number of accidents have occurred because faulty wiring created shorts or opens that resulted in the loss of control of the aircraft or because arcing led to fires and explosions. Some of these accidents have resulted in massive loss of life (such as in the TWA Flight 800 accident). Circuits on the Space Shuttle have also failed because of faulty insulation on wiring. STS-93 lost power when a primary power circuit in one engine failed and a second engine had a backup power circuit fault. Cables are usually tested on the ground after the crew reports a fault encountered during flight. Often such failures result from vibration and cannot be replicated while the aircraft is stationary. It is therefore important to monitor faults while the aircraft is in operation, when cables are more likely to fail. Work is in progress to develop a cable fault tester capable of monitoring up to 64 individual wires simultaneously. Faults can be monitored either inline or offline. In the inline mode of operation, the monitoring is performed without disturbing the normal operation of the wires under test. That is, the operations are performed unintrusively and are essentially undetectable because the test signal levels are below the noise floor. A cable can be monitored several times per second in the offline mode and once a second in the inline mode. The 64-channel inline cable tester not only detects the occurrence of a fault, but also determines the type of fault (short/open) and the location of the fault. This will enable the detection of intermittent faults that can be repaired before they become serious problems.

  15. Institute for Sustained Performance, Energy, and Resilience (SuPER)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jagode, Heike; Bosilca, George; Danalis, Anthony

    The University of Tennessee (UTK) and University of Texas at El Paso (UTEP) partnership supported the three main thrusts of the SUPER project---performance, energy, and resilience. The UTK-UTEP effort thus helped advance the main goal of SUPER, which was to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience. The primary vehicle through which UTK provided performance measurement support to SUPER and the larger HPC community is the Performance Application Programming Interface (PAPI). PAPI is an ongoing project that provides a consistent interface and methodology for collecting hardware performance information from various hardware and software components, including most major CPUs, GPUs and accelerators, interconnects, I/O systems, and power interfaces, as well as virtual cloud environments. The PAPI software is widely used for performance modeling of scientific and engineering applications---for example, the HOMME (High Order Methods Modeling Environment) climate code, and the GAMESS and NWChem computational chemistry codes---on DOE supercomputers. PAPI is widely deployed as middleware for use by higher-level profiling, tracing, and sampling tools (e.g., CrayPat, HPCToolkit, Scalasca, Score-P, TAU, Vampir, PerfExpert), making it the de facto standard for hardware counter analysis. PAPI has established itself as fundamental software infrastructure in every application domain (spanning academia, government, and industry), where improving performance can be mission critical. Ultimately, as more application scientists migrate their applications to HPC platforms, they will benefit from the extended capabilities this grant brought to PAPI to analyze and optimize performance in these environments, whether they use PAPI directly, or via third-party performance tools. Capabilities added to PAPI through this grant include support for new architectures such as the latest GPU and Xeon Phi accelerators, and advanced power measurement and management features. Another important topic for the UTK team was providing support for a rich ecosystem of different fault management strategies in the context of parallel computing. Our long-term efforts have been oriented toward proposing flexible strategies and providing building blocks that application developers can use to build the most efficient fault management technique for their application. These efforts span across the entire software spectrum, from theoretical models of existing strategies to easily assess their performance, to algorithmic modifications to take advantage of specific mathematical properties for data redundancy, and to extensions to widely used programming paradigms to empower the application developers to deal with all types of faults. We have also continued our tight collaborations with users to help them adopt these technologies to ensure their applications always deliver meaningful scientific data. Large supercomputer systems are becoming more and more power and energy constrained, and future systems and applications running on them will need to be optimized to run under power caps and/or minimize energy consumption. The UTEP team contributed to the SUPER energy thrust by developing power modeling methodologies and investigating power management strategies.
Scalability modeling results showed that some applications can scale better with respect to an increasing power budget than with respect to only the number of processors. Power management, in particular shifting power to processors on the critical path of an application execution, can reduce perturbation due to system noise and other sources of runtime variability, which are growing problems on large-scale power-constrained computer systems.

  16. Effect of faults on fluid flow and chloride contamination in a carbonate aquifer system

    USGS Publications Warehouse

    Maslia, M.L.; Prowell, D.C.

    1990-01-01

    A unified, multidiscipline hypothesis is proposed to explain the anomalous pattern by which chloride has been found in water of the Upper Floridan aquifer in Brunswick, Glynn County, Georgia. Analyses of geophysical, hydraulic, water chemistry, and aquifer test data using the equivalent porous medium (EPM) approach are used to support the hypothesis and to improve further the understanding of the fracture-flow system in this area. Using the data presented herein we show that: (1) four major northeast-southwest trending faults, capable of affecting the flow system of the Upper Floridan aquifer, can be inferred from structural analysis of geophysical data and from regional fault patterns; (2) the proposed faults account for the anomalous northeastward elongation of the potentiometric surface of the Upper Floridan aquifer; (3) the faults breach the nearly impermeable units that confine the Upper Floridan aquifer from below, allowing substantial quantities of water to leak vertically upward; as a result, aquifer transmissivity need not be excessively large (as previously reported) to sustain the heavy, long-term pumpage at Brunswick without developing a steep cone of depression in the potentiometric surface; (4) increased fracturing at the intersection of the faults enhances the development of conduits that allow the upward migration of high-chloride water in response to pumping from the Upper Floridan aquifer; and (5) the anomalous movement of the chloride plume is almost entirely controlled by the faults. © 1990.

  17. Improved Sensor Fault Detection, Isolation, and Mitigation Using Multiple Observers Approach

    PubMed Central

    Wang, Zheng; Anand, D. M.; Moyne, J.; Tilbury, D. M.

    2017-01-01

    Traditional Fault Detection and Isolation (FDI) methods analyze a residual signal to detect and isolate sensor faults. The residual signal is the difference between the sensor measurements and the estimated outputs of the system based on an observer. The traditional residual-based FDI methods, however, have some limitations. First, they require that the observer has reached its steady state. In addition, residual-based methods may not detect some sensor faults, such as faults on critical sensors that result in an unobservable system. Furthermore, the system may be in jeopardy if actions required for mitigating the impact of the faulty sensors are not taken before the faulty sensors are identified. The contribution of this paper is to propose three new methods to address these limitations. Faults that occur during the observers' transient state can be detected by analyzing the convergence rate of the estimation error. Open-loop observers, which do not rely on sensor information, are used to detect faults on critical sensors. By switching among different observers, we can potentially mitigate the impact of the faulty sensor during the FDI process. These three methods are systematically integrated with a previously developed residual-based method to provide an improved FDI and mitigation capability framework. The overall approach is validated mathematically, and the effectiveness of the overall approach is demonstrated through simulation on a 5-state suspension system. PMID:28924303
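
    A minimal sketch of the residual-based core that the paper builds on: a Luenberger observer whose output residual is thresholded. The matrices, gain, and threshold are toy values; the paper's additions (convergence-rate monitoring during transients, open-loop observers for critical sensors, observer switching) are not shown:

      import numpy as np

      A = np.array([[0.9, 0.1],
                    [0.0, 0.8]])
      C = np.array([[1.0, 0.0]])
      L = np.array([[0.5],
                    [0.2]])              # observer gain (assumed stabilizing)
      THRESHOLD = 0.2                    # would come from noise analysis in practice

      def fdi_step(x_hat, y_meas):
          r = y_meas - C @ x_hat         # residual
          x_hat = A @ x_hat + L @ r      # observer update
          return x_hat, abs(float(r[0])) > THRESHOLD

      x_hat, fault = fdi_step(np.zeros(2), np.array([0.05]))
      print(fault)                       # False: residual within threshold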

  18. Submarine Neotectonic Investigations of the Bahia Soledad Fault, off Northern Baja California Near the US - Mexico Border

    NASA Astrophysics Data System (ADS)

    Anderson, K.; Lundsten, E. M.; Paull, C. K.; Caress, D. W.; Thomas, H. J.; Maier, K. L.; McGann, M.; Herguera, J. C.; Gwiazda, R.; Arregui, S.; Barrientos, L. A.

    2015-12-01

    The Monterey Bay Aquarium Research Institute (MBARI) conducted detailed surveys at selected sites on the seafloor along the Bahia Soledad Fault offshore of Northern Baja California, Mexico, during a two-ship expedition in the spring of 2015. The Bahia Soledad Fault is a NNW-trending strike-slip fault that is likely continuous with the San Diego Trough Fault offshore of San Diego, California. Constraining the style of deformation, continuity, and slip rate along this fault system is critical to characterizing the seismic hazards to the adjacent coastal areas extending from Los Angeles to Ensenada. Detailed morphologic surveys were conducted using an autonomous underwater vehicle (AUV) to provide ultra high-resolution multibeam bathymetry (vertical precision of 0.15 m and horizontal resolution of 1.0 m). The AUV also carried a 2-10 kHz chirp sub-bottom profiler and an Edgetech 110 kHz and 410 kHz sidescan sonar. The two sites along the Bahia Soledad Fault each run ~6 km along the fault with an ~1.8 km wide footprint. The resulting bathymetry shows these fault zones are marked by distinct lineations flanked by ~1 km long elongated ridges and depressions, with up to 100 m of relief, which are interpreted to be transpressional pop-up structures and transtensional pull-apart basins. Offset seismic reflectors that extend to near the seafloor confirm that these lineations are fault scarps. The detailed bathymetric maps and sub-bottom profiles were used to locate key sites where deformed stratigraphic horizons along the fault are within 1.5 m of the seafloor. These areas were sampled using a remotely operated vehicle (ROV) equipped with a vibracoring system capable of collecting precisely located cores up to 1.5 m long. The coupled use of multibeam imagery and surgically collected stratigraphic samples will make it possible to constrain the frequency and timing of recent movements on this fault, which will be useful to incorporate into future seismic hazard assessments.

  19. Fault Tree Analysis as a Planning and Management Tool: A Case Study

    ERIC Educational Resources Information Center

    Witkin, Belle Ruth

    1977-01-01

    Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, in order to redesign or monitor the system more closely and thereby increase its likelihood of success. (Author)

  20. Development of a component centered fault monitoring and diagnosis knowledge based system for space power system

    NASA Technical Reports Server (NTRS)

    Lee, S. C.; Lollar, Louis F.

    1988-01-01

    The overall approach currently being taken in the development of AMPERES (Autonomously Managed Power System Extendable Real-time Expert System), a knowledge-based expert system for fault monitoring and diagnosis of space power systems, is discussed. The system architecture, knowledge representation, and fault monitoring and diagnosis strategy are examined. A 'component-centered' approach developed in this project is described. Critical issues requiring further study are identified.

  1. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
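
    A hedged sketch of digraph-based isolation: failures propagate along directed edges, so the candidate sources for a fault indication are the nodes from which the indicating node is reachable, and intersecting candidate sets across indications narrows the diagnosis. The graph is a toy, not the NASA tool's model:

      from collections import defaultdict

      edges = [("pump_fail", "pressure_low"), ("valve_stuck", "pressure_low"),
               ("pressure_low", "flow_alarm"), ("sensor_bias", "flow_alarm")]

      upstream = defaultdict(set)
      for src, dst in edges:
          upstream[dst].add(src)

      def candidate_sources(indication):
          """All nodes that can propagate a failure to `indication`."""
          seen, stack = set(), [indication]
          while stack:
              for parent in upstream[stack.pop()] - seen:
                  seen.add(parent)
                  stack.append(parent)
          return seen

      # Two indications isolate the fault to their common upstream causes.
      print(candidate_sources("flow_alarm") & candidate_sources("pressure_low"))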

  2. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of abort triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of abort triggers.

  3. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of Abort Triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of Abort Triggers.

  4. Software Implemented Fault-Tolerant (SIFT) user's guide

    NASA Technical Reports Server (NTRS)

    Green, D. F., Jr.; Palumbo, D. L.; Baltrus, D. W.

    1984-01-01

    Program development for a Software Implemented Fault Tolerant (SIFT) computer system is accomplished in the NASA LaRC AIRLAB facility using a DEC VAX-11 to interface with eight Bendix BDX 930 flight control processors. The interface software which provides this SIFT program development capability was developed by AIRLAB personnel. This technical memorandum describes the application and design of this software in detail, and is intended to assist both the user in performance of SIFT research and the systems programmer responsible for maintaining and/or upgrading the SIFT programming environment.

  5. Electrical Characterization Laboratory | Energy Systems Integration

    Science.gov Websites

    Describes capabilities for evaluating the ability of electrical equipment to withstand high-voltage surges and high-current faults. The high-voltage characterization hub offers a Class 1, Div 2 lab.

  6. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Described are the numerous factors that potentially degrade system reliability, and the ways in which these factors, peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
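
    As a worked example of the kind of quantity such tools estimate (with far richer models, including coverage and latent faults), the reliability of a triple-modular-redundant channel with a perfect voter, assuming independent exponential failures:

      import math

      def r_single(lam, t):
          return math.exp(-lam * t)       # single-channel reliability

      def r_tmr(lam, t):
          r = r_single(lam, t)
          return 3 * r**2 - 2 * r**3      # any 2-of-3 channels survive

      lam, t = 1e-4, 10.0                 # failure rate (1/h), mission time (h)
      print(r_single(lam, t), r_tmr(lam, t))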

  7. Flexray - An Answer to the Challenges Faced by Spacecraft On-Board Communication Protocols

    NASA Astrophysics Data System (ADS)

    Gunes-Lasnet, S.; Furano, G.

    2007-08-01

    Current spacecraft on-board network protocols face several challenges: they need to consume little power, handle high data rates, and ultimately provide real-time capability and fault tolerance, all at low cost. Meanwhile, automotive protocols are showing ever increasing performance: the automotive industry has achieved recent breakthroughs in communication networks. Among them, FlexRay specifically targets the next generation of automotive applications enabling Steer-by-Wire and Brake-by-Wire. FlexRay supports a very high data rate, is fault-tolerant, has real-time capabilities, and supports both periodic and aperiodic data transfer on a single bus. Space avionics has benefited from automotive spin-ins in the past, so could FlexRay answer the demands of space systems? This paper aims at demonstrating that space avionics could profit from the use of FlexRay.

  8. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most missions is system complexity due to a need to establish a multi-dimensional structure across hardware, software and spacecraft operations. FM is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. Generally, FM architecture, implementation, and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed this issue in terms of V&V for a representative set of architectures. NASA's Software Assurance Research Program (SARP) has provided funds to NASA IV&V to extend the work performed at the Workshop session in partnership with NASA's Jet Propulsion Laboratory (JPL). NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This SARP initiative focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures and associated V&V/IV&V techniques provides a data set that can enable improved assurance that a system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the space community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.

  9. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    The satellite systems Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the satellite community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research including identification of FM architectures, visibility observations, and methods utilized for V&V/IV&V.

  10. Optimal fault-tolerant control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2017-10-01

    For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraint, high efficiency, low cost and fault diagnosis are six key issues. However, no published work studies control techniques that combine optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a control loop. The fault diagnosis part identifies the SOFC's current fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers for the normal and air compressor fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost under normal SOFC conditions and even under an air compressor fault.
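
    The switching module can be read as a simple dispatch from the diagnosis result to the matching optimizer's setpoints, which then feed the PID loops; a minimal sketch with invented placeholder values follows:

      # Illustrative switching logic only; setpoints are not from the paper.
      SETPOINTS = {
          "normal":           {"power": 1.0, "anode_T": 1023.0, "air_ratio": 8.0},
          "compressor_fault": {"power": 0.8, "anode_T": 1013.0, "air_ratio": 6.5},
      }

      def select_setpoints(diagnosis):
          # Unknown fault types fall back to the conservative normal-mode optimizer.
          return SETPOINTS.get(diagnosis, SETPOINTS["normal"])

      print(select_setpoints("compressor_fault"))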

  11. A recurrent neural-network-based sensor and actuator fault detection and isolation for nonlinear systems with application to the satellite's attitude control subsystem.

    PubMed

    Talebi, H A; Khorasani, K; Tafazoli, S

    2009-01-01

    This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on availability of full state measurements. The stability of the overall FDI scheme in presence of unknown sensor and actuator faults as well as plant and sensor noise and uncertainties is shown by using Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.

  12. Model-based design and experimental verification of a monitoring concept for an active-active electromechanical aileron actuation system

    NASA Astrophysics Data System (ADS)

    Arriola, David; Thielecke, Frank

    2017-09-01

    Electromechanical actuators have become a key technology for the onset of power-by-wire flight control systems in the next generation of commercial aircraft. The design of robust control and monitoring functions for these devices capable to mitigate the effects of safety-critical faults is essential in order to achieve the required level of fault tolerance. A primary flight control system comprising two electromechanical actuators nominally operating in active-active mode is considered. A set of five signal-based monitoring functions are designed using a detailed model of the system under consideration which includes non-linear parasitic effects, measurement and data acquisition effects, and actuator faults. Robust detection thresholds are determined based on the analysis of parametric and input uncertainties. The designed monitoring functions are verified experimentally and by simulation through the injection of faults in the validated model and in a test-rig suited to the actuation system under consideration, respectively. They guarantee a robust and efficient fault detection and isolation with a low risk of false alarms, additionally enabling the correct reconfiguration of the system for an enhanced operational availability. In 98% of the performed experiments and simulations, the correct faults were detected and confirmed within the time objectives set.

  13. A PC based time domain reflectometer for space station cable fault isolation

    NASA Technical Reports Server (NTRS)

    Pham, Michael; McClean, Marty; Hossain, Sabbir; Vo, Peter; Kouns, Ken

    1994-01-01

    Significant problems are faced by astronauts on orbit in the Space Station when trying to locate electrical faults in multi-segment avionics and communication cables. These problems necessitate the development of an automated portable device that will detect and locate cable faults using the pulse-echo technique known as Time Domain Reflectometry. A breadboard time domain reflectometer (TDR) circuit board was designed and developed at the NASA-JSC. The TDR board works in conjunction with a GRiD laptop computer to automate the fault detection and isolation process. A software program was written to automatically display the nature and location of any possible faults. The breadboard system can isolate open circuit and short circuit faults within two feet in a typical space station cable configuration. Follow-on efforts planned for 1994 will produce a compact, portable prototype Space Station TDR capable of automated switching in multi-conductor cables for high fidelity evaluation. This device has many possible commercial applications, including commercial and military aircraft avionics, cable TV, telephone, communication, information and computer network systems. This paper describes the principle of time domain reflectometry and the methodology for on-orbit avionics utility distribution system repair, utilizing the newly developed device called the Space Station Time Domain Reflectometer (SSTDR).
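
    The pulse-echo arithmetic behind TDR is a one-line calculation: the distance to a fault is half the round-trip delay times the propagation velocity of the cable. The velocity factor below is a typical assumed value; as in standard TDR practice, the sign of the reflection distinguishes an open (positive echo) from a short (negative echo):

      C0 = 2.998e8           # speed of light, m/s
      VF = 0.7               # cable velocity factor (assumed)

      def fault_distance(round_trip_s):
          return 0.5 * C0 * VF * round_trip_s

      print(f"{fault_distance(40e-9):.2f} m")  # a 40 ns echo -> about 4.2 m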

  14. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

    In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is given by a truncated normal random variable distribution described by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling by logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of every logic tree branch. The logic tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of mean hazard results to the choice of GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
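
    A hedged sketch of the Monte Carlo treatment: sample fault length and width from truncated normal distributions, convert rupture area to a characteristic magnitude with a published scaling relation (here Wells and Coppersmith, 1994, all slip types), and examine the spread that feeds the hazard maps. All means, sigmas, and bounds are placeholders:

      import numpy as np

      rng = np.random.default_rng(1)

      def trunc_normal(mean, sd, lo, hi, n):
          x = rng.normal(mean, sd, n)
          return np.clip(x, lo, hi)   # crude truncation; rejection sampling is exact

      n = 200                                          # simulations, as in the study
      length = trunc_normal(30.0, 3.0, 20.0, 40.0, n)  # km
      width = trunc_normal(12.0, 1.5, 8.0, 16.0, n)    # km
      m_char = 4.07 + 0.98 * np.log10(length * width)  # magnitude from rupture area

      print(m_char.mean(), m_char.std())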

  15. A design approach for ultrareliable real-time systems

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Harper, Richard E.; Alger, Linda S.

    1991-01-01

    A design approach developed over the past few years to formalize redundancy management and validation is described. Redundant elements are partitioned into individual fault-containment regions (FCRs). An FCR is a collection of components that operates correctly regardless of any arbitrary logical or electrical fault outside the region. Conversely, a fault in an FCR cannot cause hardware outside the region to fail. The outputs of all channels are required to agree bit-for-bit under no-fault conditions (exact bitwise consensus). Synchronization, input agreement, and input validity conditions are discussed. The Advanced Information Processing System (AIPS), which is a fault-tolerant distributed architecture based on this approach, is described. A brief overview of recent applications of these systems and current research is presented.
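
    A minimal sketch of the exact bitwise consensus requirement: with bit-for-bit agreement under no-fault conditions, a simple majority vote over channel outputs both masks and identifies a divergent channel:

      from collections import Counter

      def vote(outputs):
          """outputs: one byte string per redundant channel."""
          winner, count = Counter(outputs).most_common(1)[0]
          assert count > len(outputs) // 2, "no majority: uncontained fault?"
          dissenters = [i for i, o in enumerate(outputs) if o != winner]
          return winner, dissenters

      print(vote([b"\x2a\x01", b"\x2a\x01", b"\x2a\xff"]))  # channel 2 outvoted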

  16. IVHM Framework for Intelligent Integration for Vehicle Health Management

    NASA Technical Reports Server (NTRS)

    Paris, Deidre; Trevino, Luis C.; Watson, Michael D.

    2005-01-01

    Integrated Vehicle Health Management (IVHM) for aerospace vehicles is the process of assessing, preserving, and restoring system functionality across flight, combining sensor and communication technologies for spacecraft that can generate responses through detection, diagnosis, and reasoning, and adapt to system faults in support of Integrated Intelligent Vehicle Management (IIVM). These real-time responses allow the IIVM to modify the affected vehicle subsystem(s) prior to a catastrophic event. Furthermore, this framework integrates technologies which can provide a continuous, intelligent, and adaptive health state of a vehicle and use this information to improve safety and reduce the cost of operations. Recent investments in avionics, health management, and controls have been directed towards IIVM. As this concept has matured, it has become clear that IIVM requires the same sensors and processing capabilities as the real-time avionics functions to support diagnosis of subsystem problems. New sensors have been proposed to augment the avionics sensors and support better system monitoring and diagnostics. As the designs have been considered, a synergy has been realized whereby the real-time avionics can utilize sensors proposed for diagnostics and prognostics to make better real-time decisions in response to detected failures. IIVM provides for a single system allowing modularity of functions and hardware across the vehicle. The framework that supports IIVM consists of 11 major on-board functions necessary to fully manage a space vehicle while maintaining crew safety and mission objectives. These functions are: Guidance and Navigation; Communications and Tracking; Vehicle Monitoring; Information Transport and Integration; Vehicle Diagnostics; Vehicle Prognostics; Vehicle Mission Planning; Automated Repair and Replacement; Vehicle Control; Human Computer Interface; and Onboard Verification and Validation. Furthermore, the presented framework provides complete vehicle management which not only allows for increased crew safety and mission success through new intelligence capabilities, but also yields a mechanism for more efficient vehicle operations.

  17. Extraction of repetitive transients with frequency domain multipoint kurtosis for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liao, Yuhe; Sun, Peng; Wang, Baoxiang; Qu, Lei

    2018-05-01

    The appearance of repetitive transients in a vibration signal is one typical feature of faulty rolling element bearings. However, accurate extraction of these fault-related characteristic components has always been a challenging task, especially when there is interference from large amplitude impulsive noises. A frequency domain multipoint kurtosis (FDMK)-based fault diagnosis method is proposed in this paper. The multipoint kurtosis is redefined in the frequency domain and the computational accuracy is improved. An envelope autocorrelation function is also presented to estimate the fault characteristic frequency, which is used to set the frequency hunting zone of the FDMK. Then, the FDMK, instead of kurtosis, is utilized to generate a fast kurtogram and only the optimal band with maximum FDMK value is selected for envelope analysis. Negative interference from both large amplitude impulsive noise and shaft rotational speed related harmonic components are therefore greatly reduced. The analysis results of simulation and experimental data verify the capability and feasibility of this FDMK-based method.
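
    A hedged sketch of kurtogram-style band selection, with ordinary envelope kurtosis standing in for the paper's FDMK (which instead scores repetitive transients at the estimated fault frequency); the signal and candidate bands are toys:

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert
      from scipy.stats import kurtosis

      fs = 12_000
      sig = np.random.default_rng(0).normal(size=fs)   # placeholder signal, 1 s

      def band_kurtosis(x, lo, hi):
          b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          env = np.abs(hilbert(filtfilt(b, a, x)))     # band-limited envelope
          return kurtosis(env)

      bands = [(500, 1500), (1500, 3000), (3000, 5000)]
      best = max(bands, key=lambda b: band_kurtosis(sig, *b))
      print("band selected for envelope analysis:", best)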

  18. Study of a unified hardware and software fault-tolerant architecture

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart

    1989-01-01

    A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.

  19. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in handling off-nominal missions and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities.
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithm performance in the FSW development and test processes.
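
    A minimal sketch of how a testbed can exercise an abort-trigger algorithm against nominal and off-nominal cases; the redline, persistence rule, and test vectors are illustrative stand-ins, not SLS M&FM algorithms:

      def abort_trigger(samples, redline=100.0, persistence=3):
          """Trip only if the redline is exceeded `persistence` samples in a row."""
          run = 0
          for i, x in enumerate(samples):
              run = run + 1 if x > redline else 0
              if run >= persistence:
                  return ("ABORT", i)
          return ("NOMINAL", None)

      cases = {
          "nominal":         [90, 95, 99, 98],
          "transient spike": [90, 150, 95, 96],         # must not trip
          "hard failure":    [90, 150, 155, 160, 170],  # must trip at index 3
      }
      for name, seq in cases.items():
          print(name, abort_trigger(seq))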

  20. Geodetic Network Design and Optimization on the Active Tuzla Fault (Izmir, Turkey) for Disaster Management

    PubMed Central

    Halicioglu, Kerem; Ozener, Haluk

    2008-01-01

    Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which trends NE–SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters – standard strike-slip model of dislocation theory in an elastic half-space – is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as background data for disaster management systems. PMID:27873783

  2. A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity

    NASA Astrophysics Data System (ADS)

    Healy, D.; Rizzo, R. E.

    2015-12-01

    Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and of others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Deformation in the Earth, by contrast, is widely accepted to be three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from that of conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our assessment of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns than in those with conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.

  3. Fault Tolerance Middleware for a Multi-Core System

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.

    2012-01-01

    Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
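
    The flow described above lends itself to a short sketch: an application node's error notification is turned into an error object by an error factory, a severity is assigned, the introspection strategy consults its knowledge base for a recommended response, and the responder's action is fed back to introspection to inform the next recommendation. The class names, severity levels, and responses below are illustrative assumptions, not the flight middleware's API.

```python
# Illustrative sketch of the FTM flow described above; names and the
# knowledge base are assumptions, not the NASA implementation.
from dataclasses import dataclass

@dataclass
class Error:
    node: int
    kind: str
    severity: str

class ErrorFactory:
    SEVERITY = {"parity": "minor", "hang": "major", "crash": "critical"}
    def create(self, node, kind):
        # Severity is assigned at error-object creation time.
        return Error(node, kind, self.SEVERITY.get(kind, "unknown"))

class Introspection:
    """Knowledge-base-driven strategy that remembers past responses."""
    def __init__(self):
        self.history = []
    def recommend(self, err):
        # If a rollback already failed to clear errors, escalate to restart.
        if err.severity == "critical" or "rollback" in self.history:
            return "restart"
        return {"minor": "log", "major": "rollback"}.get(err.severity, "ignore")
    def notify(self, response):
        self.history.append(response)

class Responder:
    def invoke(self, err, response):
        print(f"node {err.node}: {err.kind} -> {response}")

class FaultManager:
    def __init__(self):
        self.factory = ErrorFactory()
        self.strategy = Introspection()
        self.responder = Responder()
    def on_error(self, node, kind):
        err = self.factory.create(node, kind)
        response = self.strategy.recommend(err)
        self.responder.invoke(err, response)
        self.strategy.notify(response)   # close the feedback loop

ftm = FaultManager()
ftm.on_error(3, "hang")   # -> rollback
ftm.on_error(3, "hang")   # -> restart (rollback did not clear the fault)
```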

  4. 49 CFR 176.905 - Motor vehicles or mechanical equipment powered by internal combustion engines.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of ignition. A motor vehicle or mechanical equipment showing any signs of leakage or electrical fault... smoke or fire detection system capable of alerting personnel on the bridge. (h) All electrical equipment...

  5. A 13 km Long Paleoseismological Trench in Western Germany

    NASA Astrophysics Data System (ADS)

    Grützner, C. H.; Reicherter, K.; Winandy, J.

    2012-04-01

    The expansion of an open-pit lignite mine in this area makes it necessary to relocate one of Germany's most frequented, E-W trending highways over a length of 13 km during the coming months and years. In the course of this work, one of the largest faults of the Lower Rhine Embayment (LRE), the Rurrand Fault, was already cut in 2010. We applied geological mapping and near-surface geophysical techniques to investigate this possible candidate for the 1756 Düren earthquake (M>6, considered the strongest historical earthquake in Germany), and found clear indications of recent active faulting. The LRE in western Germany is one of the most seismically active areas in Central Europe. Earthquakes stronger than M6 have been documented by paleoseismological and archeoseismological investigations and by written sources. Instrumental seismicity has reached ML5.9 (Mw5.4; April 13th, 1992) in this densely populated area, in which nearby Cologne alone has more than one million inhabitants. Active faults trend NW-SE in a horst-graben system, parallel to the rivers Rhine and Rur. Recent studies report that active faults in the study area are characterized by recurrence periods on the order of tens of ka. These faults in western Germany are often not visible in the field due to relatively high erosion rates, and the seismic hazard might therefore be underestimated. The ongoing highway construction works will cut more (active) faults. We expect at least eight already-mapped faults to be cut by the earthworks, some of which are capable of causing damaging earthquakes judging from their length alone. The construction work is a unique opportunity for paleoseismological investigations at known but as yet unstudied faults. We hope to gather additional data to improve seismic hazard estimates in western Germany.

  6. Global Search of Triggered Tectonic Tremor

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Chao, K.; Gonzalez-Huizar, H.; Wang, B.; Ojha, L.; Yang, H.

    2013-05-01

    Deep tectonic tremor has been observed at major plate-boundary faults around the Pacific Rim. While regular or ambient tremor occurs spontaneously or accompanies slow-slip events, tremor can also be triggered by large distant earthquakes and by solid earth tides. Because triggered tremor occurs on the same fault patches as ambient tremor and is relatively easy to identify, a systematic global search of triggered tremor could help to identify the physical mechanisms and necessary conditions for tremor generation. Here we conduct a global search for tremor triggered by large teleseismic earthquakes. We focus mainly on major faults with significant strain accumulation where no tremor has been reported before. These include the subduction zones in Central and South America, strike-slip faults around the Caribbean plate, the Queen Charlotte-Fairweather fault system and the Denali fault in western Canada and Alaska, the Sumatra-Java subduction zone, the Himalayan frontal thrust faults, as well as major strike-slip faults around Tibet. In each region, we first compute the predicted dynamic stresses σd from global earthquakes with magnitude >= 5.0 in the past 20 years, and select events with σd > 1 kPa. Next, we download seismic data recorded by stations from local or global seismic networks, and identify triggered tremor as a high-frequency, non-impulsive signal in phase with the large-amplitude teleseismic waves. Where station distributions are dense enough, we also locate tremor using standard envelope cross-correlation techniques. Finally, we calculate the triggering potential of the Love and Rayleigh waves from the local fault orientation and surface-wave incidence angles. So far we have found several new places that are capable of generating triggered tremor. We will summarize these observations and discuss their implications for the physical mechanisms of tremor and remote triggering.
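
    The event-selection step can be sketched under the standard plane-wave approximation σd ≈ Gv/c, i.e. shear modulus times peak particle velocity over phase velocity; events are retained when the predicted dynamic stress exceeds the 1 kPa threshold quoted above. The parameter values and the mini-catalogue below are illustrative assumptions.

```python
# Sketch of the sigma_d > 1 kPa selection step, using the plane-wave
# approximation sigma_d ~ G * v / c. Parameter values and the catalogue
# are illustrative assumptions.
G = 30e9          # shear modulus, Pa (typical crustal value)
C = 3.5e3         # surface-wave phase velocity, m/s
THRESHOLD = 1e3   # 1 kPa selection threshold

def dynamic_stress(peak_velocity_m_s):
    return G * peak_velocity_m_s / C

# (event id, predicted peak ground velocity in m/s at the target fault)
catalogue = [("EQ1", 2.0e-4), ("EQ2", 5.0e-5), ("EQ3", 1.5e-3)]
selected = [eid for eid, v in catalogue if dynamic_stress(v) > THRESHOLD]
print(selected)   # events worth scanning for triggered tremor
```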

  7. Fault ride-through enhancement using an enhanced field oriented control technique for converters of grid connected DFIG and STATCOM for different types of faults.

    PubMed

    Ananth, D V N; Nagesh Kumar, G V

    2016-05-01

    With increasing electric power demand, transmission lines are forced to operate close to their full load, and with drastic changes in weather conditions they run closer to their thermal limits, leaving the system with a smaller security margin. To help meet the increased power demand, a doubly fed induction generator (DFIG) based wind generation system is an attractive alternative, and a STATCOM can be adopted to improve power flow capability and increase security. As per modern grid rules, a DFIG needs to ride through severe grid faults without losing synchronism, a capability called low voltage ride-through (LVRT). Hence, an enhanced field oriented control technique (EFOC) was adopted in the rotor-side converter of the DFIG to improve power transfer and to improve dynamic and transient stability. A STATCOM is coordinated with the system to obtain better stability and enhanced operation during grid faults. In the EFOC technique, the rotor flux reference changes from synchronous speed to zero during the fault so that current is injected at the rotor slip frequency, and the DC-offset component of the flux is controlled during its decomposition under symmetric and asymmetric faults. This offset flux component decays in an oscillatory fashion under conventional field oriented control, whereas EFOC aims to damp it quickly. The proposed scheme mitigates voltage dips and limits surge currents to enhance the operation of the DFIG during symmetrical and asymmetrical faults. System performance was compared with and without a STATCOM for different types of faults - single line to ground, double line to ground and triple line to ground - occurring at the point of common coupling with a very small fault resistance of 0.001 Ω. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
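
    A minimal sketch of the EFOC idea quoted above, under assumed parameters: when a grid fault is detected, the flux-reference frame speed switches from synchronous speed to zero, and the DC-offset flux component is actively damped with a short time constant instead of being left to oscillate.

```python
# Minimal sketch (assumed parameters, not the paper's controller): the
# flux-reference frame speed drops from synchronous to zero on fault
# detection, and the DC-offset flux is damped first-order.
import numpy as np

W_SYNC = 2 * np.pi * 50           # synchronous electrical speed, rad/s (50 Hz grid)

def flux_reference_speed(fault_detected):
    return 0.0 if fault_detected else W_SYNC

def damp_offset_flux(psi_offset, dt, tau=0.02):
    """First-order damping of the flux DC offset (tau is an assumption)."""
    return psi_offset * np.exp(-dt / tau)

psi = 0.8                          # p.u. offset flux at fault inception
for _ in range(5):                 # five 10 ms control intervals
    psi = damp_offset_flux(psi, dt=0.01)
print(f"offset flux after 50 ms ~ {psi:.3f} p.u., "
      f"ref speed during fault = {flux_reference_speed(True)} rad/s")
```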

  8. Crustal structure and fault geometry of the 2010 Haiti earthquake from temporary seismometer deployments

    USGS Publications Warehouse

    Douilly, Roby; Haase, Jennifer S.; Ellsworth, William L.; Bouin, Marie‐Paule; Calais, Eric; Symithe, Steeve J.; Armbruster, John G.; Mercier de Lépinay, Bernard; Deschamps, Anne; Mildor, Saint‐Louis; Meremonte, Mark E.; Hough, Susan E.

    2013-01-01

    Haiti has been the locus of a number of large and damaging historical earthquakes. The recent 12 January 2010 Mw 7.0 earthquake affected cities that were largely unprepared, which resulted in tremendous losses. It was initially assumed that the earthquake ruptured the Enriquillo Plantain Garden fault (EPGF), a major active structure in southern Haiti, known from geodetic measurements and its geomorphic expression to be capable of producing M 7 or larger earthquakes. Global Positioning Systems (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data, however, showed that the event ruptured a previously unmapped fault, the Léogâne fault, a north‐dipping oblique transpressional fault located immediately north of the EPGF. Following the earthquake, several groups installed temporary seismic stations to record aftershocks, including ocean‐bottom seismometers on either side of the EPGF. We use data from the complete set of stations deployed after the event, on land and offshore, to relocate all aftershocks from 10 February to 24 June 2010, determine a 1D regional crustal velocity model, and calculate focal mechanisms. The aftershock locations from the combined dataset clearly delineate the Léogâne fault, with a geometry close to that inferred from geodetic data. Its strike and dip closely agree with the global centroid moment tensor solution of the mainshock but with a steeper dip than inferred from previous finite fault inversions. The aftershocks also delineate a structure with shallower southward dip offshore and to the west of the rupture zone, which could indicate triggered seismicity on the offshore Trois Baies reverse fault. We use first‐motion focal mechanisms to clarify the relationship of the fault geometry to the triggered aftershocks.

  9. Late Quaternary faulting in the Vallo di Diano basin (southern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Villani, F.; Pierdominici, S.; Cinti, F. R.

    2009-12-01

    The Vallo di Diano is the largest Quaternary extensional basin along the thrust-belt axis of the southern Apennines (Italy). This portion of the chain is highly seismic and is currently subject to NE-directed extension, which triggers large (M> 6) normal-faulting earthquakes along NW-trending faults. The eastern edge of the Vallo di Diano basin is bounded by an extensional fault system featuring three main NW-trending, SW-dipping, right-stepping, ~15-17 km long segments (from north to south: the Polla, Atena Lucana-Sala Consilina and Padula faults). Holocene activity has so far been documented only for the Polla segment. We have therefore focused our geomorphological and paleoseismological study on the southern portion of the system, particularly along the ~ 4 km long overlap zone of the Atena Lucana-Sala Consilina and Padula faults. The latter is characterized by a complex system of coalescent alluvial fans, Middle Pleistocene to Holocene in age. Here we recognized a > 4 km long and 0.5-1.4 km wide set of scarps (ranging in height between 1 m and 2.5 m) affecting Late Pleistocene - Holocene alluvial fans. In the same area, two Late Pleistocene volcanoclastic layers at the top of an alluvial fan exposed in a quarry are affected by ~ 1 m normal displacements. Moreover, a trench excavated across a 2 m high scarp affecting a Holocene fan revealed warping of Late Holocene debris flow deposits, with a total vertical throw of about 0.3 m. We therefore infer that the overlap zone of the Atena Lucana-Sala Consilina and Padula faults is a breached relay ramp, generated by hard-linkage of the two fault segments since the Late Pleistocene. This ~ 32 km long fault system is active and capable of generating Mw ≥ 6.5 earthquakes.

  10. Characterising the range of seismogenic behaviour on detachment faults - the case of 13°20′N, Mid-Atlantic Ridge.

    NASA Astrophysics Data System (ADS)

    Craig, T. J.; Parnell-Turner, R.

    2017-12-01

    Extension at slow- and intermediate-spreading mid-ocean ridges is commonly accommodated through slip on long-lived detachment faults. These curved, convex-upward faults consist of a steeply-dipping section thought to be rooted in the lower crust or upper mantle which rotates to progressively shallower dip-angles at shallower depths, resulting in a domed, sub-horizontal oceanic core complex at the seabed. Although it is accepted that detachment faults can accumulate kilometre-scale offsets over millions of years, the mechanism of slip, and their capacity to sustain the shear stresses necessary to produce large earthquakes, remains debated. In this presentation we will show a comprehensive seismological study of an active oceanic detachment fault system on the Mid-Atlantic Ridge near 13°20′N, combining the results from a local ocean-bottom seismograph deployment with waveform inversion of a series of larger, teleseismically-observed earthquakes. The coincidence of these two datasets provides a more complete characterisation of rupture on the fault, from its initial beginnings within the uppermost mantle to its exposure at the surface. Our results demonstrate that although slip on the steeply-dipping portion of the detachment fault is accommodated by failure in numerous microearthquakes, the shallower-dipping section of the fault within the upper few kilometres is relatively strong, and is capable of producing large-magnitude earthquakes. Slip on the shallow portion of active detachment faults at relatively low angles may therefore account for many more large-magnitude earthquakes at mid-ocean ridges than previously thought, and suggests that the lithospheric strength at slow-spreading mid-ocean ridges may be concentrated at shallow depths.

  11. "HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    NASA Astrophysics Data System (ADS)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) to the investigation of spatio-temporal seismicity evolution, in particular the mechanisms associated with the largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible, in a conceptually comparable way, for the development, evolution and maintenance of younger fault structures and of the primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy, whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and the associated seismic response patterns within the HOT concept, and outline fundamental differences between this novel interpretation and more orthodox viewpoints such as the criticality concept. The discussion is flanked by numerical simulations of a 2D fault model, in which we investigate different feedback mechanisms and their effect on seismicity evolution. We introduce an approach to estimate the state of a fault, and thus its capability of generating a large (system-wide) event, assuming likely heterogeneous distributions of hypocenters and stresses, respectively.

  12. Deep crustal faults and the origin and long-term flank stability of Mt. Etna : First results from the CIRCEE cruise (Oct. 2013)

    NASA Astrophysics Data System (ADS)

    Gutscher, Marc-Andre; Dominguez, Stephane; Mercier de Lepinay, Bernard; Pinheiro, Luis; Babonneau, Nathalie; Cattaneo, Antonio; LeFaou, Yann; Barreca, Giovanni; Micallef, Aaron; Rovere, Marzia

    2014-05-01

    A relation between deep crustal faults and the origin of Mount Etna, the largest and most active volcano in Europe, has long been suspected due to its unusual geodynamic location. Results from a new marine geophysical survey offshore eastern Sicily reveal the detailed geometry (location, length, dip and orientation) of a two-branched, 200-km long, lithospheric-scale fault system, long sought as the cause of Mount Etna. Using high-resolution bathymetry and seismic profiling, we image a 60-km long, previously unidentified, NW-trending fault with evidence of recent displacement at the seafloor, offsetting Holocene sediments. This newly identified fault connects, NE of Catania, to a known 40-km long offshore-onshore fault system dissecting the southeastern flank of Mount Etna, generally interpreted as a purely gravitational collapse structure. Geological and morphological field studies together with earthquake focal mechanisms indicate active dextral strike-slip motion along the onshore and shallow offshore portions of this 40 + 60 km long segment. The southern, 100-km branch of the fault is associated with a sub-vertical lithospheric-scale tear fault showing pure down-to-the-east normal faulting and a 500+ m thick elongate basin marked by syn-tectonic Plio-Quaternary sediment fill. Together they represent two kinematically distinct strands of the long-sought "STEP" (Subduction Tear Edge Propagator) fault, whose expression at depth controls the position of Mount Etna. Both 100-km long branches of the fault system are mechanically capable of generating magnitude 7 earthquakes (e.g., the 1693 Catania earthquake, the strongest in Italian history, which caused 40,000 deaths). We conclude that this deep-rooted lithospheric weakness guides gradual downslope creep of Mount Etna and may lead to long-term catastrophic flank collapse, with an associated tsunami, through large-scale mass wasting.

  13. A probabilistic method to diagnose faults of air handling units

    NASA Astrophysics Data System (ADS)

    Dey, Debashis

    The air handling unit (AHU) is one of the most extensively used pieces of equipment in large commercial buildings. This device is typically customized and lacks quality system integration, which can result in hardware failures and controller errors. Air handling unit Performance Assessment Rules (APAR) is a fault detection tool that uses a set of expert rules derived from mass and energy balances to detect faults in air handling units. APAR is computationally simple enough that it can be embedded in commercial building automation and control systems, and it relies only upon sensor data and control signals that are commonly available in these systems. Although APAR has many advantages over other methods - for example, it requires no training data and is easy to implement commercially - most of the time it is unable to provide a diagnosis of the faults it detects. For instance, a fault in a temperature sensor could be a fixed bias, a drifting bias, inappropriate location, or complete failure; similarly, a fault in the mixing box could be a return or outdoor damper that leaks or is stuck. In addition, when multiple rules are satisfied, the list of candidate faults grows, and a purely rule-based fault detection system has no proper way to arrive at the correct diagnosis. To overcome this limitation, we propose the Bayesian Belief Network (BBN) as a diagnostic tool. A BBN can emulate the diagnostic thinking of FDD experts in a probabilistic way. In this study we developed a new way to detect and diagnose faults in AHUs by combining APAR rules with a Bayesian Belief Network. The BBN is used as a decision-support tool for the rule-based expert system: it is highly capable of prioritizing faults when multiple rules are satisfied simultaneously, and it can draw on previous AHU operating conditions and maintenance records to provide a proper diagnosis. The proposed model is validated with real-time measured data from a campus building at the University of Texas at San Antonio (UTSA). The results show that the BBN correctly prioritizes faults, which can be verified by manual investigation.
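
    A minimal sketch of the combination, with a made-up rule, priors, and likelihoods: an APAR-style rule flags a symptom, and a (here, naive single-node) Bayesian update ranks the candidate faults that could explain it.

```python
# Sketch of the combined scheme: an APAR-style rule flags a symptom, then
# a Bayesian update prioritizes candidate faults. The rule, priors, and
# likelihoods are illustrative assumptions.

# Example rule: in heating mode, supply air should be warmer than mixed air.
def apar_rule_1(t_supply, t_mixed):
    return t_supply < t_mixed      # True = rule violated (symptom present)

priors = {"stuck_damper": 0.2, "sensor_bias": 0.5, "coil_valve_leak": 0.3}
# P(rule_1 violated | fault), assumed from expert knowledge.
likelihood = {"stuck_damper": 0.7, "sensor_bias": 0.9, "coil_valve_leak": 0.4}

def posterior(symptom_present):
    unnorm = {f: priors[f] * (likelihood[f] if symptom_present else 1 - likelihood[f])
              for f in priors}
    z = sum(unnorm.values())
    return {f: p / z for f, p in unnorm.items()}

if apar_rule_1(t_supply=16.0, t_mixed=18.0):               # rule violated
    ranked = sorted(posterior(True).items(), key=lambda kv: -kv[1])
    print(ranked)   # sensor_bias emerges as the most probable diagnosis
```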

  14. Depth-varying seismogenesis on an oceanic detachment fault at 13°20′N on the Mid-Atlantic Ridge

    NASA Astrophysics Data System (ADS)

    Craig, Timothy J.; Parnell-Turner, Ross

    2017-12-01

    Extension at slow- and intermediate-spreading mid-ocean ridges is commonly accommodated through slip on long-lived faults called oceanic detachments. These curved, convex-upward faults consist of a steeply-dipping section thought to be rooted in the lower crust or upper mantle which rotates to progressively shallower dip-angles at shallower depths. The commonly-observed result is a domed, sub-horizontal oceanic core complex at the seabed. Although it is accepted that detachment faults can accumulate kilometre-scale offsets over millions of years, the mechanism of slip, and their capacity to sustain the shear stresses necessary to produce large earthquakes, remains subject to debate. Here we present a comprehensive seismological study of an active oceanic detachment fault system on the Mid-Atlantic Ridge near 13°20′N, combining the results from a local ocean-bottom seismograph deployment with waveform inversion of a series of larger teleseismically-observed earthquakes. The unique coincidence of these two datasets provides a comprehensive definition of rupture on the fault, from the uppermost mantle to the seabed. Our results demonstrate that although slip on the deep, steeply-dipping portion of detachment faults is accommodated by failure in numerous microearthquakes, the shallow, gently-dipping section of the fault within the upper few kilometres is relatively strong, and is capable of producing large-magnitude earthquakes. This result brings into question the current paradigm that the shallow sections of oceanic detachment faults are dominated by low-friction mineralogies and therefore slip aseismically, but is consistent with observations from continental detachment faults. Slip on the shallow portion of active detachment faults at relatively low angles may therefore account for many more large-magnitude earthquakes at mid-ocean ridges than previously thought, and suggests that the lithospheric strength at slow-spreading mid-ocean ridges may be concentrated at shallow depths.

  15. Digital patient: Personalized and translational data management through the MyHealthAvatar EU project.

    PubMed

    Kondylakis, Haridimos; Spanakis, Emmanouil G; Sfakianakis, Stelios; Sakkalis, Vangelis; Tsiknakis, Manolis; Marias, Kostas; Xia Zhao; Hong Qing Yu; Feng Dong

    2015-08-01

    The advancements in healthcare practice have brought to the fore the need for flexible access to health-related information and created an ever-growing demand for the design and development of data management infrastructures for translational and personalized medicine. In this paper, we present the data management solution implemented for the MyHealthAvatar EU research project, a project that attempts to create a digital representation of a patient's health status. The platform is capable of aggregating several knowledge sources relevant to the provision of individualized personal services. To this end, state-of-the-art technologies are exploited, such as ontologies to model all available information, semantic integration to enable data and query translation, and a variety of linking services to allow connection to external sources. All original information is stored in a NoSQL database for reasons of efficiency and fault tolerance. It is then semantically uplifted through a semantic warehouse, which enables efficient access to it. These technologies are combined to create a novel web-based platform allowing seamless user interaction through APIs that support personalized, granular and secure access to the relevant information.

  16. Microstructures imply cataclasis and authigenic mineral formation control geomechanical properties of New Zealand's Alpine Fault

    NASA Astrophysics Data System (ADS)

    Schuck, B.; Janssen, C.; Schleicher, A. M.; Toy, V. G.; Dresen, G.

    2018-05-01

    The Alpine Fault is capable of generating large (MW > 8) earthquakes, is the main geohazard on South Island, NZ, and is late in its 250-291-year seismic cycle. To minimize its hazard potential, it is indispensable to identify and understand the processes influencing the geomechanical behavior and strength evolution of the fault. High-resolution microstructural, mineralogical and geochemical analyses of the Alpine Fault's core demonstrate wall-rock fragmentation, assisted by mineral dissolution, and cementation resulting in the formation of a fine-grained principal slip zone (PSZ). A complex network of anastomosing and mutually cross-cutting calcite veins implies that faulting occurred during episodes of dilation, slip and sealing. Fluid-assisted dilatancy leads to a significant volume increase accommodated by vein formation in the fault core. Undeformed euhedral chlorite crystals and calcite veins cutting footwall gravels demonstrate that these processes occurred very close to the Earth's surface. Microstructural evidence indicates that cataclastic processes dominate the deformation, and we suggest that powder lubrication and grain rolling, particularly influenced by abundant nanoparticles, play a key role in the fault core's velocity-weakening behavior, rather than frictional sliding. This is further supported by the absence of smectite, which is reasonable given recently measured geothermal gradients of more than 120 °C km-1 and the impermeable nature of the PSZ, which both limit the growth of this phase and restrict its stability to shallow depths. Our observations demonstrate that high-temperature fluids can influence authigenic mineral formation and thus control the fault's geomechanical behavior and the cyclic evolution of its strength.

  17. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
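
    The analytical core can be sketched briefly: for a fixed geometry the forward model is linear in slip, so with Gaussian prior and noise the slip posterior is Gaussian in closed form. The Green's functions, noise levels, and data below are toy assumptions; the real-time geometry search over Bayesian evidence is only noted in a comment.

```python
# Minimal sketch of the analytical step: for a fixed fault geometry the
# forward model is linear (d = G m + noise), so with a Gaussian prior and
# Gaussian noise the slip posterior is available in closed form. The
# Green's functions and data are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(6, 3))          # Green's functions: 6 offsets, 3 slip patches
m_true = np.array([0.5, 1.2, 0.1])   # "true" slip (m)
sigma_d, sigma_m = 0.01, 1.0         # data-noise and prior standard deviations
d = G @ m_true + rng.normal(scale=sigma_d, size=6)   # observed static offsets

# Closed-form Gaussian posterior:
#   C = (G'G / sd^2 + I / sm^2)^-1,  mean = C G'd / sd^2
C = np.linalg.inv(G.T @ G / sigma_d**2 + np.eye(3) / sigma_m**2)
m_mean = C @ (G.T @ d / sigma_d**2)
print(m_mean, np.sqrt(np.diag(C)))   # slip estimate with 1-sigma uncertainties

# In the real-time setting this solve would be repeated over candidate
# fault geometries, with the geometry chosen by Bayesian evidence.
```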

  18. Product quality management based on CNC machine fault prognostics and diagnosis

    NASA Astrophysics Data System (ADS)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis which combines ideas from Neuro-fuzzy Networks (NF), Dynamic Bayesian Networks (DBN) and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two respects, namely first- and second-degree faults. First-degree faults are instantaneous in nature, while second-degree faults are evolutionary and appear as a developing phenomenon that starts at an initial stage, goes through a development stage and finally ends at a mature stage. These categories of faults have a lifetime that is inversely proportional to the machine tool's life according to a modified version of Taylor's equation. For fault diagnosis, the framework consists of two phases: the first focuses on fault prognosis, which is done online, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to continue Condition-Based Maintenance (CBM) or to embark on fault diagnosis, based on the severity of a fault. The second phase comes into action only when an evolving fault goes beyond a critical threshold, called the CBM limit, at which point a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.
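
    The two-phase logic can be sketched as follows, with an assumed severity model and threshold: an on-line predictor extrapolates the severity of an evolving (second-degree) fault, and only when the prediction crosses the CBM limit is the heavier DBN/particle-filter diagnosis invoked.

```python
# Sketch of the two-phase decision described above. The severity model
# and the CBM limit are illustrative assumptions; a naive linear
# extrapolation stands in for the paper's neuro-fuzzy predictor.
CBM_LIMIT = 0.6

def predicted_severity(history, horizon=1):
    """One-step linear extrapolation of the fault-severity trend."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + horizon * (history[-1] - history[-2])

def phase_one(history):
    if predicted_severity(history) >= CBM_LIMIT:
        return "invoke fault diagnosis (DBN + particle filter)"
    return "continue condition-based maintenance"

print(phase_one([0.20, 0.28]))   # evolving but below the limit -> CBM
print(phase_one([0.45, 0.55]))   # trend crosses the CBM limit -> diagnosis
```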

  19. A global spacecraft control network for spacecraft autonomy research

    NASA Technical Reports Server (NTRS)

    Kitts, Christopher A.

    1996-01-01

    The development and implementation of the Automated Space System Experimental Testbed (ASSET) space operations and control network is reported on. This network will serve as a command and control architecture for spacecraft operations and will offer a real testbed for the application and validation of advanced autonomous spacecraft operations strategies. The proposed network will initially consist of globally distributed amateur radio ground stations at locations throughout North America and Europe. These stations will be linked via the Internet to various control centers. The Stanford (CA) control center will be capable of human- and computer-based decision making for the coordination of user experiments, resource scheduling and fault management. The project's system architecture is described, together with its proposed use as a command and control system, its value as a testbed for spacecraft autonomy research, and its current implementation.

  20. Distributed processing of a GPS receiver network for a regional ionosphere map

    NASA Astrophysics Data System (ADS)

    Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun

    2018-01-01

    This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver’s differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field-collected measurements were performed.
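
    The two stages can be sketched under assumed models: a scalar random-walk Kalman filter per pierce point tracks a vertical ionospheric delay, and an ordinary-kriging combiner merges the filter outputs at a grid point. The covariance function, geometry, and noise values are illustrative assumptions.

```python
# Sketch of the two-stage estimator: per-receiver scalar Kalman filters
# track VIDs, and ordinary kriging merges them at a grid point. The
# random-walk model, covariance function, and geometry are assumptions.
import numpy as np

class VidKalman:
    """1-D random-walk Kalman filter for one pierce point's VID (in TECU)."""
    def __init__(self, x0=5.0, p0=4.0, q=0.01, r=0.25):
        self.x, self.p, self.q, self.r = x0, p0, q, r
    def update(self, z):
        self.p += self.q                    # predict (random-walk process noise)
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct with the new measurement
        self.p *= (1 - k)
        return self.x

def ordinary_kriging(coords, values, grid_pt, length=300.0):
    """Ordinary kriging with an exponential covariance (length in km)."""
    cov = lambda h: np.exp(-h / length)
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(coords - grid_pt, axis=1))
    w = np.linalg.solve(A, b)[:n]           # kriging weights (plus Lagrange term)
    return w @ values

# Three receivers' filters, fed one epoch of slant-derived VID measurements.
filters = [VidKalman() for _ in range(3)]
vids = np.array([f.update(z) for f, z in zip(filters, [6.1, 5.4, 5.8])])
coords = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0]])   # km
print(ordinary_kriging(coords, vids, np.array([150.0, 150.0])))
```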

  1. Human Factors Considerations for Safe Recovery from Faults In Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy; Belcastro, C. M. (Technical Monitor)

    2003-01-01

    It is now possible - and important - to develop systems to help resolve Flight Control System (FCS) faults. From a human factors viewpoint, it is imperative that these systems take on roles, and provide functions, that are the most supportive to the pilot, given the stress, time pressure and workload they may experience following an FCS fault. FCS fault recovery systems may provide several different functions, including alerting, control assistance, and decision aiding. The biggest human factors questions concern the role suitable for the technology, and its specific functioning to achieve that role. Specifically, for these systems to be effective, they must meet the fundamental requirements that (1) they alert pilots to problems early enough that the pilot can reasonably resolve the fault and regain control of the aircraft, and that (2) if the aircraft's handling qualities are severely degraded, the HMS provides the appropriate stability augmentation to help the pilot stabilize and control the aircraft. This project undertook several research steps to develop such systems, focusing on the capabilities of pilots and on realistically attainable technologies. The ability to estimate which functions are the most valuable will help steer system development in the directions that can establish the highest safety levels.

  2. Along-strike variations of the partitioning of convergence across the Haiyuan fault system detected by InSAR

    NASA Astrophysics Data System (ADS)

    Daout, S.; Jolivet, R.; Lasserre, C.; Doin, M.-P.; Barbot, S.; Tapponnier, P.; Peltzer, G.; Socquet, A.; Sun, J.

    2016-04-01

    Oblique convergence across Tibet leads to slip partitioning, with the coexistence of strike-slip, normal and thrust motion on major fault systems. A key point is to understand and model how faults interact and accumulate strain at depth. Here, we extract ground deformation across the Haiyuan Fault restraining bend, at the northeastern boundary of the Tibetan plateau, from Envisat radar data spanning the 2001-2011 period. We show that the complexity of the surface displacement field can be explained by the partitioning of a uniform deep-seated convergence. Mountains and sand dunes in the study area make the radar data processing challenging and require the latest developments in processing procedures for Synthetic Aperture Radar interferometry. The processing strategy is based on a small-baseline approach. Before unwrapping, we correct for atmospheric phase delays from global atmospheric models and for digital elevation model errors. A series of filtering steps is applied to improve the signal-to-noise ratio across the high ranges of the Tibetan plateau and the phase unwrapping capability across the fault, required for a reliable estimate of fault movement. We then jointly invert our InSAR time series together with published GPS displacements to test a proposed long-term slip-partitioning model between the Haiyuan and Gulang left-lateral faults and the Qilian Shan thrusts. We explore the geometry of the fault system at depth and the associated slip rates using a Bayesian approach, and test the consistency of present-day geodetic surface displacements with a long-term tectonic model. We determine a uniform convergence rate of 10 [8.6-11.5] mm yr-1 oriented N89 [81-97]°E across the whole fault system, with variable partitioning west and east of a major extensional fault-jog (the Tianzhu pull-apart basin). Our 2-D model along two profiles perpendicular to the fault system gives a quantitative understanding of how crustal deformation is accommodated by the various branches of this thrust/strike-slip fault system and demonstrates how the geometry of the Haiyuan fault system controls the partitioning of the deep secular motion.
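
    The partitioning statement reduces to simple geometry and can be sketched with the rates quoted above: the uniform convergence vector is decomposed into fault-parallel (strike-slip) and fault-normal (shortening) components for an assumed local fault strike.

```python
# Sketch of the partitioning idea: a uniform deep convergence vector is
# decomposed into fault-parallel and fault-normal components. The rate
# and azimuth follow the values quoted above; the local fault strike is
# an assumption, and the geometry is deliberately simplified.
import numpy as np

v, az_v = 10.0, 89.0             # convergence rate (mm/yr) and azimuth N89°E
strike = 105.0                   # assumed local strike of the Haiyuan fault (deg)

theta = np.radians(az_v - strike)
parallel = v * np.cos(theta)     # accommodated as left-lateral strike-slip
normal = v * np.sin(theta)       # accommodated as shortening on the thrusts
print(f"strike-slip ~ {parallel:.1f} mm/yr, shortening ~ {abs(normal):.1f} mm/yr")
```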

  3. Ground Deformation near active faults in the Kinki district, southwest Japan, detected by InSAR

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Ozawa, T.

    2016-12-01

    The Kinki district, southwest Japan, consists of ranges and plains between which active faults reside. The Osaka plain lies in the middle of this district and is surrounded by the Rokko, Arima-Takatsuki, Ikoma, Kongo and Median Tectonic Line fault zones, in clockwise order. These faults are considered capable of generating earthquakes of magnitude larger than 7. The 1995 Kobe earthquake is the most recent activity of the Rokko fault (a NE-SW trending dextral fault). Monitoring ground deformation with high spatial resolution is therefore essential to evaluate seismic hazards in this area. We collected and analyzed the available SAR images, including ERS-1/2, Envisat, JERS-1, TerraSAR-X, ALOS/PALSAR and ALOS-2/PALSAR-2, to reveal ground deformation during the past 20 years. We performed DInSAR and PSInSAR analyses of these images using ASTER-GDEM ver.2. We detected three spots of subsidence along the Arima-Takatsuki fault (an ENE-WSW trending dextral fault, the eastern neighbor of the Rokko fault) after the Kobe earthquake, which continued up to 2010. Two of them started right after the Kobe earthquake, while the easternmost one was observed after 2000. However, we did not find them in the interferograms of ALOS-2/PALSAR-2 acquired during 2014-2016. Marginal uplift was recognized along the eastern part of the Rokko fault. PSInSAR results from ALOS/PALSAR also revealed slight uplift north of the Rokko Mountains, which rose by 20 cm coseismically. These observations suggest that the Rokko Mountains might have continued to uplift during the postseismic period. We found subsidence on the eastern flank of the Kongo Mountain, where the Kongo fault (a N-S trending reverse fault) exists. In the southern neighborhood of the Median Tectonic Line (an ENE-WSW trending dextral fault), uplift of > 5 mm/yr was found in the Envisat and ALOS/PALSAR images. This area is also shifted westward by 4 mm/yr. Since this area is located east of a seismically active area in the northwestern Wakayama prefecture, this deformation may generate the E-W compressive stress, dominant in the focal mechanisms of most earthquakes, in the epicentral area.

  4. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission- critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
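
    A toy sketch of the causal-directed-graph idea: once event detection logic fires on a node, an upstream traversal yields candidate root causes and a downstream traversal yields predicted impacts. The graph below is illustrative, not an actual RETS model.

```python
# Toy causal-directed-graph fault model: upstream traversal performs
# root cause analysis (RCA); downstream traversal performs impact
# analysis. Node names are illustrative assumptions.
causes = {                       # edge: cause -> list of effects
    "valve_stuck":   ["low_flow"],
    "sensor_drift":  ["low_flow_reading"],
    "low_flow":      ["low_chamber_pressure", "low_flow_reading"],
    "low_chamber_pressure": ["thrust_shortfall"],
}
parents = {}
for c, effects in causes.items():
    for e in effects:
        parents.setdefault(e, []).append(c)

def upstream(node):              # root cause analysis
    roots, stack = set(), [node]
    while stack:
        n = stack.pop()
        ps = parents.get(n, [])
        if not ps:
            roots.add(n)         # no parents: a candidate root cause
        stack.extend(ps)
    return roots - {node}

def downstream(node):            # impact analysis
    seen, stack = set(), list(causes.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(causes.get(n, []))
    return seen

print(upstream("low_flow_reading"))   # {'valve_stuck', 'sensor_drift'}
print(downstream("low_flow"))         # pressure loss and thrust shortfall
```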

  5. Groundwater management in Cusco region, Peru Present and future challenges

    NASA Astrophysics Data System (ADS)

    Guttman, Joseph; Berger, Diego; Pumayalli Saloma, Rene

    2013-04-01

    Agriculture in the rural areas of the Andes Mountains in the Cusco region of Peru is mainly rain-fed and basically concentrated in one crop season per year, which limits the farmers' development. To extend the agricultural season into the winter period (May to November), also known as the dry season, many farmers pump water from streams or from underground, which unfortunately leads to many of these sources running dry during the winter/dry season. In addition, some of these streams are polluted by the city's wastewater and by heavy metals released from the mines that are quite abundant in the Andes Mountains. The regional government, through its engineering organization "Per Plan Meriss Inka", is trying to increase the water quantity and quality available to the end users (farmers in the valleys) by promoting projects that include, among others, capturing springs that emerge from the high mountain ridges, diverting streams, and harvesting water in surface reservoirs. In the Ancahuasi area (northwest of Cusco) there are many springs that emerge along several geological faults, which act as a boundary between the permeable layers (mostly sandstone) on the upthrown side of the fault and impermeable layers on the downthrown side. The discharge of the springs varies with the size of each catchment area or aquifer structure. The spring water is collected in small pools and then conveyed by gravity through open channels to the farmers in the valleys. During the past 25 years, springs in some places have been captured by horizontal wells (galleries) excavated from the fault zone into the mountain a few tens of meters below the spring outlet. A gallery drains excess water from the spring storage and increases the overall discharge. The galleries are, however, a solution limited to individual places where the geology, hydrology and topography permit. The farmers use flood irrigation systems whose overall efficiency, according to World Bank documents, is about 35% (most of the water recharges the subsurface or is lost by evaporation). Increasing this efficiency by only 10-20%, together with bringing in additional water, would dramatically change the farmers' lives and incomes. A pre-feasibility study indicates that there are deeper subsurface groundwater systems flowing from the Andes Mountains downstream to the valleys, most probably separate from the spring systems. These deeper groundwater systems flow downstream via individual paths, in places where both sides of the faults contain permeable layers, and through several alluvial fans. Detailed research is planned for the next few years to identify these individual sites and to locate sites for drilling observation and production boreholes. Today, integrated water resources management at the local and regional level is lacking. The feasibility studies will therefore include recommendations to the regional government on how to implement such an integrated management program, together with building the institutional capacity of the regional governments.

  6. A divide and conquer approach to cope with uncertainty, human health risk, and decision making in contaminant hydrology

    NASA Astrophysics Data System (ADS)

    de Barros, Felipe P. J.; Bolster, Diogo; Sanchez-Vila, Xavier; Nowak, Wolfgang

    2011-05-01

    Assessing health risk in hydrological systems is an interdisciplinary field. It relies on the expertise in the fields of hydrology and public health and needs powerful translation concepts to provide decision support and policy making. Reliable health risk estimates need to account for the uncertainties and variabilities present in hydrological, physiological, and human behavioral parameters. Despite significant theoretical advancements in stochastic hydrology, there is still a dire need to further propagate these concepts to practical problems and to society in general. Following a recent line of work, we use fault trees to address the task of probabilistic risk analysis and to support related decision and management problems. Fault trees allow us to decompose the assessment of health risk into individual manageable modules, thus tackling a complex system by a structural divide and conquer approach. The complexity within each module can be chosen individually according to data availability, parsimony, relative importance, and stage of analysis. Three differences are highlighted in this paper when compared to previous works: (1) The fault tree proposed here accounts for the uncertainty in both hydrological and health components, (2) system failure within the fault tree is defined in terms of risk being above a threshold value, whereas previous studies that used fault trees used auxiliary events such as exceedance of critical concentration levels, and (3) we introduce a new form of stochastic fault tree that allows us to weaken the assumption of independent subsystems that is required by a classical fault tree approach. We illustrate our concept in a simple groundwater-related setting.
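
    A minimal sketch of the modular idea, with illustrative probabilities: basic events feed AND/OR gates, and the top event is defined directly as health risk exceeding a threshold. Independence between modules is assumed in the sketch, which is precisely the assumption the stochastic fault tree weakens.

```python
# Sketch of the modular fault-tree idea: subsystem modules report the
# probability that their contribution pushes health risk past the
# threshold, and AND/OR gates combine them. Independence is assumed
# here; all probabilities are illustrative.

def or_gate(*p):                  # top event occurs if any input occurs
    out = 1.0
    for pi in p:
        out *= (1.0 - pi)
    return 1.0 - out

def and_gate(*p):                 # top event occurs only if all inputs occur
    out = 1.0
    for pi in p:
        out *= pi
    return out

p_source   = 0.05    # P(contaminant release)
p_aquifer  = 0.30    # P(transport delivers a dose-relevant concentration)
p_exposure = 0.60    # P(behavioral/physiological exposure is sufficient)
p_no_barrier = or_gate(0.02, 0.01)   # either engineered barrier fails

# Top event: health risk above the chosen threshold.
p_risk = and_gate(p_source, p_aquifer, p_exposure, p_no_barrier)
print(f"P(risk > threshold) = {p_risk:.2e}")
```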

  7. Fault Management Practice: A Roadmap for Improvement

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  8. Multi-Agent Diagnosis and Control of an Air Revitalization System for Life Support in Space

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Kowing, Jeffrey; Nieten, Joseph; Graham, Jeffrey S.; Schreckenghost, Debra; Bonasso, Pete; Fleming, Land D.; MacMahon, Matt; Thronesbery, Carroll

    2000-01-01

    An architecture of interoperating agents has been developed to provide control and fault management for advanced life support systems in space. In this adjustable autonomy architecture, software agents coordinate with human agents and provide support in novel fault management situations. The architecture combines the Livingstone model-based mode identification and reconfiguration (MIR) system with the 3T architecture for autonomous flexible command and control. The MIR software agent performs model-based state identification and diagnosis, and identifies novel recovery configurations and the set of commands required for the recovery. The 3T procedural executive and the human operator use the diagnoses and recovery recommendations, and provide command sequencing. User interface extensions have been developed to support human monitoring of both 3T and MIR data and activities. This architecture has been demonstrated performing control and fault management for an oxygen production system for air revitalization in space. The software operates in a dynamic simulation testbed.
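
    A toy sketch in the spirit of model-based mode identification: enumerate component mode assignments, keep those consistent with the commands and observations, and prefer the diagnosis with the fewest failed components. The component models are illustrative, not the Livingstone system itself.

```python
# Toy model-based mode identification: find the mode assignment
# consistent with commands and observations, preferring the diagnosis
# with the fewest failed components. Component models are assumptions.
from itertools import product

# Each component's behavior per mode: input -> predicted output (0/1).
valve_modes = {"ok": lambda cmd: cmd, "stuck_closed": lambda cmd: 0}
pump_modes  = {"ok": lambda power: power, "failed": lambda power: 0}

def flow_model(valve_mode, pump_mode, cmd, power):
    # Flow requires both the valve to pass and the pump to deliver.
    return valve_modes[valve_mode](cmd) and pump_modes[pump_mode](power)

def diagnose(cmd, power, observed_flow):
    candidates = [
        (vm, pm)
        for vm, pm in product(valve_modes, pump_modes)
        if flow_model(vm, pm, cmd, power) == observed_flow
    ]
    # Prefer the diagnosis with the fewest failed components.
    return min(candidates, key=lambda c: sum(m != "ok" for m in c))

# Commanded open, powered, but no flow observed:
print(diagnose(cmd=1, power=1, observed_flow=0))
# -> ('ok', 'failed'): one of the minimal single-fault diagnoses
```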

  9. Time-frequency analysis based on ensemble local mean decomposition and fast kurtogram for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-03-01

    A time-frequency analysis method based on ensemble local mean decomposition (ELMD) and the fast kurtogram (FK) is proposed for rotating machinery fault diagnosis. Local mean decomposition (LMD), an adaptive non-stationary and nonlinear signal processing method, provides the capability to decompose a multicomponent modulated signal into a series of demodulated mono-components. However, the mode mixing that occurs is a serious drawback. To alleviate this, ELMD, a noise-assisted variant, was developed. Still, environmental noise present in the raw signal remains in the corresponding product function (PF) alongside the component of interest. FK performs well at impulse detection in the presence of strong environmental noise, but it is susceptible to non-Gaussian noise. The proposed method combines the merits of ELMD and FK to detect faults in rotating machinery. First, applying ELMD decomposes the raw signal into a set of PFs. Then, the PF that best characterizes the fault information is selected according to the kurtosis index. Finally, the selected PF is filtered by an optimal band-pass filter based on FK to extract the impulsive signal. Fault identification can then be deduced from the appearance of fault characteristic frequencies in the squared envelope spectrum of the filtered signal. The advantages of ELMD over LMD and EEMD are illustrated in simulation analyses. Furthermore, the efficiency of the proposed method in fault diagnosis for rotating machinery is demonstrated in gearbox and rolling bearing case studies.
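
    The selection and demodulation steps can be sketched as follows. A full ELMD is beyond a short example, so band-pass candidates stand in for the PFs; the candidate with maximum kurtosis is selected and its squared envelope spectrum is inspected for the fault characteristic frequency. The signal and fault frequency are simulated assumptions.

```python
# Sketch of kurtosis-based component selection plus squared-envelope
# demodulation. Band-pass candidates stand in for ELMD's product
# functions; all signal parameters are simulated assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import kurtosis

fs, T, f_fault = 20_000, 1.0, 107.0          # sample rate, duration, fault rate
t = np.arange(int(fs * T)) / fs
# Simulated signal: a 3 kHz resonance gated at the fault rate, plus noise.
impulses = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * f_fault * t) > 0.99)
x = impulses + 0.5 * np.random.default_rng(1).normal(size=t.size)

bands = [(500, 1500), (2500, 3500), (5000, 6000)]    # candidate "PFs"
def bandpass(sig, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

pfs = [bandpass(x, lo, hi) for lo, hi in bands]
best = max(pfs, key=lambda p: kurtosis(p))           # kurtosis-index selection

env2 = np.abs(hilbert(best)) ** 2                    # squared envelope
spec = np.abs(np.fft.rfft(env2 - env2.mean()))
freqs = np.fft.rfftfreq(env2.size, 1 / fs)
peak = freqs[np.argmax(spec[freqs < 500])]
print(f"dominant envelope frequency ~ {peak:.1f} Hz")  # near f_fault
```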

  10. Construction of testing facilities and verifying tests of a 22.9 kV/630 A class superconducting fault current limiter

    NASA Astrophysics Data System (ADS)

    Yim, S.-W.; Yu, S.-D.; Kim, H.-R.; Kim, M.-J.; Park, C.-R.; Yang, S.-E.; Kim, W.-S.; Hyun, O.-B.; Sim, J.; Park, K.-B.; Oh, I.-S.

    2010-11-01

    We have constructed, and completed the preparation for, a long-term operation test of a superconducting fault current limiter (SFCL) in a Korea Electric Power Corporation (KEPCO) test grid. The three-phase SFCL, rated at 22.9 kV/630 A, has been connected to the 22.9 kV test grid equipped with reclosers and other protection devices at the Gochang Power Testing Center of KEPCO. The main goals of the test are the verification of SFCL performance and protection coordination studies. A line-commutation type SFCL was fabricated and installed for this project, and the superconducting components were cooled by a cryo-cooler to 77 K in sub-cooled liquid nitrogen pressurized to 3 bar with helium gas. The verification tests include unmanned long-term operation with and without loads, and fault tests. Since the test site is 170 km away from the laboratory, we will adopt unmanned operation with real-time remote monitoring and control over high-speed internet. For the fault tests, we will apply fault currents of up to around 8 kA rms to the SFCL using an artificial fault generator. The fault tests may allow us not only to confirm the current limiting capability of the SFCL, but also to adjust the SFCL-recloser coordination, for example by resetting over-current relay parameters. This paper describes the construction of the testing facilities and discusses the plans for the verification tests.

  11. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1992-01-01

    Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.

  12. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    NASA Astrophysics Data System (ADS)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation has gained importance as heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

  13. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults in PV panels. The PV array is divided into a 16x16 grid of numbered blocks. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used to calculate a fault weight factor. A fault threshold is defined to determine the fault, and shading is taken into account to reduce the probability of misjudgment. A fault diagnosis system is designed and implemented with LabVIEW, providing real-time data display, online checking, statistics, real-time prediction, and fault diagnosis. The algorithm was verified using data from PV plants. The results show that the fault diagnoses are accurate and the system works well, confirming its validity and practicality. The developed system will benefit the maintenance and management of large-scale PV arrays.
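
    A minimal sketch of the block-wise screening just described (the abstract does not give the fault-weight formula, so the normalized deviation and the threshold value below are assumptions for illustration):

      import numpy as np

      def diagnose_blocks(block_currents, threshold=0.3):
          """block_currents: array of shape (256, n_samples), one row per
          numbered block of the 16x16 PV array."""
          block_means = block_currents.mean(axis=1)    # mean current of each block
          array_mean = block_means.mean()              # reference for the whole array
          weight = np.abs(block_means - array_mean) / array_mean  # assumed fault weight factor
          return weight, np.flatnonzero(weight > threshold)       # factors, faulty block indices

    One way to emulate the paper's shading check would be to discount blocks whose neighbors show the same simultaneous deviation, since shade tends to affect adjacent blocks together.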

  14. Mission Adaptive Uas Capabilities for Earth Science and Resource Assessment

    NASA Astrophysics Data System (ADS)

    Dunagan, S.; Fladeland, M.; Ippolito, C.; Knudson, M.; Young, Z.

    2015-04-01

    Unmanned aircraft systems (UAS) are important assets for accessing high-risk airspace and incorporate technologies for sensor coordination, onboard processing, telecommunication, unconventional flight control, and ground-based monitoring and optimization. These capabilities permit adaptive mission management in the face of complex requirements and chaotic external influences. NASA Ames Research Center has led a number of Earth science remote sensing missions directed at the assessment of natural resources, and here we describe two resource mapping problems whose mission characteristics require a mission adaptive capability extensible to other resource assessment challenges. One example involves the requirement for careful control over solar angle geometry for passive reflectance measurements. This constraint exists when collecting imaging spectroscopy data over vegetation for time series analysis, or for the coastal ocean, where solar angle combines with sea state to produce surface glint that can obscure the signal. Furthermore, the primary flight-control imperative to minimize tracking error must be balanced against the requirement to minimize aircraft motion artifacts in the spatial measurement distribution. A second example involves mapping of natural resources in the Earth's crust using precision magnetometry. In this case the vehicle flight path must be oriented to optimize magnetic flux gradients over a spatial domain having continually emerging features, while optimizing the efficiency of the spatial mapping task. These requirements were highlighted in recent Earth Science missions including the OCEANIA mission, directed at improving the capability for spectral and radiometric reflectance measurements in the coastal ocean, and the Surprise Valley Mission, directed at mapping sub-surface mineral composition and faults using high-sensitivity magnetometry. This paper reports the development of specific aircraft control approaches to incorporate the unusual and demanding requirements to manage solar angle, aircraft attitude and flight path orientation, and efficient (directly geo-rectified) surface and sub-surface mapping, including the near-real-time optimization of these sometimes competing requirements.

  15. A shallow fault-zone structure illuminated by trapped waves in the Karadere-Duzce branch of the North Anatolian Fault, western Turkey

    USGS Publications Warehouse

    Ben-Zion, Y.; Peng, Z.; Okaya, D.; Seeber, L.; Armbruster, J.G.; Ozer, N.; Michael, A.J.; Baris, S.; Aktar, M.

    2003-01-01

    We discuss the subsurface structure of the Karadere-Duzce branch of the North Anatolian Fault based on analysis of a large seismic data set recorded by a local PASSCAL network in the 6 months following the Mw = 7.4 1999 Izmit earthquake. Seismograms observed at stations located in the immediate vicinity of the rupture zone show motion amplification and long-period oscillations in both P- and S-wave trains that do not exist in nearby off-fault stations. Examination of thousands of waveforms reveals that these characteristics are commonly generated by events that are well outside the fault zone. The anomalous features in fault-zone seismograms produced by events not necessarily in the fault may be referred to generally as fault-zone-related site effects. The oscillatory shear wave trains after the direct S arrival in these seismograms are analysed as trapped waves propagating in a low-velocity fault-zone layer. The time difference between the S arrival and the trapped wave group does not grow systematically with increasing source-receiver separation along the fault. These observations imply that the trapping of seismic energy in the Karadere-Duzce rupture zone is generated by a shallow fault-zone layer. Traveltime analysis and synthetic waveform modelling indicate that the depth of the trapping structure is approximately 3-4 km. The synthetic waveform modelling indicates further that the shallow trapping structure has effective waveguide properties consisting of a thickness of the order of 100 m, a velocity decrease relative to the surrounding rock of approximately 50 per cent, and an S-wave quality factor of 10-15. The results are supported by large 2-D and 3-D parameter space studies and are compatible with recent analyses of trapped waves in a number of other faults and rupture zones. The inferred shallow trapping structure is likely to be a common structural element of fault zones and may correspond to the top part of a flower-type structure. The motion amplification associated with fault-zone-related site effects increases the seismic shaking hazard near fault-zone structures. The effect may be significant since the volume of sources capable of generating motion amplification in shallow trapping structures is large.

  16. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, e.g., by enhancing, adding, or estimating it, to improve the signal-to-noise ratio (SNR) and extract fault signatures. Among these, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for superior performance in detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks; it is improved by two noise estimation techniques for different SNRs together with a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for low-SNR cases and is especially effective for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for high-SNR cases and is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization. Meanwhile, the appropriate singular order for the local reconfiguration to estimate the noise is determined by the inflection point of the increasing trend of normalized singular entropy. Furthermore, a noise estimation strategy, i.e., how to select between the two estimation techniques and how to handle the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations that demonstrate its overall performance and especially confirm its noise estimation capability. Finally, the method is applied to detect a local wear fault in a dual-axis stabilized platform and a gear crack in an operating electric locomotive, verifying its effectiveness and feasibility.

  17. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industry. A sequential computational code is developed to capture the multiphysics interaction behavior by linking a flow simulation code, TOUGH2, and a geomechanics modeling code, PyLith. The numerical formulation of each code is discussed to demonstrate their modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slips. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault; this issue is resolved in the final chapter, which simulates the dynamic rupture of a single earthquake event. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
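
    As a toy illustration of the sequential coupling (not the dissertation's actual TOUGH2/PyLith interfaces), the sketch below alternates an explicit 1-D pressure-diffusion step, standing in for the flow solve, with a Coulomb strength check on a fault node, standing in for the geomechanics solve; all parameter values and the simple friction law are illustrative assumptions:

      import numpy as np

      def injection_to_slip(n=200, dx=10.0, dt=1e3, steps=5000, D=1e-2,
                            p_inj=20e6, fault_idx=20, sigma_n=50e6,
                            tau=25e6, mu_f=0.6, alpha=1.0):
          """Return the first time step at which the fault node fails."""
          p = np.zeros(n)                              # pore pressure field (Pa)
          for step in range(steps):
              p[0] = p_inj                             # constant-pressure injection boundary
              p[1:-1] += D * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])   # "flow solve"
              strength = mu_f * (sigma_n - alpha * p[fault_idx])  # Coulomb strength ("geomechanics")
              if tau > strength:                       # shear stress exceeds strength: slip nucleates
                  return step
          return None                                  # no slip within the simulated window

    Consistent with the sensitivity analysis above, increasing fault_idx (the injector-to-fault distance) delays the returned nucleation step.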

  18. Using graphics and expert system technologies to support satellite monitoring at the NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Hughes, Peter M.; Shirah, Gregory W.; Luczak, Edward C.

    1994-01-01

    At NASA's Goddard Space Flight Center, fault-isolation expert systems have been developed to support data monitoring and fault detection tasks in satellite control centers. Based on the lessons learned during these efforts in expert system automation, a new domain-specific expert system development tool named the Generic Spacecraft Analysts Assistant (GenSAA) was developed to facilitate the rapid development and reuse of real-time expert systems to serve as fault-isolation assistants for spacecraft analysts. This paper describes GenSAA's capabilities and how it supports monitoring functions of current and future NASA missions for a variety of satellite monitoring applications ranging from subsystem health and safety to spacecraft attitude. Finally, this paper addresses efforts to generalize GenSAA's data interface for more widespread usage throughout the space and commercial industries.

  19. Analysis of Space Shuttle Ground Support System Fault Detection, Isolation, and Recovery Processes and Resources

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Gerald-Yamasaki, Michael; Trent, Robert P.

    2009-01-01

    As part of the FDIR (Fault Detection, Isolation, and Recovery) Project for the Constellation Program, a task called the Legacy Benchmarking Task was designed to document as accurately as possible the FDIR processes and resources used by the Space Shuttle ground support equipment (GSE) during the Shuttle flight program. These results served as a comparison with results obtained from the new FDIR capability. The task team assessed Shuttle and EELV (Evolved Expendable Launch Vehicle) historical data for GSE-related launch delays to identify expected benefits and impact. This analysis included a study of complex fault isolation situations that required a lengthy troubleshooting process. Specifically, four elements of that system were considered: LH2 (liquid hydrogen), LO2 (liquid oxygen), hydraulic test, and ground special power.

  20. Software Testbed for Developing and Evaluating Integrated Autonomous Subsystems

    NASA Technical Reports Server (NTRS)

    Ong, James; Remolina, Emilio; Prompt, Axel; Robinson, Peter; Sweet, Adam; Nishikawa, David

    2015-01-01

    To implement fault tolerant autonomy in future space systems, it will be necessary to integrate planning, adaptive control, and state estimation subsystems. However, integrating these subsystems is difficult, time-consuming, and error-prone. This paper describes Intelliface/ADAPT, a software testbed that helps researchers develop and test alternative strategies for integrating planning, execution, and diagnosis subsystems more quickly and easily. The testbed's architecture, graphical data displays, and implementations of the integrated subsystems support easy plug and play of alternate components to support research and development in fault-tolerant control of autonomous vehicles and operations support systems. Intelliface/ADAPT controls NASA's Advanced Diagnostics and Prognostics Testbed (ADAPT), which comprises batteries, electrical loads (fans, pumps, and lights), relays, circuit breakers, inverters, and sensors. During plan execution, an experimenter can inject faults into the ADAPT testbed by tripping circuit breakers, changing fan speed settings, and closing valves to restrict fluid flow. The diagnostic subsystem, based on NASA's Hybrid Diagnosis Engine (HyDE), detects and isolates these faults to determine the new state of the plant, ADAPT. Intelliface/ADAPT then updates its model of the ADAPT system's resources and determines whether the current plan can be executed using the reduced resources. If not, the planning subsystem generates a new plan that reschedules tasks, reconfigures ADAPT, and reassigns the use of ADAPT resources as needed to work around the fault. The resource model, planning domain model, and planning goals are expressed using NASA's Action Notation Modeling Language (ANML). Parts of the ANML model are generated automatically, and other parts are constructed by hand using the Planning Model Integrated Development Environment, a visual Eclipse-based IDE that accelerates ANML model development. Because native ANML planners are currently under development and not yet sufficiently capable, the ANML model is translated into the New Domain Definition Language (NDDL) and sent to NASA's EUROPA planning system for plan generation. The adaptive controller executes the new plan, using augmented, hierarchical finite state machines to select and sequence actions based on the state of the ADAPT system. Real-time sensor data, commands, and plans are displayed in information-dense arrays of timelines and graphs that zoom and scroll in unison. A dynamic schematic display uses color to show the real-time fault state and utilization of the system components and resources. An execution manager coordinates the activities of the other subsystems. The subsystems are integrated using the Internet Communications Engine (ICE), an object-oriented toolkit for building distributed applications.
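
    A highly simplified sketch of the diagnose-and-replan cycle described above; the task and resource representations are invented for illustration, do not reflect the actual HyDE, ANML, or EUROPA interfaces, and replan is assumed to eventually return a feasible plan:

      def execution_cycle(plan, resources, diagnose, replan):
          """plan: list of tasks, each {'name': str, 'needs': set of resources};
          resources: set; diagnose() returns the set of newly failed components."""
          executed = []
          for task in plan:
              resources -= diagnose()                  # drop components flagged as failed
              if not task["needs"] <= resources:       # task infeasible with reduced resources
                  new_plan = replan(resources)         # work around the fault
                  return executed + execution_cycle(new_plan, resources, diagnose, replan)
              executed.append(task["name"])            # task runs with its resources intact
          return executed

      # e.g. a pump task survives an unrelated fan fault and still executes:
      faults = iter([{"fan_1"}])
      plan = [{"name": "run_pump", "needs": {"pump", "bus_A"}}]
      done = execution_cycle(plan, {"pump", "bus_A", "fan_1"},
                             diagnose=lambda: next(faults),
                             replan=lambda resources: [])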

  1. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the flocking behavior of birds and fish. In this paper we introduce and apply this method to the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
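
    A compact PSO of the kind the paper applies, shown here on a stand-in quadratic objective; in the gravity inverse problem the objective would be the misfit between the observed anomaly and the anomaly modeled from candidate fault parameters. The inertia and acceleration coefficients are common textbook choices, not values from the paper:

      import numpy as np

      def pso(objective, bounds, n_particles=30, iters=200,
              w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T   # per-dimension search limits
          x = rng.uniform(lo, hi, (n_particles, lo.size))  # particle positions
          v = np.zeros_like(x)                         # particle velocities
          pbest = x.copy()                             # each particle's best position
          pbest_f = np.apply_along_axis(objective, 1, x)
          gbest = pbest[pbest_f.argmin()].copy()       # swarm-wide best position
          for _ in range(iters):
              r1, r2 = rng.random((2,) + x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.apply_along_axis(objective, 1, x)
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest, pbest_f.min()

      # Stand-in objective; the paper's is the observed-vs-modeled anomaly misfit.
      best, err = pso(lambda p: (p[0] - 3.0)**2 + (p[1] + 1.0)**2,
                      bounds=[(-10.0, 10.0), (-10.0, 10.0)])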

  2. Autonomous Dynamically Self-Organizing and Self-Healing Distributed Hardware Architecture - the eDNA Concept

    NASA Technical Reports Server (NTRS)

    Boesen, Michael Reibel; Madsen, Jan; Keymeulen, Didier

    2011-01-01

    This paper presents the current state of the autonomous dynamically self-organizing and self-healing electronic DNA (eDNA) hardware architecture (patent pending). In its current prototype state, the eDNA architecture is capable of responding to multiple injected faults by autonomously reconfiguring itself to accommodate the fault and keep the application running. This paper will also disclose advanced features currently available in the simulation model only. These features are future work and will soon be implemented in hardware. Finally we will describe step-by-step how an application is implemented on the eDNA architecture.

  3. A Decentralized Adaptive Approach to Fault Tolerant Flight Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor

    2000-01-01

    This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple-channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions under which the scheme is applicable are stated in the paper.

  4. Vehicle-Level Reasoning Systems: Integrating System-Wide Data to Estimate the Instantaneous Health State

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Mylaraswmay, Dinkar; Mah, Robert W.; Cooper, Eric G.

    2011-01-01

    At the aircraft level, a Vehicle-Level Reasoning System (VLRS) can be developed to provide aircraft with at least two significant capabilities: improved aircraft safety due to enhanced monitoring and reasoning about the aircraft's health state, and potential cost savings by enabling Condition Based Maintenance (CBM). Along with the benefits of CBM, an important challenge facing aviation safety today is safeguarding against system and component failures and malfunctions. Faults can arise in one or more aircraft subsystems; their effects in one subsystem may propagate to other subsystems, and faults may interact.

  5. Studying onshore-offshore fault linkages and landslides in Icy Bay and Taan Fjord to assess geohazards in Southeast Alaska

    NASA Astrophysics Data System (ADS)

    McCall, N.; Walton, M. A. L.; Gulick, S. P. S.; Haeussler, P. J.; Reece, R.; Saustrup, S.

    2016-12-01

    In southeast Alaska, the plate boundary where the Yakutat microplate collides with North America has produced large historical earthquakes (i.e., the Mw 8+ 1899 sequence). Despite the seismic potential, the possible source fault systems for these earthquakes have not been imaged with modern methods in Icy Bay. The offshore Pamplona Zone and its eastward onshore extension, the Malaspina Fault, may have played a role in the September 1899 earthquakes. Onshore and offshore mapping indicates that these structures likely connect offshore in Icy Bay. In August 2016 we collected high-resolution (300-1200 Hz) seismic reflection and multibeam bathymetry data to search for evidence of such faults beneath Icy Bay and Taan Fjord. If the Malaspina Fault is found to link with the Pamplona Zone, a rupture could trigger a tsunami impacting populated regions in southeast Alaska. More recently, on October 17, 2015, nearby Taan Fjord experienced one of the largest non-volcanic landslides recorded in North America. Approximately 200 million metric tons of rock spilled into Taan Fjord, creating a tsunami with waves reaching 150 m onshore. Using the new data, we can image landslide and tsunami deposits in high resolution. These data provide new constraints on onshore-offshore fault systems, giving us new insights into the earthquake and tsunami hazard in southeast Alaska.

  6. A possible transoceanic tsunami directed toward the U.S. west coast from the Semidi segment, Alaska convergent margin

    USGS Publications Warehouse

    von Huene, Roland E.; Miller, John J.; Dartnell, Peter

    2016-01-01

    The Semidi segment of the Alaska convergent margin appears capable of generating a giant tsunami like the one produced along the nearby Unimak segment in 1946. Reprocessed legacy seismic reflection data and a compilation of multibeam bathymetric surveys reveal structures that could generate such a tsunami. A 200 km long ridge or escarpment with crests >1 km high is the surface expression of an active out-of-sequence fault zone, recently referred to as a splay fault. Such faults are potentially tsunamigenic. This type of fault zone separates the relatively rigid rock of the margin framework from the anelastic accreted sediment prism. Seafloor relief of the ridge exceeds that of similar-age accretionary prism ridges, indicating preferential slip along the splay fault zone. The greater slip may derive from Quaternary subduction of the Patton Murray hot spot ridge that extends 200 km toward the east across the north Pacific. Estimates of tsunami repeat times from paleotsunami studies indicate that the Semidi segment could be near the end of its current inter-seismic cycle. GPS records from Chirikof Island at the shelf edge indicate 90% locking of plate interface faults. An earthquake in the shallow Semidi subduction zone could generate a tsunami that would inundate the U.S. west coast more extensively than those of the 1946 and 1964 earthquakes, because the Semidi continental slope azimuth directs a tsunami southeastward.

  7. Software for determining the true displacement of faults

    NASA Astrophysics Data System (ADS)

    Nieto-Fuentes, R.; Nieto-Samaniego, Á. F.; Xu, S.-S.; Alaniz-Álvarez, S. A.

    2014-03-01

    One of the most important parameters of faults is the true (or net) displacement, which is measured by restoring two originally adjacent points, called "piercing points", to their original positions. This measurement is rarely possible because piercing points are seldom observed in natural outcrops. Much more common is the measurement of the apparent displacement of a marker. Methods to calculate the true displacement of faults using descriptive geometry, trigonometry, or vector algebra are common in the literature, and most of them solve a specific situation out of a large number of possible combinations of the fault parameters. Despite their importance and the relatively simple methodology, true displacements are not routinely calculated because doing so is tedious. We believe that the solution is to develop software capable of performing this work. In a previous publication, our research group proposed a method to calculate the true displacement of faults that solves most combinations of fault parameters using simple trigonometric equations. The purpose of this contribution is to present a computer program for calculating the true displacement of faults. The input data are the dip of the fault; the pitch angles of the markers, slickenlines, and observation lines; and the marker separation. To avoid the common difficulties involved in switching between operating systems, the software is developed in the Java programming language. The computer program can be used as a tool in education and will also be useful for calculating true fault displacement in geological and engineering work. The application resolves cases with a known net-slip direction, which is commonly assumed to be parallel to the slickenlines. This assumption is not always valid and must be used with caution, because slickenlines are formed during one step of the incremental displacement on the fault surface, whereas the net slip is related to the finite slip.
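
    The published program is not reproduced here, but its core geometric ingredient can be sketched: each input line (marker trace, slickenline, or observation line) lies in the fault plane and is fixed by the fault dip and the line's pitch, which convert to a 3-D unit vector; comparing such vectors is what allows apparent separations to be resolved into true displacement. The coordinate convention below is an assumption for illustration:

      import numpy as np

      def line_in_fault_plane(dip_deg, pitch_deg):
          """Unit vector of a line lying in a fault plane, given the fault dip
          and the line's pitch (measured in the plane from the strike
          direction); coordinates are (x along strike, y horizontal normal
          to strike, z up)."""
          dip, pitch = np.radians([dip_deg, pitch_deg])
          strike_hat = np.array([1.0, 0.0, 0.0])                # horizontal strike direction
          dip_hat = np.array([0.0, np.cos(dip), -np.sin(dip)])  # down-dip direction in the plane
          return np.cos(pitch) * strike_hat + np.sin(pitch) * dip_hat

      # e.g. a slickenline with 45-degree pitch on a 60-degree-dipping fault:
      slip_dir = line_in_fault_plane(60.0, 45.0)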

  8. Controls on the Seafloor Exposure of Detachment Fault Surfaces

    NASA Astrophysics Data System (ADS)

    Olive, J. A. L.; Parnell-Turner, R. E.; Escartin, J.; Smith, D. K.; Petersen, S.

    2017-12-01

    Morphological and seismological evidence suggests that asymmetric accretion involving oceanic detachment faulting takes place along 40% of the Northern Mid-Atlantic Ridge. However, seafloor exposures of corrugated slip surfaces (a telltale sign of this kind of faulting) remain scarce and spatially limited according to multibeam bathymetric surveys. This raises the question of whether geomorphic processes can hinder the exposure of pristine fault surfaces during detachment growth. We address this problem by analyzing ≤2-m resolution bathymetry data from four areas where corrugated surfaces emerge from the seafloor (13°20'N, 16°25'N, 16°36'N, and TAG). We identify two key processes capable of degrading or masking a corrugated large-offset fault surface. The first is gravitational mass wasting of steep (>25°) slopes, which is widespread in the breakaway region of most normal faults. The second is blanketing of the shallow-dipping termination area by a thin apron of hanging wall-derived debris. We model this process using critical taper theory, and infer low effective friction coefficients (≈0.15) on the emerging portion of detachment faults. A corollary to this result is that faults emerging from the seafloor at an angle <10° are more likely to blanket themselves under an apron of hanging wall debris. Optimal exposure of detachment surfaces therefore occurs when the fault emerges at slopes between 10° and 25°. We generalize these findings into a simple model for the progressive exhumation and flexural rotation of detachment footwalls, which accounts for the continued action of seafloor geomorphic processes. Our model suggests that many moderate-offset 'blanketed' detachments may exist along slow mid-ocean ridges, but their corrugated surfaces are unlikely to be detected in shipboard multibeam bathymetry (e.g., TAG). Furthermore, many 'irregular massifs' may correspond to the degraded footwalls of detachment faults.

  9. Automatic Channel Fault Detection on a Small Animal APD-Based Digital PET Scanner

    NASA Astrophysics Data System (ADS)

    Charest, Jonathan; Beaudoin, Jean-François; Cadorette, Jules; Lecomte, Roger; Brunet, Charles-Antoine; Fontaine, Réjean

    2014-10-01

    Avalanche photodiode (APD) based positron emission tomography (PET) scanners show enhanced imaging capabilities in terms of spatial resolution and contrast due to the one-to-one coupling and size of individual crystal-APD detectors. However, to ensure maximal performance, these PET scanners require proper calibration by qualified scanner operators, which can become a cumbersome task because of the huge number of channels involved. An intelligent system (IS) is intended to alleviate this workload by enabling a diagnosis of the observational errors of the scanner. The IS can be broken down into four hierarchical blocks: parameter extraction, channel fault detection, prioritization, and diagnosis. One of the main activities of the IS is analyzing available channel data, such as normalization coincidence counts and single count rates, crystal identification classification data, energy histograms, and APD bias and noise thresholds, to establish the channel health status that is used to detect channel faults. This paper focuses on the first two blocks of the IS: parameter extraction and channel fault detection. The purpose of the parameter extraction block is to process available data on individual channels into parameters that are subsequently used by the fault detection block to generate the channel health status. To ensure extensibility, the channel fault detection block is divided into indicators representing different aspects of PET scanner performance: sensitivity, timing, crystal identification, and energy. Experiments on an 8 cm axial length LabPET scanner at the Sherbrooke Molecular Imaging Center demonstrated an erroneous channel fault detection rate of 10% (with a 95% confidence interval (CI) of [9, 11]), which is considered tolerable. Globally, the IS achieves a channel fault detection efficiency of 96% (CI: [95, 97]), which proves that many faults can be detected automatically. Increased fault detection efficiency would be advantageous, but the achieved results already benefit scanner operators in their maintenance tasks.
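
    As a hedged sketch of how the indicator outputs might feed the channel health status (the scores, weights, and fault threshold below are illustrative assumptions; the paper does not publish its exact aggregation rule):

      def channel_health(indicators, weights=None, fault_threshold=0.5):
          """indicators: dict mapping indicator name (sensitivity, timing,
          crystal identification, energy) to a score in [0, 1], 1 = healthy."""
          weights = weights or {name: 1.0 for name in indicators}
          total = sum(weights.values())
          health = sum(weights[k] * indicators[k] for k in indicators) / total
          return health, health < fault_threshold      # (health status, fault flag)

      status, faulty = channel_health(
          {"sensitivity": 0.9, "timing": 0.2, "crystal_id": 0.8, "energy": 0.7})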

  10. Vehicle Integrated Prognostic Reasoner (VIPR) 2010 Annual Final Report

    NASA Technical Reports Server (NTRS)

    Hadden, George D.; Mylaraswamy, Dinkar; Schimmel, Craig; Biswas, Gautam; Koutsoukos, Xenofon; Mack, Daniel

    2011-01-01

    Honeywell's Central Maintenance Computer Function (CMCF) and Aircraft Condition Monitoring Function (ACMF) represent the state of the art in integrated vehicle health management (IVHM). Underlying these technologies is a fault propagation modeling system that provides nose-to-tail coverage and root cause diagnostics. The Vehicle Integrated Prognostic Reasoner (VIPR) extends this technology to interpret evidence generated by advanced diagnostic and prognostic monitors provided by component suppliers to detect, isolate, and predict adverse events that affect flight safety. This report describes year one work that included defining the architecture and communication protocols and establishing the user requirements for such a system. Based on these and a set of ConOps scenarios, we designed and implemented a demonstration of communication pathways and the associated three-tiered health management architecture. A series of scripted scenarios showed how VIPR would detect adverse events before they escalate into safety incidents through a combination of advanced reasoning and additional aircraft data collected from an aircraft condition monitoring system. Demonstrating VIPR capability for cases recorded in the ASIAS database and cross-linking them with historical aircraft data is planned for year two.

  11. Flexible IQ-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook

    This paper proposes a flexible reactive current-to-voltage (IQ-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible IQ-V schemes implemented in the rotor-side and grid-side converters. The IQ-V characteristic, which consists of the gain and width of a linear band and the IQ capability, varies with time depending on the IQ capability of the converters and a voltage dip at the point of interconnection (POI). To increase the IQ capability during a fault, the active current is reduced in proportion to the voltage dip. If the IQ capability and/or the POI voltage dip are large, the IQ-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid IQ reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.

  12. Flexible IQ-V Scheme of a DFIG for Rapid Voltage Regulation of a Wind Power Plant

    DOE PAGES

    Kim, Jinho; Muljadi, Eduard; Park, Jung -Wook; ...

    2017-04-28

    This paper proposes a flexible reactive current-to-voltage (IQ-V) scheme of a doubly-fed induction generator (DFIG) for the rapid voltage regulation of a wind power plant (WPP). In the proposed scheme, the WPP controller dispatches different voltage set points to the DFIGs depending on their rotor voltage margins. The DFIGs inject different reactive power with the flexible IQ-V schemes implemented in the rotor-side and grid-side converters. The IQ-V characteristic, which consists of the gain and width of a linear band and the IQ capability, varies with time depending on the IQ capability of the converters and a voltage dip at the point of interconnection (POI). To increase the IQ capability during a fault, the active current is reduced in proportion to the voltage dip. If the IQ capability and/or the POI voltage dip are large, the IQ-V gain is set to be high, thereby providing rapid voltage regulation. To avoid an overvoltage after the fault clearance, a rapid IQ reduction scheme is implemented in the WPP and DFIG controllers. The performance of the proposed flexible scheme was verified under scenarios with various disturbances. In conclusion, the proposed scheme can help increase wind power penetration without jeopardizing voltage stability.
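
    A per-unit sketch of the current-limiting logic described in both records above: the active current is reduced in proportion to the voltage dip, which frees converter headroom for reactive current, and the reactive current reference follows a linear IQ-V band clipped at that capability. The gain and limits below are illustrative, not the paper's scheduled values:

      import numpy as np

      def iq_reference(v_poi, v_ref=1.0, i_max=1.2, id_nominal=1.0, gain=5.0):
          """Reactive current reference (per unit) for a voltage dip at the POI."""
          dip = max(v_ref - v_poi, 0.0)
          i_d = id_nominal * (1.0 - min(dip, 1.0))       # active current cut in proportion to the dip
          iq_cap = np.sqrt(max(i_max**2 - i_d**2, 0.0))  # converter headroom freed for IQ
          return min(gain * dip, iq_cap)                 # linear IQ-V band, clipped at capability

      iq = iq_reference(0.7)   # 30% dip: i_d = 0.7, headroom ~0.975, so IQ ref ~0.975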

  13. The Integrated Safety-Critical Advanced Avionics Communication and Control (ISAACC) System Concept: Infrastructure for ISHM

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Briscoe, Jeri M.

    2005-01-01

    Integrated System Health Management (ISHM) architectures for spacecraft will include hard real-time, critical subsystems and soft real-time monitoring subsystems. Interaction between these subsystems will be necessary and an architecture supporting multiple criticality levels will be required. Demonstration hardware for the Integrated Safety-Critical Advanced Avionics Communication & Control (ISAACC) system has been developed at NASA Marshall Space Flight Center. It is a modular system using a commercially available time-triggered protocol, TTP/C, that supports hard real-time distributed control systems independent of the data transmission medium. The protocol is implemented in hardware and provides guaranteed low-latency messaging with inherent fault-tolerance and fault-containment. Interoperability between modules and systems of modules using the TTP/C is guaranteed through definition of messages and the precise message schedule implemented by the master-less Time Division Multiple Access (TDMA) communications protocol. "Plug-and-play" capability for sensors and actuators provides automatically configurable modules supporting sensor recalibration and control algorithm re-tuning without software modification. Modular components of controlled physical system(s) critical to control algorithm tuning, such as pumps or valve components in an engine, can be replaced or upgraded as "plug and play" components without modification to the ISAACC module hardware or software. ISAACC modules can communicate with other vehicle subsystems through time-triggered protocols or other communications protocols implemented over Ethernet, MIL-STD-1553 and RS-485/422. Other communication bus physical layers and protocols can be included as required. In this way, the ISAACC modules can be part of a system-of-systems in a vehicle with multi-tier subsystems of varying criticality. The goal of the ISAACC architecture development is control and monitoring of safety-critical systems of a manned spacecraft. These systems include spacecraft navigation and attitude control, propulsion, automated docking, vehicle health management and life support. ISAACC can integrate local critical subsystem health management with subsystems performing long term health monitoring. The ISAACC system and its relationship to ISHM will be presented.

  14. Software Tools to Support the Assessment of System Health

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2013-01-01

    This presentation provides an overview of three software tools that were developed by the NASA Glenn Research Center to support the assessment of system health: the Propulsion Diagnostic Method Evaluation Strategy (ProDIMES), the Systematic Sensor Selection Strategy (S4), and the Extended Testability Analysis (ETA) tool. Originally developed to support specific NASA projects in aeronautics and space, these software tools are currently available to U.S. citizens through the NASA Glenn Software Catalog. The ProDIMES software tool was developed to support a uniform comparison of propulsion gas path diagnostic methods. Methods published in the open literature are typically applied to dissimilar platforms with different levels of complexity. They often address different diagnostic problems and use inconsistent metrics for evaluating performance. As a result, it is difficult to perform a one-to-one comparison of the various diagnostic methods. ProDIMES solves this problem by serving as a theme problem to aid in propulsion gas path diagnostic technology development and evaluation. The overall goal is to provide a tool that will serve as an industry standard and will truly facilitate the development and evaluation of significant Engine Health Management (EHM) capabilities. ProDIMES has been developed under a collaborative project of The Technical Cooperation Program (TTCP) based on feedback provided by individuals within the aircraft engine health management community. The S4 software tool provides a framework that supports the optimal selection of sensors for health management assessments. S4 is structured to accommodate user-defined applications, diagnostic systems, search techniques, and system requirements/constraints. S4 identifies one or more sensor suites that maximize diagnostic performance while meeting other user-defined system requirements. It provides a systematic approach for evaluating combinations of sensors to determine the set or sets of sensors that optimally meet the performance goals and the constraints. It identifies optimal sensor suite solutions by utilizing a merit (i.e., cost) function with one of several available optimization approaches. As part of its analysis, S4 can expose fault conditions that are difficult to diagnose due to an incomplete diagnostic philosophy and/or a lack of sensors. S4 was originally developed and applied to liquid rocket engines. It was subsequently used to study the optimized selection of sensors for a simulation-based aircraft engine diagnostic system. The ETA Tool is a software-based analysis tool that augments the testability analysis and reporting capabilities of a commercial-off-the-shelf (COTS) package. An initial diagnostic assessment is performed by the COTS software using a user-developed, qualitative, directed-graph model of the system being analyzed. The ETA Tool accesses system design information captured within the model and the associated testability analysis output to create a series of six reports for various system engineering needs. These reports are highlighted in the presentation. The ETA Tool was developed by NASA to support the verification of fault management requirements early in the launch vehicle design process. Due to their early development during the design process, the TEAMS-based diagnostic model and the ETA Tool were able to positively influence the system design by highlighting gaps in failure detection, fault isolation, and failure recovery.

  15. Simulation of demand-response power management in smart city

    NASA Astrophysics Data System (ADS)

    Kadam, Kshitija

    Smart grids manage energy efficiently through intelligent monitoring and control of all the components connected to the electrical grid. Advanced digital technology, combined with sensors and power electronics, can greatly improve transmission line efficiency. This thesis proposed a model of a deregulated grid that supplied power to a diverse set of consumers and allowed them to participate in the decision-making process through two-way communication. The deregulated market encourages competition at the generation and distribution levels through communication with the central system operator. A software platform was developed and executed to manage the communication, as well as the energy management of the overall system. It also demonstrated the self-healing property of the system when a fault occurs and results in an outage. The system not only recovered from the fault but managed to do so in a short time with minimal human involvement.

  16. Source character of microseismicity in the San Francisco Bay block, California, and implications for seismic hazard

    USGS Publications Warehouse

    Olson, J.A.; Zoback, M.L.

    1998-01-01

    We examine relocated seismicity within a 30-km-wide crustal block containing San Francisco Bay and bounded by two major right-lateral strike-slip fault systems, the Hayward and San Andreas faults, to determine seismicity distribution, source character, and possible relationship to proposed faults. Well-located low-level seismicity (Md ≤ 3.0) has occurred persistently within this block throughout the recording interval (1969 to 1995), with the highest levels of activity occurring along or directly adjacent to (within ~5 km) the bounding faults and falling off toward the long axis of the bay. The total seismic moment release within the interior of the Bay block since 1969 is equivalent to one ML 3.8 earthquake, one to two orders of magnitude lower than activity along and within 5 km of the bounding faults. Focal depths of reliably located events within the Bay block are generally less than 13 km with most seismicity in the depth range of 7 to 12 km, similar to focal depths along both the adjacent portions of the San Andreas and Hayward faults. Focal mechanisms for Md 2 to 3 events within the Bay block mimic focal mechanisms along the adjacent San Andreas fault zone and in the East Bay, suggesting that the Bay block is responding to a similar regional stress field. Two potential seismic source zones have been suggested within the Bay block. Our hypocentral depths and focal mechanisms suggest that a proposed subhorizontal detachment fault 15 to 18 km beneath the Bay is not seismically active. Several large-scale linear NW-trending aeromagnetic anomalies within the Bay block were previously suggested to represent large through-going subvertical fault zones. The two largest earthquakes (both Md 3.0) in the Bay block since 1969 occur near two of these large-scale linear aeromagnetic anomalies; both have subvertical nodal planes with right-lateral slip subparallel to the magnetic anomalies, suggesting that structures related to the anomalies may be capable of brittle failure. Geodetic, focal mechanism and seismicity data all suggest the Bay block is responding elastically to the same regional stresses affecting the bounding faults; however, continuous Holocene reflectors across the proposed fault zones suggest that if the magnetic anomalies represent basement fault zones, then these faults must have recurrence times one to several orders of magnitude longer than on the bounding faults.

  17. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
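
    As a toy illustration of the decomposition idea (under the simplifying assumption that each model equation computes one variable from others; the paper's structural framework is more general), the local submodel for a given output is its backward-reachable set of equations:

      def extract_submodel(model, output):
          """model: dict mapping each computed variable to the set of
          variables its equation depends on (inputs like u1 are not keys)."""
          needed, stack = set(), [output]
          while stack:
              var = stack.pop()
              if var in needed or var not in model:
                  continue                             # already kept, or an exogenous input
              needed.add(var)                          # keep the local equation for var
              stack.extend(model[var])                 # recurse into its dependencies
          return {v: model[v] for v in needed}

      # Toy three-tank dependency structure: only equations upstream of h3 survive.
      three_tank = {"h1": {"u1"}, "h2": {"h1", "u2"}, "h3": {"h2"}}
      sub = extract_submodel(three_tank, "h3")  # {'h3': {'h2'}, 'h2': {'h1', 'u2'}, 'h1': {'u1'}}

    An estimator, fault isolator, or EOL predictor for h3 can then operate on this smaller submodel instead of the global model, which is the source of the scalability gain described above.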

  18. Fault detection and multiclassifier fusion for unmanned aerial vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Yan, Weizhong

    2001-03-01

    UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level comparable to that of piloted aircraft. This paper applies multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the fault detection of the UAV are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely majority vote, weighted majority vote, and Naive Bayes combination. All three schemes are simple and need no retraining. The three fusion schemes (except the majority vote, which gives the average performance of the three classifiers) show classification performance better than or equal to that of the best individual classifier. The unavoidable correlation between classifiers with binary outputs is observed in this study. We conclude that it is the correlation between the classifiers that prevents the fusion schemes from achieving even better performance.
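
    Minimal versions of the three fusion rules named above, for binary (0/1) classifier outputs; in practice the weights and conditional probabilities would be estimated from validation data, and the independence assumed by the Naive Bayes rule is exactly what the observed inter-classifier correlation violates:

      import numpy as np

      def majority_vote(decisions):
          return int(np.sum(decisions) > len(decisions) / 2)

      def weighted_majority_vote(decisions, weights):
          d = np.asarray(decisions)
          return int(np.dot(weights, d) > np.dot(weights, 1 - d))

      def naive_bayes_combination(decisions, p_out_given_fault, p_out_given_ok,
                                  prior_fault=0.5):
          """p_out_given_*[i] = P(classifier i outputs 1 | class)."""
          like_fault, like_ok = prior_fault, 1 - prior_fault
          for d, pf, po in zip(decisions, p_out_given_fault, p_out_given_ok):
              like_fault *= pf if d else (1 - pf)      # likelihood under "fault"
              like_ok *= po if d else (1 - po)         # likelihood under "no fault"
          return int(like_fault > like_ok)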

  19. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  20. A Review of Diagnostic Techniques for ISHM Applications

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Biswas, Gautam; Aaseng, Gordon; Narasimhan, Sriam; Pattipati, Krishna

    2005-01-01

    System diagnosis is an integral part of any Integrated System Health Management application. Diagnostic applications make use of system information from the design phase, such as safety and mission assurance analysis, failure modes and effects analysis, hazards analysis, functional models, fault propagation models, and testability analysis. In modern process control and equipment monitoring systems, topological and analytic models of the nominal system, derived from design documents, are also employed for fault isolation and identification. Depending on the complexity of the monitored signals from the physical system, diagnostic applications may involve straightforward trending and feature extraction techniques to retrieve the parameters of importance from the sensor streams. They also may involve very complex analysis routines, such as signal processing, learning, or classification methods, to derive the parameters of importance to diagnosis. The process that is used to diagnose anomalous conditions from monitored system signals varies widely across the different approaches to system diagnosis. Rule-based expert systems, case-based reasoning systems, model-based reasoning systems, learning systems, and probabilistic reasoning systems are examples of the many diverse approaches to diagnostic reasoning. Many engineering disciplines have specific approaches to modeling, monitoring and diagnosing anomalous conditions. Therefore, there is no "one-size-fits-all" approach to building diagnostic and health monitoring capabilities for a system. For instance, the conventional approaches to diagnosing failures in rotorcraft applications are very different from those used in communications systems. Further, online and offline automated diagnostic applications are integrated into an operations framework with flight crews, flight controllers and maintenance teams. While the emphasis of this paper is automation of health management functions, striking the correct balance between automated and human-performed tasks is a vital concern.

  1. New procedure for gear fault detection and diagnosis using instantaneous angular speed

    NASA Astrophysics Data System (ADS)

    Li, Bing; Zhang, Xining; Wu, Jili

    2017-02-01

    Beyond the extreme complexity of gear dynamics, vibration-based fault diagnosis results are sometimes misled and even distorted by interference from the transmission channel or from other components such as bearings and bars. Recently, the research field of Instantaneous Angular Speed (IAS) has attracted significant attention due to its advantages over conventional vibration analysis. Building on the advantages of the IAS signal, this paper presents a new feature extraction method that combines Empirical Mode Decomposition (EMD) and Autocorrelation Local Cepstrum (ALC) for fault diagnosis of a sophisticated multistage gearbox. First, as a pre-processing step, signal reconstruction is employed to address the oversampling issue caused by the high resolution of the angular sensor and by the test speed. Then, adaptive EMD is used to acquire a number of Intrinsic Mode Functions (IMFs). Nevertheless, not all IMFs are needed for further analysis, since different IMFs have different sensitivities to the fault. Hence, the cosine similarity metric is introduced to select the most sensitive IMF. Even so, the sensitive IMF alone is insufficient for gear fault diagnosis because the fault-related component is weak. Therefore, as the final step, ALC is used for signal de-noising and feature extraction. The effectiveness and robustness of the new approach have been validated experimentally on two gear test rigs with gears under different working conditions. Diagnosis results show that the new approach is capable of effectively handling gear fault diagnosis, i.e., the highlighted quefrency and its rahmonics, corresponding to the rotary period and its multiples, are displayed clearly in the cepstrum record of the proposed method.
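
    A one-function sketch of the IMF-selection step, assuming (as seems implied) that similarity is measured between each IMF and the raw reconstructed IAS signal:

      import numpy as np

      def most_sensitive_imf(imfs, signal):
          """imfs: array of shape (n_imfs, n_samples); returns the IMF whose
          cosine similarity with the raw signal is largest in magnitude."""
          sims = [np.dot(imf, signal) / (np.linalg.norm(imf) * np.linalg.norm(signal))
                  for imf in imfs]
          return imfs[int(np.argmax(np.abs(sims)))]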

  2. Generic, scalable and decentralized fault detection for robot swarms.

    PubMed

    Tarapore, Danesh; Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation.
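
    A generic stand-in for the online normal/abnormal discrimination described above, using a simple nearest-exemplar novelty rule in place of the paper's immune-inspired algorithm; the feature vectors summarizing a neighbor's observed behavior are assumed to be given:

      import numpy as np

      class NoveltyDetector:
          """Flag a neighbor's behavior as abnormal when its feature vector is
          far from every 'normal' exemplar collected so far."""
          def __init__(self, radius=1.0):
              self.normal, self.radius = [], radius
          def observe(self, feature_vec, trusted=False):
              x = np.asarray(feature_vec, dtype=float)
              if trusted or not self.normal:
                  self.normal.append(x)                # grow the normal set online
                  return False
              dists = [np.linalg.norm(x - n) for n in self.normal]
              return min(dists) > self.radius          # True = abnormal (possible fault)

      det = NoveltyDetector()
      det.observe([0.5, 0.1], trusted=True)            # learn a normal behavior
      is_faulty = det.observe([3.0, 2.5])              # far from normal: True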

  3. High-Resolution Seismic Reflection Profiling Across the Black Hills Fault, Clark County, Nevada: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Zaragoza, S. A.; Snelson, C. M.; Jernsletten, J. A.; Saldana, S. C.; Hirsch, A.; McEwan, D.

    2005-12-01

    The Black Hills fault (BHF) is located in the central Basin and Range Province of western North America, a region that has undergone significant Cenozoic extension. The BHF is an east-dipping normal fault that forms the northwestern structural boundary of the Eldorado basin and lies ~20 km southeast of Las Vegas, Nevada. A recent trench study indicated that the fault offsets Holocene strata, and is capable of producing Mw 6.4-6.8 earthquakes. These estimates indicate a subsurface rupture length at least 10 km greater than the length of the scarp. This poses a significant hazard to structures such as the nearby Hoover Dam Bypass Bridge, which is being built to withstand a Mw 6.2-7.0 earthquake on local faults. If the BHF does continue in the subsurface, this structure, as well as nearby communities (Las Vegas, Boulder City, and Henderson), may not be as safe as previously expected. Previous attempts to image the fault with shallow seismics (hammer source) were inconclusive. However, gravity studies imply that the fault continues south of the scarp. Therefore, a new experiment utilizing high-resolution seismic reflection was performed to image subsurface geologic structures south of the scarp. At each shot point, a stack of four 30-160 Hz vibroseis sweeps of 15 s duration was recorded on a 60-channel system with 40 Hz geophones. This produced two 300 m reflection profiles, with a maximum depth of 500-600 m. A preliminary look at these data indicates the existence of two faults, potentially confirming that the BHF continues in the subsurface south of the scarp.

  4. Generic, scalable and decentralized fault detection for robot swarms

    PubMed Central

    Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. This is achieved with an immune system-inspired classifier that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation. PMID:28806756

  5. Distributed adaptive diagnosis of sensor faults using structural response data

    NASA Astrophysics Data System (ADS)

    Dragos, Kosmas; Smarsly, Kay

    2016-10-01

    The reliability and consistency of wireless structural health monitoring (SHM) systems can be compromised by sensor faults, leading to miscalibrations, corrupted data, or even data loss. Several research approaches towards fault diagnosis, referred to as ‘analytical redundancy’, have been proposed that analyze the correlations between different sensor outputs. In wireless SHM, most analytical redundancy approaches require centralized data storage on a server for data analysis, while other approaches exploit the on-board computing capabilities of wireless sensor nodes, analyzing the raw sensor data directly on board. However, using raw sensor data poses an operational constraint due to the limited power resources of wireless sensor nodes. In this paper, a new distributed autonomous approach towards sensor fault diagnosis based on processed structural response data is presented. The inherent correlations among Fourier amplitudes of acceleration response data, at peaks corresponding to the eigenfrequencies of the structure, are used for diagnosis of abnormal sensor outputs at a given structural condition. Representing an entirely data-driven analytical redundancy approach that does not require any a priori knowledge of the monitored structure or of the SHM system, artificial neural networks (ANN) are embedded into the sensor nodes enabling cooperative fault diagnosis in a fully decentralized manner. The distributed analytical redundancy approach is implemented into a wireless SHM system and validated in laboratory experiments, demonstrating the ability of wireless sensor nodes to self-diagnose sensor faults accurately and efficiently with minimal data traffic. Besides enabling distributed autonomous fault diagnosis, the embedded ANNs are able to adapt to the actual condition of the structure, thus ensuring accurate and efficient fault diagnosis even in case of structural changes.
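
    As a schematic illustration of this analytical-redundancy idea (invented data and a generic scikit-learn network, not the paper's embedded implementation), a small neural net learns how one sensor's Fourier peak amplitudes co-vary with its peers'; a reading whose residual exceeds a learned threshold is diagnosed as a sensor fault rather than a structural change, since a structural change would shift all sensors consistently.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Rows: monitoring events; columns: Fourier amplitudes at eigenfrequency
      # peaks, one column per sensor node (purely illustrative data).
      rng = np.random.default_rng(0)
      excitation = rng.uniform(0.8, 1.2, size=(200, 1))
      amps = excitation * np.array([1.0, 0.7, 1.3, 0.9]) + rng.normal(0, 0.02, (200, 4))

      monitored = 2
      X = np.delete(amps, monitored, axis=1)        # peers' peak amplitudes
      y = amps[:, monitored]                        # monitored sensor's amplitude

      net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                         random_state=0).fit(X, y)
      residuals = np.abs(y - net.predict(X))
      threshold = residuals.mean() + 3 * residuals.std()   # illustrative limit

      # At runtime, a residual above `threshold` flags the sensor rather than
      # the structure, because the peer correlation is broken only at this node.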

  6. Flight deck engine advisor

    NASA Technical Reports Server (NTRS)

    Shontz, W. D.; Records, R. M.; Antonelli, D. R.

    1992-01-01

    The focus of this project is on alerting pilots to impending events in such a way as to provide the additional time required for the crew to make critical decisions concerning non-normal operations. The project addresses pilots' need for support in diagnosis and trend monitoring of faults as they affect decisions that must be made within the context of the current flight. Monitoring and diagnostic modules developed under the NASA Faultfinder program were restructured and enhanced using input data from an engine model and real engine fault data. Fault scenarios were prepared to support knowledge base development activities on the MONITAUR and DRAPhyS modules of Faultfinder. An analysis of the information requirements for fault management was included in each scenario. A conceptual framework was developed for systematic evaluation of the impact of context variables on pilot action alternatives as a function of event/fault combinations.

  7. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort intended to delineate, in a quantitative manner, the relationship between reliability and fault-tolerant control. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized; as a consequence of these properties, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
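
    A worked toy example (not from the paper) of the affine-in-coverage claim: for a duplex system with per-unit failure rate λ and coverage c, the three-state Markov model (both up, one up, failed) integrates to R(t) = exp(-2λt) + 2c(exp(-λt) - exp(-2λt)), so R is affine in c, and for small λt, R(t) ≈ 1 - 2(1 - c)λt: the uncovered fraction behaves as a single point failure.

      import numpy as np

      lam = 1e-4                 # per-unit failure rate (1/h), illustrative
      t = 1000.0                 # mission time (h), illustrative

      def duplex_reliability(c, lam, t):
          # From "both up" the system leaves at rate 2*lam; with probability c
          # the failure is covered (one unit keeps working), with probability
          # 1 - c it is an uncovered, system-killing failure.
          return np.exp(-2 * lam * t) + 2 * c * (np.exp(-lam * t) - np.exp(-2 * lam * t))

      for c in (1.0, 0.99, 0.9):
          print(f"coverage {c:4.2f}: R = {duplex_reliability(c, lam, t):.6f}")
      # Reliability falls linearly as coverage drops: inadequate coverage
      # induces a dominant single-point failure mode, as the abstract states.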

  8. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    NASA Astrophysics Data System (ADS)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

    It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, the transference of these results to the authorities managing the post-disaster situation is not straightforward and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. Then we use this model to evaluate the moment deficit on the subduction interface and changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learned from discussions with local authorities.
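
    The stress-transfer step rests on the standard Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the slip direction, Δσn the normal stress change (positive for unclamping), and μ′ the effective friction coefficient. A minimal sketch with illustrative numbers, not values from the Pedernales study:

      def coulomb_stress_change(delta_tau, delta_sigma_n, mu_eff=0.4):
          # delta_tau: shear stress change in the slip direction (MPa)
          # delta_sigma_n: normal stress change, positive = unclamping (MPa)
          # mu_eff: effective friction (0.4 is a common generic choice)
          return delta_tau + mu_eff * delta_sigma_n

      # Invented values: a receiver fault brought closer to failure.
      dcfs = coulomb_stress_change(delta_tau=0.12, delta_sigma_n=0.05)
      print(f"dCFS = {dcfs:.3f} MPa ({'promoted' if dcfs > 0 else 'inhibited'})")

    Decision-support maps can then simply color each receiver fault or volcano by the sign and magnitude of its ΔCFS.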

  9. Automated fault-management in a simulated spaceflight micro-world

    NASA Technical Reports Server (NTRS)

    Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja

    2002-01-01

    BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examined the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related, atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) with a computerized fault-finding guide, at a medium LOA with an automated diagnosis and recovery advisory, and at a high LOA with automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification time, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at the high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources through automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.

  10. Using focal mechanism solutions to correlate earthquakes with faults in the Lake Tahoe-Truckee area, California and Nevada, and to help design LiDAR surveys for active-fault reconnaissance

    NASA Astrophysics Data System (ADS)

    Cronin, V. S.; Lindsay, R. D.

    2011-12-01

    Geomorphic analysis of hillshade images produced from aerial LiDAR data has been successful in identifying youthful fault traces. For example, the recently discovered Polaris fault just northwest of Lake Tahoe, California/Nevada, was recognized using LiDAR data that had been acquired by local government to assist land-use planning. Subsequent trenching by consultants under contract to the US Army Corps of Engineers has demonstrated Holocene displacement. The Polaris fault is inferred to be capable of generating a magnitude 6.4-6.9 earthquake, based on its apparent length and offset characteristics (Hunter and others, 2011, BSSA 101[3], 1162-1181). Dingler and others (2009, GSA Bull 121[7/8], 1089-1107) describe paleoseismic or geomorphic evidence for late Neogene displacement along other faults in the area, including the West Tahoe-Dollar Point, Stateline-North Tahoe, and Incline Village faults. We have used the seismo-lineament analysis method (SLAM; Cronin and others, 2008, Env Eng Geol 14[3], 199-219) to establish a tentative spatial correlation between each of the previously mentioned faults, as well as with segments of the Dog Valley fault system, and one or more earthquake(s). The ~18 earthquakes we have tentatively correlated with faults in the Tahoe-Truckee area occurred between 1966 and 2008, with magnitudes between 3 and ~6. Given the focal mechanism solution for a well-located shallow-focus earthquake, the nodal planes can be projected to Earth's surface as represented by a DEM, plus-or-minus the vertical and horizontal uncertainty in the focal location, to yield two seismo-lineament swaths. The trace of the fault that generated the earthquake is likely to be found within one of the two swaths [1] if the fault surface is emergent, and [2] if the fault surface is approximately planar in the vicinity of the focus. Seismo-lineaments from several of the earthquakes studied overlap in a manner that suggests they are associated with the same fault. The surface trace of both the Polaris fault and the Dog Valley fault system are within composite swaths defined by overlapping seismo-lineaments. Composite seismo-lineaments indicate that multiple historic earthquakes might be associated with a fault. This apparently successful correlation of earthquakes with faults in an area where geologic mapping is good suggests another use for SLAM in areas where fault mapping is incomplete, inadequate or made particularly difficult because of vegetative cover. If no previously mapped fault exists along a composite swath generated using well constrained focal mechanism solutions, the swath might be used to guide the design of a LiDAR survey in support of reconnaissance for the causative fault. The acquisition and geomorphic analysis of LiDAR data along a compound seismo-lineament swath might reveal geomorphic evidence of a previously unrecognized fault trace that is worthy of additional field study.
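
    For a planar fault and flat topography, the projection step reduces to offsetting the epicenter horizontally by depth/tan(dip) toward the up-dip azimuth and widening the resulting strike-parallel line by the location uncertainties. The sketch below is this flat-surface simplification only (the method itself projects onto a DEM), and all inputs are invented.

      import numpy as np

      def seismo_lineament(epicenter_xy, depth_km, strike_deg, dip_deg,
                           horiz_unc_km, vert_unc_km):
          # Dip direction is strike + 90 deg (right-hand rule), so the up-dip
          # azimuth is strike - 90 deg; x = east, y = north (km).
          updip_az = np.radians(strike_deg - 90.0)
          offset = depth_km / np.tan(np.radians(dip_deg))
          u = np.array([np.sin(updip_az), np.cos(updip_az)])
          center = np.asarray(epicenter_xy, dtype=float) + offset * u
          half_width = horiz_unc_km + vert_unc_km / np.tan(np.radians(dip_deg))
          return center, half_width      # swath runs parallel to strike

      c, w = seismo_lineament((0.0, 0.0), depth_km=8.0, strike_deg=30.0,
                              dip_deg=60.0, horiz_unc_km=1.0, vert_unc_km=2.0)
      print(c, w)   # center offset ~4.6 km up-dip; half-width ~2.2 km

    Each focal mechanism has two nodal planes, so the full method yields two such swaths per earthquake, one of which should contain the causative fault trace.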

  11. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles

    PubMed Central

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-01-01

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical to SOE estimation, and a current sensor is usually used to obtain it; if the current sensor fails, the SOE estimate may suffer large errors. This paper therefore makes the following contribution: current sensor fault detection and SOE estimation are realized simultaneously. Through a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated; by taking advantage of this accurate fault estimate, the influence of the fault can be eliminated and compensated, so the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench was established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately and that, simultaneously, the SOE can also be estimated accurately, with an estimation error little influenced by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists. PMID:27548183
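
    A minimal discrete-time sketch of the PIO idea on a toy scalar system (not the paper's battery model; plant coefficients and gains are invented and hand-tuned for stable error dynamics): the observer augments the state estimate with a sensor-fault estimate and drives both from the output residual.

      a, b = 0.95, 0.05        # toy scalar plant x[k+1] = a*x[k] + b*u (invented)
      L, K = 0.7, 0.4          # observer gains, chosen for stable error dynamics
      x, x_hat, f_hat = 1.0, 0.0, 0.0

      for k in range(600):
          u = 1.0
          fault = 0.5 if k >= 300 else 0.0   # additive sensor bias appears at k=300
          y = x + fault                      # faulty current measurement
          r = y - (x_hat + f_hat)            # output residual
          x = a * x + b * u                  # plant propagates
          x_hat = a * x_hat + b * u + L * r  # proportional (state) correction
          f_hat = f_hat + K * r              # integral (fault) accumulation

      print(f"estimated sensor fault: {f_hat:.2f} (true 0.50)")
      # The compensated measurement y - f_hat can then feed the SOE estimator,
      # so the SOE error stays small even while the sensor fault persists.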

  12. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles.

    PubMed

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-08-19

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical to SOE estimation, and a current sensor is usually used to obtain it; if the current sensor fails, the SOE estimate may suffer large errors. This paper therefore makes the following contribution: current sensor fault detection and SOE estimation are realized simultaneously. Through a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated; by taking advantage of this accurate fault estimate, the influence of the fault can be eliminated and compensated, so the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench was established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately and that, simultaneously, the SOE can also be estimated accurately, with an estimation error little influenced by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists.

  13. Map and database of Quaternary faults in Venezuela and its offshore regions

    USGS Publications Warehouse

    Audemard, F.A.; Machette, M.N.; Cox, J.W.; Dart, R.L.; Haller, K.M.

    2000-01-01

    As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Disaster Reduction. The project is sponsored by the International Lithosphere Program and funded by the USGS’s National Earthquake Hazards Reduction Program. The primary elements of the project are general supervision and interpretation of geologic/tectonic information, data compilation and entry for the fault catalog, database design and management, and digitization and manipulation of data in ARC/INFO. For the compilation of data, we engaged experts in Quaternary faulting, neotectonics, paleoseismology, and seismology.

  14. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.
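
    A schematic of the monitor-and-respond pattern described above, illustrative only and not JPL flight software; all telemetry names, thresholds, and responses below are invented.

      # Illustrative fault-protection loop; names, thresholds, responses invented.
      RESPONSES = {
          "COMM_LOSS":  lambda: print("swap to backup transponder; point antenna at Earth"),
          "OVER_TEMP":  lambda: print("duty-cycle heaters; shed non-critical loads"),
          "POWER_OVER": lambda: print("shed loads per priority table"),
      }

      def safe_mode():
          # General-purpose response: a lower-power, predictable configuration
          # that buys time for diagnosis by experts on the ground.
          print("entering SAFE MODE: survival loads only, stable attitude")

      def monitor(telemetry):
          faults = []
          if telemetry["hours_since_uplink"] > 72.0:
              faults.append("COMM_LOSS")
          if telemetry["temp_c"] > 45.0:
              faults.append("OVER_TEMP")
          if telemetry["bus_load_w"] > telemetry["bus_capacity_w"]:
              faults.append("POWER_OVER")
          return faults

      def respond(faults):
          if len(faults) > 1:            # complex or multiple faults: go safe
              safe_mode()
              return
          for f in faults:
              RESPONSES.get(f, safe_mode)()   # unknown fault also goes safe

      respond(monitor({"hours_since_uplink": 2.0, "temp_c": 48.0,
                       "bus_load_w": 300.0, "bus_capacity_w": 450.0}))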

  15. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    NASA Astrophysics Data System (ADS)

    Woo, G.

    2005-12-01

    Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. Especially where a high deductible is in force, this requires estimation of the epistemic uncertainty on fault geometry and activity. Transport infrastructure insurance is of practical interest in seismic countries. On the North Anatolian Fault in Turkey, there is uncertainty over an unbroken segment between the eastern end of the Düzce Fault and Bolu. This may have ruptured during the 1944 earthquake. Existing hazard maps may simply use a question mark to flag uncertainty. However, a far more informative type of hazard map might express spatial variations in the confidence level associated with a fault map. Through such visual guidance, an insurance risk analyst would be better placed to price earthquake cover, allowing for epistemic uncertainty.
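
    A logic-tree toy example (all numbers invented) of how epistemic uncertainty over fault activity and geometry can be folded into a single probability of damaging surface rupture for pricing:

      # Branches: (weight, annual probability of surface rupture on the segment).
      # Weights express epistemic belief in alternative fault interpretations.
      branches = [
          (0.5, 1e-3),   # segment active, long rupture geometry
          (0.3, 4e-4),   # segment active, short rupture geometry
          (0.2, 0.0),    # segment inactive (blank on the map)
      ]
      assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9

      annual_p = sum(w * p for w, p in branches)
      print(f"weighted annual rupture probability: {annual_p:.2e}")
      # An insurer can price cover against this weighted mean, or examine the
      # branch spread to see how much epistemic uncertainty drives the price.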

  16. Research on prognostics and health management of underground pipeline

    NASA Astrophysics Data System (ADS)

    Zhang, Guangdi; Yang, Meng; Yang, Fan; Ni, Na

    2018-04-01

    As cities develop, underground pipeline networks become more and more complex; they bear directly on the safety and normal operation of a city and are known as "the lifeline of the city". This paper first introduces the principles of PHM (Prognostics and Health Management) technology, and then proposes a fault diagnosis, prognostics, and health management scheme for underground pipelines: faults appearing in the operation of the pipeline are diagnosed and predicted, and a health assessment of the whole underground pipe network is then made in order to ensure that the pipeline operates safely. Finally, future research directions are summarized.

  17. Networking of Icelandic Earth Infrastructures - Natural laboratories and Volcano Supersites

    NASA Astrophysics Data System (ADS)

    Vogfjörd, K. S.; Sigmundsson, F.; Hjaltadóttir, S.; Björnsson, H.; Arason, Ø.; Hreinsdóttir, S.; Kjartansson, E.; Sigbjörnsson, R.; Halldórsson, B.; Valsson, G.

    2012-04-01

    The backbone of Icelandic geoscientific research infrastructure is the country's permanent monitoring networks, which have been built up to monitor seismic and volcanic hazard and deformation of the Earth's surface. The networks are mainly focussed around the plate boundary in Iceland, particularly the two seismic zones, where earthquakes of up to M7.3 have occurred in centuries past, and the rift zones with over 30 active volcanic systems where a large number of powerful eruptions have occurred, including highly explosive ones. The main observational systems are seismic, strong motion, GPS and bore-hole strain networks, with the addition of more recent systems like hydrological stations, permanent and portable radars, ash-particle counters and gas monitoring systems. Most of the networks are owned by a handful of Icelandic institutions, but some are operated in collaboration with international institutions and universities. The networks have been in operation for years to decades and have recorded large volumes of research quality data. The main Icelandic infrastructures will be networked in the European Plate Observing System (EPOS). The plate boundary in the South Iceland seismic zone (SISZ), with its book-shelf tectonics and repeating major earthquake sequences of up to M7 events, has the potential to be defined as a natural laboratory within EPOS. Work towards integrating multidisciplinary data and technologies from the monitoring infrastructures in the SISZ with other fault regions has started in the FP7 project NERA, under the heading of Networking of Near-Fault Observatories. The purpose is to make research-quality data from near-fault observatories available to the research community, as well as to promote transfer of knowledge and technical know-how between the different observatories of Europe, in order to create a network of fault-monitoring networks. The seismic and strong-motion systems in the SISZ are also, to some degree, being networked nationally to strengthen their early warning capabilities. In response to the far-reaching dispersion of ash from the 2010 Eyjafjallajökull eruption and subsequent disturbance to European air-space, the instrumentation of the Icelandic volcano observatory was greatly improved in number and capability to better monitor sub-surface volcanic processes as well as the air-borne products of eruptions. This infrastructure will also be networked with other European volcano observatories in EPOS. Finally, the Icelandic EPOS team, together with other European collaborators, has responded to an FP7 call for the establishment of an Icelandic volcano supersite, where land- and space-based data will be made available to researchers and hazard managers, in line with the implementation plan of the GEO. The focus of the Icelandic volcano supersite is the active volcanoes in Iceland's Eastern volcanic zone.

  18. Hybrid Modeling Improves Health and Performance Monitoring

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  19. The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.

    NASA Astrophysics Data System (ADS)

    Thompson, T. B.; Meade, B. J.

    2017-12-01

    While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the Mw 7.3 Landers and Mw 8.0 Wenchuan earthquakes. Near the steepest topography, we find the use of flat Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.

  20. High Speed Operation and Testing of a Fault Tolerant Magnetic Bearing

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Clark, Daniel

    2004-01-01

    Research activities undertaken to upgrade the fault-tolerant facility, continue testing of high-speed fault-tolerant operation, and assist in the commissioning of the high-temperature (1000 degrees F) thrust magnetic bearing are described. The fault-tolerant magnetic bearing test facility was upgraded to operate at up to 40,000 RPM. The necessary upgrades included new state-of-the-art position sensors with high frequency modulation and new power edge filtering of amplifier outputs. A comparison study of the new sensors and the previous system was done, as well as a noise assessment of the sensor-to-controller signals. A comparison study of power edge filtering for amplifier-to-actuator signals was also done; this information is valuable for all position sensing and motor actuation applications. After these facility upgrades were completed, the rig is believed to be capable of 40,000 RPM operation, though this has yet to be demonstrated. Other upgrades included verification and upgrading of safety shielding, and upgrading of control algorithms. The rig will now also be used to demonstrate motoring capabilities, and control algorithms for this are in the process of being created. Recently an extreme-temperature thrust magnetic bearing was designed from the ground up; the thrust bearing was designed to fit within the existing high temperature facility. The retrofit began near the end of the summer of 2004 and continues currently. Contract staff authored a NASA-TM entitled "An Overview of Magnetic Bearing Technology for Gas Turbine Engines", containing a compilation of bearing data as it pertains to operation in the regime of the gas turbine engine and a presentation of how magnetic bearings can become a viable candidate for use in future engine technology.

  1. Real World Experience With Ion Implant Fault Detection at Freescale Semiconductor

    NASA Astrophysics Data System (ADS)

    Sing, David C.; Breeden, Terry; Fakhreddine, Hassan; Gladwin, Steven; Locke, Jason; McHugh, Jim; Rendon, Michael

    2006-11-01

    The Freescale automatic fault detection and classification (FDC) system has logged data from over 3.5 million implants in the past two years. The Freescale FDC system is a low cost system which collects summary implant statistics at the conclusion of each implant run. The data is collected by either downloading implant data log files from the implant tool workstation, or by exporting summary implant statistics through the tool's automation interface. Compared to the traditional FDC systems which gather trace data from sensors on the tool as the implant proceeds, the Freescale FDC system cannot prevent scrap when a fault initially occurs, since the data is collected after the implant concludes. However, the system can prevent catastrophic scrap events due to faults which are not detected for days or weeks, leading to the loss of hundreds or thousands of wafers. At the Freescale ATMC facility, the practical applications of the FDC system fall into two categories: PM trigger rules which monitor tool signals such as ion gauges and charge control signals, and scrap prevention rules which are designed to detect specific failure modes that have been correlated to yield loss and scrap. PM trigger rules are designed to detect shifts in tool signals which indicate normal aging of tool systems. For example, charging parameters gradually shift as flood gun assemblies age, and when charge control rules start to fail, a flood gun PM is performed. Scrap prevention rules are deployed to detect events such as particle bursts and excessive beam noise, events which have been correlated to yield loss. The FDC system does have tool log-down capability, and scrap prevention rules often use this capability to automatically log the tool into a maintenance state while simultaneously paging the sustaining technician for data review and disposition of the affected product.
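
    A schematic of the two rule categories described above (PM triggers versus scrap prevention), operating on per-run summary statistics rather than in-situ traces; all field names and limits are invented for illustration.

      # Post-run fault detection on summary statistics (illustrative rules only).
      def check_run(stats):
          actions = []
          # PM trigger rule: slow drift in charge control indicates flood gun aging.
          if stats["charge_ctrl_v"] > 4.5:
              actions.append("schedule flood gun PM")
          # Scrap prevention rules: events correlated with yield loss.
          if stats["particle_burst_count"] > 10 or stats["beam_noise_pct"] > 2.0:
              actions.append("log tool DOWN; page sustaining technician")
          return actions

      run = {"charge_ctrl_v": 4.7, "particle_burst_count": 3, "beam_noise_pct": 2.4}
      for action in check_run(run):
          print(action)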

  2. A Thermal Expert System (TEXSYS) development overview - AI-based control of a Space Station prototype thermal bus

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Hack, E. C.

    1990-01-01

    A knowledge-based control system for real-time control and fault detection, isolation and recovery (FDIR) of a prototype two-phase Space Station Freedom external thermal control system (TCS) is discussed in this paper. The Thermal Expert System (TEXSYS) has been demonstrated in recent tests to be capable of both fault anticipation and detection and real-time control of the thermal bus. Performance requirements were achieved by using a symbolic control approach, layering model-based expert system software on a conventional numerical data acquisition and control system. The model-based capabilities of TEXSYS were shown to be advantageous during software development and testing. One representative example is given from on-line TCS tests of TEXSYS. The integration and testing of TEXSYS with a live TCS testbed provides some insight on the use of formal software design, development and documentation methodologies to qualify knowledge-based systems for on-line or flight applications.

  3. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle when the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.
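
    A toy sketch of the micro rollback idea (illustrative, not the MP design): keep the last N snapshots of architectural state so that forward progress never waits on detection; when a mismatch is flagged several cycles late, roll back that many cycles and re-execute.

      from collections import deque

      class MicroRollbackCore:
          """Toy core state with an N-deep rollback buffer (illustrative)."""

          def __init__(self, depth=4):
              self.regs = {"r0": 0, "r1": 0}
              self.history = deque(maxlen=depth)   # snapshots of recent cycles

          def step(self, write_reg, value):
              # Snapshot *before* the cycle commits, then make forward progress;
              # error detection for this cycle may complete several cycles later.
              self.history.append(dict(self.regs))
              self.regs[write_reg] = value

          def rollback(self, cycles):
              # An error detected `cycles` cycles ago: undo to that boundary.
              for _ in range(cycles):
                  self.regs = self.history.pop()

      core = MicroRollbackCore()
      for v in (1, 2, 3):
          core.step("r0", v)
      core.rollback(2)            # mismatch detected 2 cycles late
      assert core.regs["r0"] == 1 # state as of the erroneous cycle's start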

  4. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degraded cathode performance of the TWT is a common transmitter fault. In this paper, a prediction model based on a set of key TWT parameters is proposed. By choosing proper parameters and training an adaptive neuro-fuzzy inference system (ANFIS) model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for judging the overall health of TWT transmitters.

  5. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.

  6. Triggering of destructive earthquakes in El Salvador

    NASA Astrophysics Data System (ADS)

    Martínez-Díaz, José J.; Álvarez-Gómez, José A.; Benito, Belén; Hernández, Douglas

    2004-01-01

    We investigate the existence of a mechanism of static stress triggering driven by the interaction of normal faults in the Middle American subduction zone and strike-slip faults in the El Salvador volcanic arc. The local geology points to a large strike-slip fault zone, the El Salvador fault zone, as the source of several destructive earthquakes in El Salvador along the volcanic arc. We modeled the Coulomb failure stress (CFS) change produced by the June 1982 and January 2001 subduction events on planes parallel to the El Salvador fault zone. The results have broad implications for future risk management in the region, as they suggest a causative relationship between the position of the normal-slip events in the subduction zone and the strike-slip events in the volcanic arc. After the February 2001 event, an important area of the El Salvador fault zone was loaded with a positive change in Coulomb failure stress (>0.15 MPa). This scenario must be considered in the seismic hazard assessment studies that will be carried out in this area.

  7. Fault activation and induced seismicity in geological carbon storage – Lessons learned from recent modeling studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frederic

    In the light of current concerns related to induced seismicity associated with geological carbon sequestration (GCS), this paper summarizes lessons learned from recent modeling studies on fault activation, induced seismicity, and potential for leakage associated with deep underground carbon dioxide (CO2) injection. Model simulations demonstrate that seismic events large enough to be felt by humans require brittle fault properties and continuous fault permeability allowing pressure to be distributed over a large fault patch that can be ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. A number of simulations show that even a sizable seismic event that could be felt may not be capable of opening a new flow path across the entire thickness of an overlying caprock, and it is very unlikely to cross a system of multiple overlying caprock units. Site-specific model simulations of the In Salah CO2 storage demonstration site showed that deep fractured-zone responses and associated microseismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure, close to the magnitude of the least principal stress. We conclude by emphasizing the importance of site investigation to characterize rock properties and, if at all possible, to avoid brittle rock, such as the proximity of crystalline basement or sites in hard and brittle sedimentary sequences, which are more prone to injection-induced seismicity and permanent damage.

  8. Fault activation and induced seismicity in geological carbon storage – Lessons learned from recent modeling studies

    DOE PAGES

    Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frederic; ...

    2016-09-20

    In the light of current concerns related to induced seismicity associated with geological carbon sequestration (GCS), this paper summarizes lessons learned from recent modeling studies on fault activation, induced seismicity, and potential for leakage associated with deep underground carbon dioxide (CO2) injection. Model simulations demonstrate that seismic events large enough to be felt by humans require brittle fault properties and continuous fault permeability allowing pressure to be distributed over a large fault patch that can be ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. A number of simulations show that even a sizable seismic event that could be felt may not be capable of opening a new flow path across the entire thickness of an overlying caprock, and it is very unlikely to cross a system of multiple overlying caprock units. Site-specific model simulations of the In Salah CO2 storage demonstration site showed that deep fractured-zone responses and associated microseismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure, close to the magnitude of the least principal stress. We conclude by emphasizing the importance of site investigation to characterize rock properties and, if at all possible, to avoid brittle rock, such as the proximity of crystalline basement or sites in hard and brittle sedimentary sequences, which are more prone to injection-induced seismicity and permanent damage.

  9. A System for Fault Management for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies of 20+ minutes with ground control make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing the Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphical User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
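
    One of the four elements, System Effects Analysis, can be pictured as reachability over a component dependency graph. The sketch below is a generic illustration of that idea (component names invented), not the ACAWS implementation.

      # Downstream effects of a failed component via dependency-graph reachability.
      DEPENDS_ON = {                       # child -> components it needs (invented)
          "cabin_fan":    ["bus_A"],
          "co2_scrubber": ["cabin_fan", "bus_B"],
          "comm":         ["bus_A"],
      }

      def affected_by(failed):
          # Anything that (transitively) depends on the failed component is impacted.
          impacted, frontier = set(), {failed}
          while frontier:
              frontier = {c for c, deps in DEPENDS_ON.items()
                          if frontier & set(deps) and c not in impacted}
              impacted |= frontier
          return impacted

      print(sorted(affected_by("bus_A")))
      # -> ['cabin_fan', 'co2_scrubber', 'comm']: losing bus_A ultimately
      # impacts CO2 scrubbing, which is what mission impact assessment reports.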

  10. High resolution shallow imaging of the mega-splay fault in the central Nankai Trough off Kumano

    NASA Astrophysics Data System (ADS)

    Ashi, J.

    2012-12-01

    Steep slopes are continuously developed at water depths between 2200 and 2800 m on the Nankai accretionary prism off Kumano. These slopes are interpreted as surface expressions of the megasplay fault on seismic reflection profiles. The fault plane has been drilled at multiple depths below the seafloor by the IODP NanTroSEIZE project. Mud breccias, recognized only in the hanging wall of the fault (Site C0004) by X-ray CT scanning, are interpreted to have formed by strong ground shaking, and the age of the shallowest mud breccia layer suggests deformation in the 1944 Tonankai earthquake (Sakaguchi et al., 2011). Detailed structures around the fault have been examined on seismic reflection profiles, including 3D experiments. Although the fault plane deeper than 100 m is well imaged, the structure shallower than 100 m is characterized by an obscure sediment veneer suggesting no recent fault activity. Investigation of shallow deformation structures is significant for understanding recent tectonic activity. Therefore, we carried out a deep-towed subbottom profile survey with the ROV NSS (Navigable Sampling System) during Hakuho-maru cruise KH-11-9. We introduced a chirp subbottom profiling system, EdgeTech DW-106, for high-resolution mapping of shallow structures; the ROV NSS also has the capability to take a long core with pinpoint accuracy. The subbottom profile crossing the megasplay fault near Site C0004 exhibits a landward-dipping reflector suggesting the fault plane. The shallowest depth of the reflector is about 10 m below the seafloor, and the strata above it show reflectors parallel to the seafloor without any topographic undulation. The fault must have displaced the shallow formation, because intense deformation indicated by mud breccia was restricted to near the fault zone; slumping or sliding probably modified the shallow formation after the faulting. The shallow deformation near the megasplay fault was well imaged at the fault scarp 20 km southwest of Site C0004. Although the fault plane itself is not recognized, displacements of sedimentary layers are observed along the fault up to 30 m below the seafloor. The landward dip of the fault is estimated to be 30 degrees. Displacements of strata are about 3 m near the surface and about 5 m at 7 m below the seafloor, suggesting accumulation of fault displacement. The structure more than 30 m below the seafloor is obscure due to the decrease of acoustic signal. An active cold seep is expected at this site from high heat flow (Yamano et al., 2012) and many trails of Calyptogena detected by seafloor observations. These results are consistent with the shallow structures revealed by our subbottom profiling survey. References: Sakaguchi, A. et al., Geology 39, 919-922, 2011. Yamano, M. et al., JpGU Meeting abstract, SSS38-P23, 2012.

  11. NASA Space Flight Vehicle Fault Isolation Challenges

    NASA Technical Reports Server (NTRS)

    Bramon, Christopher; Inman, Sharon K.; Neeley, James R.; Jones, James V.; Tuttle, Loraine

    2016-01-01

    The Space Launch System (SLS) is the new NASA heavy lift launch vehicle and is scheduled for its first mission in 2017. The goal of the first mission, which will be uncrewed, is to demonstrate the integrated system performance of the SLS rocket and spacecraft before a crewed flight in 2021. SLS has many of the same logistics challenges as any other large scale program. Common logistics concerns for SLS include integration of discrete programs geographically separated, multiple prime contractors with distinct and different goals, schedule pressures and funding constraints. However, SLS also faces unique challenges. The new program is a confluence of new hardware and heritage, with heritage hardware constituting seventy-five percent of the program. This unique approach to design makes logistics concerns such as testability of the integrated flight vehicle especially problematic. The cost of fully automated diagnostics can be completely justified for a large fleet, but not so for a single flight vehicle. Fault detection is mandatory to assure the vehicle is capable of a safe launch, but fault isolation is another issue. SLS has considered various methods for fault isolation which can provide a reasonable balance between adequacy, timeliness and cost. This paper will address the analyses and decisions the NASA Logistics engineers are making to mitigate risk while providing a reasonable testability solution for fault isolation.

  12. Holocene activity and seismogenic capability of intraplate thrusts: Insights from the Pampean Ranges, Argentina

    NASA Astrophysics Data System (ADS)

    Costa, Carlos H.; Owen, Lewis A.; Ricci, Walter R.; Johnson, William J.; Halperin, Alan D.

    2018-07-01

    Trench excavations across the El Molino fault in the southeastern Pampean Ranges of central-western Argentina have revealed a deformation zone composed of opposite-verging thrusts that deform a succession of Holocene sediments. The west-verging thrusts place Precambrian basement over Holocene proximal scarp-derived deposits, whereas the east-verging thrusts form an east-directed fault-propagation fold that deforms colluvium, fluvial and aeolian deposits. Ages for exposed fault-related deposits range from 7.1 ± 0.4 to 0.3 ka. Evidence of surface deformation suggests multiple rupture events with related scarp-derived deposits and a minimum of three surface ruptures younger than 7.1 ± 0.4 ka, the last rupture event being younger than 1 ka. Shortening rates of 0.7 ± 0.2 mm/a are nearly one order of magnitude higher than those estimated for the faults bounding neighboring crustal blocks and are considered high for this intraplate setting. These ground-rupturing crustal earthquakes are estimated to be of magnitude Mw ≥ 7.0, a significant discrepancy with the magnitudes Mw < 6.5 recorded in the seismic catalog of this region, which at present shows low to moderate seismicity. Results highlight the relevance of identifying primary surface ruptures as well as the seismogenic potential of thrust faults in seemingly stable continental interiors.

  13. Integrated Systems Health Management for Sustainable Habitats (Using Sustainability Base as a Testbed)

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.

    2017-01-01

    Habitation systems provide a safe place for astronauts to live and work in space and on planetary surfaces. They enable crews to live and work safely in deep space, and include integrated life support systems, radiation protection, fire safety, and systems to reduce logistics and the need for resupply missions. Innovative health management technologies are needed in order to increase the safety and mission-effectiveness for future space habitats on other planets, asteroids, or lunar surfaces. For example, off-nominal or failure conditions occurring in safety-critical life support systems may need to be addressed quickly by the habitat crew without extensive technical support from Earth due to communication delays. If the crew in the habitat must manage, plan and operate much of the mission themselves, operations support must be migrated from Earth to the habitat. Enabling monitoring, tracking, and management capabilities on-board the habitat and related EVA platforms for a small crew to use will require significant automation and decision support software. Traditional caution and warning systems are typically triggered by out-of-bounds sensor values, but can be enhanced by including machine learning and data mining techniques. These methods aim to reveal latent, unknown conditions while still retaining and improving the ability to provide highly accurate alerts for known issues. A few of these techniques are briefly described, along with performance targets for known faults and failures. Specific system health management capabilities required for habitat system elements (environmental control and life support systems, etc.) may include relevant subsystems such as water recycling systems, photovoltaic systems, electrical power systems, and environmental monitoring systems. Sustainability Base, the agency's flagship LEED Platinum-certified green building, acts as a living laboratory for testing advanced information and sustainable technologies and provides an opportunity to test novel machine learning and control capabilities. In this talk, key features of Sustainability Base that make it relevant to deep space habitat technology, and its use of the kinds of subsystems listed above, will be presented. Because all such systems require less power to support human occupancy, the building can serve as a testbed for deep space habitats that will need to operate within finite energy budgets.
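
    A small sketch contrasting a traditional out-of-bounds caution-and-warning check with a learned detector, using scikit-learn's IsolationForest as one example of the machine learning techniques mentioned; the channels, limits, and data are invented.

      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(1)
      # Two correlated telemetry channels, e.g., water-recycler flow and power.
      nominal = rng.multivariate_normal([5.0, 200.0], [[0.1, 2.0], [2.0, 100.0]], 500)

      # Out-of-bounds check: per-channel limits, blind to broken correlations.
      LIMITS = {"flow": (4.0, 6.0), "power": (170.0, 230.0)}
      def oob(sample):
          return not (LIMITS["flow"][0] <= sample[0] <= LIMITS["flow"][1]
                      and LIMITS["power"][0] <= sample[1] <= LIMITS["power"][1])

      model = IsolationForest(contamination=0.01, random_state=0).fit(nominal)

      # Each channel is within limits, but the flow/power relationship is broken:
      latent_fault = np.array([[4.2, 228.0]])
      print("out-of-bounds check:", oob(latent_fault[0]))          # False (missed)
      print("learned detector:", model.predict(latent_fault)[0])   # expect -1 (anomaly)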

  14. Research Supporting Satellite Communications Technology

    NASA Technical Reports Server (NTRS)

    Horan Stephen; Lyman, Raphael

    2005-01-01

    This report describes the second year of research effort under the grant Research Supporting Satellite Communications Technology. The research program consists of two major projects: Fault Tolerant Link Establishment and the design of an Auto-Configurable Receiver. The Fault Tolerant Link Establishment protocol is being developed to assist designers of satellite clusters in managing inter-satellite communications. During this second year, the basic protocol design was validated with an extensive testing program. After this testing was completed, a channel error model was added to the protocol so that the effects of channel errors could be measured; this error generation was used to test the effects of channel errors on Heartbeat and Token message passing. The C-language source code for the protocol modules was delivered to Goddard Space Flight Center for integration with the GSFC testbed. The need for a receiver autoconfiguration capability arises because, when a satellite-to-ground transmission is interrupted by an unexpected event, the satellite transponder may reset to an unknown state and begin transmitting in a new mode. During Year 2, we completed testing of these algorithms when noise-induced bit errors were introduced. We also developed and tested an algorithm for estimating the data rate, assuming an NRZ-formatted signal corrupted with additive white Gaussian noise, and we took initial steps in integrating both algorithms into the SDR test bed at GSFC.
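
    One illustration of the data-rate estimation problem (a sketch under stated assumptions, not necessarily the report's algorithm): NRZ levels change only at bit boundaries, so with equiprobable bits roughly half the boundaries carry a transition, and at moderate SNR counting large steps in the sampled waveform estimates the bit rate.

      import numpy as np

      fs, rbit, nbits = 100_000.0, 2_000.0, 4_000     # Hz, Hz, count (all invented)
      sps = int(fs / rbit)                            # samples per bit
      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, nbits)
      sig = np.repeat(2.0 * bits - 1.0, sps)          # NRZ waveform at +/-1
      sig += rng.normal(0.0, 0.1, sig.size)           # additive white Gaussian noise

      # Transitions produce steps of magnitude ~2; with equiprobable bits the
      # transition rate is rbit/2, so bit rate = 2 * (transition count) / time.
      transitions = (np.abs(np.diff(sig)) > 1.0).sum()   # moderate-SNR threshold
      est = 2.0 * transitions / (sig.size / fs)
      print(f"estimated bit rate: {est:.0f} Hz (true {rbit:.0f})")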

  15. A Conceptual Design for a Reliable Optical Bus (ROBUS)

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Malekpour, Mahyar; Torres, Wilfredo

    2002-01-01

    The Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) is a new family of fault-tolerant architectures under development at NASA Langley Research Center (LaRC). SPIDER is a general-purpose computational platform suitable for use in ultra-reliable embedded control applications. The design scales from a small configuration supporting a single aircraft function to a large distributed configuration capable of supporting several functions simultaneously. SPIDER consists of a collection of simplex processing elements communicating via a Reliable Optical Bus (ROBUS). The ROBUS is an ultra-reliable, time-division multiple access broadcast bus with strictly enforced write access (no babbling idiots), providing basic fault-tolerant services through formally verified fault-tolerance protocols, including Interactive Consistency (Byzantine Agreement), Internal Clock Synchronization, and Distributed Diagnosis. This paper presents the conceptual design of the ROBUS, including requirements, topology, protocols, and the block-level design. Verification activities, including the use of formal methods, are also discussed.
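
    The Interactive Consistency service can be illustrated with the classic single-round oral-messages algorithm OM(1) of Lamport, Shostak, and Pease, on which the Byzantine-agreement literature behind buses like ROBUS builds. The sketch below is illustrative, not the formally verified ROBUS protocol: with at least four participants, the honest lieutenants reach agreement even if the source sends them conflicting values.

```python
from collections import Counter

def om1_decide(direct, relayed):
    """One lieutenant's decision in OM(1): majority vote over the value
    received directly from the source and the copies relayed by the other
    lieutenants. With n >= 4 nodes this masks one Byzantine participant."""
    return Counter([direct] + list(relayed)).most_common(1)[0][0]

# A faulty source sends conflicting values to three (honest) lieutenants.
sent = {"L1": 1, "L2": 0, "L3": 1}        # what the source told each lieutenant
decisions = {
    me: om1_decide(sent[me], [sent[other] for other in sent if other != me])
    for me in sent                        # each lieutenant relays its copy
}
print(decisions)                          # {'L1': 1, 'L2': 1, 'L3': 1} -- agreement
```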

  16. FERMILAB CRYOMODULE TEST STAND RF INTERLOCK SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petersen, Troy; Diamond, J. S.; McDowell, D.

    2016-10-12

    An interlock system has been designed for the Fermilab Cryomodule Test Stand (CMTS), a test bed for the cryomodules to be used in the upcoming Linac Coherent Light Source II (LCLS-II) project at SLAC. The interlock system features 8 independent subsystems, one per superconducting RF cavity and solid state amplifier (SSA) pair. Each system monitors several devices to detect fault conditions such as arcing in the waveguides or quenching of the SRF system. Additionally, each system can detect fault conditions by monitoring the RF power seen at the cavity coupler through a directional coupler. In the event of a fault condition, each system is capable of removing the RF signal to the amplifier (via a fast RF switch) as well as turning off the SSA. Additionally, each input signal is available for remote viewing and recording via a Fermilab-designed digitizer board and an MVME 5500 processor.
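
    A minimal sketch of the per-cavity trip logic is shown below. The device interfaces (rf_switch, ssa), fault inputs, and reflected-power threshold are illustrative assumptions rather than CMTS values; the sketch only captures the shape of the decision: any direct fault signal, or abnormal power at the coupler, opens the fast RF switch and then shuts down the SSA.

```python
from dataclasses import dataclass

@dataclass
class CavityInputs:
    waveguide_arc: bool          # optical arc detector in the waveguide
    cavity_quench: bool          # quench detection on the SRF cavity
    reflected_power_w: float     # measured through the directional coupler

REFLECTED_LIMIT_W = 500.0        # illustrative trip threshold, not a CMTS value

def faulted(inp: CavityInputs) -> bool:
    """Trip on a direct fault signal or abnormal RF power at the coupler."""
    return (inp.waveguide_arc
            or inp.cavity_quench
            or inp.reflected_power_w > REFLECTED_LIMIT_W)

def interlock_scan(inp: CavityInputs, rf_switch, ssa):
    """On any fault, open the fast RF switch first (fastest path to a safe
    state), then command the solid-state amplifier off."""
    if faulted(inp):
        rf_switch.open()
        ssa.turn_off()

class _Demo:                     # stand-in hardware interface for the demo
    def open(self): print("fast RF switch opened")
    def turn_off(self): print("SSA off")

interlock_scan(CavityInputs(waveguide_arc=True, cavity_quench=False,
                            reflected_power_w=20.0),
               rf_switch=_Demo(), ssa=_Demo())
```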

  17. Predicted performance of an integrated modular engine system

    NASA Technical Reports Server (NTRS)

    Binder, Michael; Felder, James L.

    1993-01-01

    Space vehicle propulsion systems are traditionally composed of a cluster of discrete engines, each with its own set of turbopumps, valves, and a thrust chamber. The Integrated Modular Engine (IME) concept instead proposes a vehicle propulsion system composed of multiple turbopumps, valves, and thrust chambers that are all interconnected. The IME concept has potential advantages in fault tolerance, weight, and operational efficiency compared with the traditional clustered engine configuration. The purpose of this study is to examine the steady-state performance of an IME system with various components removed to simulate fault conditions. An IME configuration for a hydrogen/oxygen expander-cycle propulsion system with four sets of turbopumps and eight thrust chambers has been modeled using the Rocket Engine Transient Simulator (ROCETS) program. The nominal steady-state performance is simulated, as well as turbopump, thrust chamber, and duct failures. The impact of component failures on system performance is discussed in the context of the system's fault-tolerant capabilities.
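
    A toy flow-balance model conveys why the interconnected arrangement degrades gracefully; it is a deliberately crude sketch, not the ROCETS simulation. Assume each of the four pump sets can supply 30% of nominal manifold flow and each of the eight chambers draws 12.5%, so the surviving hardware simply shares the load.

```python
def ime_thrust(pumps_ok, chambers_ok, pump_capacity=0.30, chamber_demand=0.125):
    """Fraction of nominal thrust for an interconnected (IME) system: the
    shared manifold delivers min(total pump capacity, total chamber demand)."""
    supply = pumps_ok * pump_capacity        # each pump: 30% of nominal flow
    demand = chambers_ok * chamber_demand    # each chamber: 12.5% of nominal
    return min(supply, demand)

for pumps, chambers in [(4, 8), (3, 8), (4, 7), (2, 8)]:
    print(f"{pumps} pumps, {chambers} chambers -> "
          f"{ime_thrust(pumps, chambers):.0%} of nominal thrust")
```

    In this toy model, losing one of four pump sets costs only 10% of thrust, whereas in a comparable four-engine cluster the same failure would take an entire engine, and 25% of thrust, offline.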

  18. Fault Diagnosis approach based on a model-based reasoner and a functional designer for a wind turbine. An approach towards self-maintenance

    NASA Astrophysics Data System (ADS)

    Echavarria, E.; Tomiyama, T.; van Bussel, G. J. W.

    2007-07-01

    The objective of this ongoing research is to develop a design methodology that increases the availability of offshore wind farms by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or its subsystems, without increasing service visits, complexity, or costs. The idea is to exploit the existing functional redundancies within the system and its subsystems to keep the wind turbine operational, even at reduced capacity if necessary. Reconfiguration is intended to be a built-in capability used as a repair strategy, based on the existing functionality provided by the components. Possible solutions range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, or changing parameters or the control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system built on a model-based reasoner (MBR) and a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on this reconfiguration concept to achieve self-maintained wind turbines is a promising approach to reducing stoppage rates, failure events, and maintenance visits, and to maintaining energy output, possibly at a reduced rate, until the next scheduled maintenance.
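
    The fragment below caricatures the MBR/FRD pairing: each component carries a small behavioural model, a mismatch between predicted and observed behaviour isolates the faulty component, and a redundancy table proposes a reconfiguration. The component models, thresholds, and remedies are invented for illustration and are far simpler than a real function-behaviour-state model.

```python
EXPECTED = {
    # component -> predicate over observations that holds while it is healthy
    "anemometer":   lambda obs: abs(obs["wind_local"] - obs["wind_neighbor"]) < 3.0,
    "pitch_system": lambda obs: abs(obs["pitch_act_deg"] - obs["pitch_cmd_deg"]) < 1.0,
}

REDUNDANCY = {
    # faulty component -> functional redundancy that can stand in for it
    "anemometer":   "use wind speed/direction reported by adjacent turbines",
    "pitch_system": "derate via generator torque control until the next visit",
}

def diagnose_and_reconfigure(obs):
    """Model-based isolation: any component whose predicted behaviour
    disagrees with the observations is suspect; propose its fallback."""
    return {c: REDUNDANCY[c] for c, healthy in EXPECTED.items() if not healthy(obs)}

obs = {"wind_local": 2.1, "wind_neighbor": 11.4,   # implausible local reading
       "pitch_cmd_deg": 10.0, "pitch_act_deg": 10.3}
print(diagnose_and_reconfigure(obs))
# -> {'anemometer': 'use wind speed/direction reported by adjacent turbines'}
```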

  19. Steam generator tubes integrity: In-service-inspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comby, R.J.

    1997-02-01

    The author's approach to tube integrity is in terms of looking for flaws in tubes. The basis for this approach is that no simple, universal inspection methodology can be fixed, because practice varies with experience, leak acceptance criteria, leak-before-break approaches, and so on. Flaw-specific management is probably the most reliable approach, as a compromise among safety, availability, and economic concerns. In that case, NDE capabilities have to match the information required by the structural integrity demonstration. The author discusses the types of probes that can be used to search for flaws, in addition to the types of flaws being sought, with examples from specific analysis experience. The author also discusses the issue of a reporting level as it relates to avoiding false calls, classifying faults, and allowing for automation in analysis.
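
    The reporting-level trade-off can be made concrete numerically: sweeping a threshold over assumed indication-amplitude distributions shows how raising the level suppresses false calls at the cost of detection probability. The Gaussian distributions below are illustrative assumptions, not measured eddy-current data.

```python
import numpy as np

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 1.0, 100_000)   # indication amplitudes from clean tubes
flaw  = rng.normal(4.0, 1.5, 100_000)   # indication amplitudes from real flaws

for level in (1.5, 2.5, 3.5):
    false_calls = np.mean(noise > level)       # healthy tubes flagged anyway
    detection   = np.mean(flaw > level)        # real flaws actually reported
    print(f"reporting level {level}: detection {detection:.1%}, "
          f"false calls {false_calls:.2%}")
```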

  20. Emergent Adaptive Noise Reduction from Communal Cooperation of Sensor Grid

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Jones, Michael G.; Nark, Douglas M.; Lodding, Kenneth N.

    2010-01-01

    In the last decade, the advent of small, inexpensive, and powerful devices combining sensors, computers, and wireless communication has promised massive sensor networks, densely deployed over large areas, capable of high-fidelity situational assessment. However, most management models have been based on centralized control, and research has concentrated on methods for passing data from sensor devices to the central controller. Most implementations have been small; because centralized control does not scale, that methodology is insufficient for massive deployments. Here, a specific application of a large sensor network for adaptive noise reduction demonstrates a new paradigm in which communities of sensor/computer devices assess local conditions and make local decisions from which a global behaviour emerges. This approach obviates many of the problems of centralized control: it has no single point of failure and is more scalable, efficient, robust, and fault tolerant.
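
    A minimal sketch of the local-cooperation idea, assuming a simple ring of devices: each node repeatedly averages its estimate with its immediate neighbours only, yet the whole grid converges on a common value with no central controller and no single point of failure. This consensus-averaging toy stands in for, and is much simpler than, the emergent noise-reduction behaviour studied in the paper.

```python
import numpy as np

def local_consensus(readings, rounds=50):
    """Each node in a ring updates from its two neighbours only; a global
    agreement emerges without any node holding the full picture."""
    x = np.array(readings, dtype=float)
    for _ in range(rounds):
        x = (np.roll(x, 1) + x + np.roll(x, -1)) / 3.0
    return x

readings = [3.1, 2.9, 3.4, 8.0, 3.0, 2.8]      # one node reports an outlier
print(local_consensus(readings).round(2))      # all nodes settle near the mean
```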
