Fault management for data systems
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann
1993-01-01
Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
Automated Generation of Fault Management Artifacts from a Simple System Model
NASA Technical Reports Server (NTRS)
Kennedy, Andrew K.; Day, John C.
2013-01-01
Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), by querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior and depend less on the intuition of fault management engineers in ensuring complete examination of off-nominal behavior.
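As an illustration of the traversal idea, the sketch below generates FMEA rows from a toy component model. It is a minimal, assumption-laden sketch in Python, not the authors' SMAP tooling: the component names, failure modes, and the `feeds` propagation relation are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    failure_modes: list
    feeds: list = field(default_factory=list)  # downstream components

def fmea_rows(components):
    """Walk each component's failure modes and follow 'feeds' links
    to list downstream effects, emitting one FMEA row per mode."""
    rows = []
    for c in components:
        for mode in c.failure_modes:
            rows.append({"item": c.name,
                         "failure_mode": mode,
                         "local_effect": f"loss of {c.name} function",
                         "next_level_effect": [d.name for d in c.feeds] or ["none identified"]})
    return rows

battery = Component("battery", ["cell short", "open circuit"])
bus = Component("power bus", ["overvoltage"])
radio = Component("radio", ["no downlink"])
battery.feeds = [bus]
bus.feeds = [radio]

for row in fmea_rows([battery, bus, radio]):
    print(row)
```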
Formal Validation of Fault Management Design Solutions
NASA Technical Reports Server (NTRS)
Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John
2013-01-01
The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
Flight elements: Fault detection and fault management
NASA Technical Reports Server (NTRS)
Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.
1990-01-01
Fault management for an intelligent computational system must be developed using a top-down integrated engineering approach. The proposed approach includes integrating the overall environment involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. Implementation of the concept to achieve a real-time intelligent fault detection and management system will be accomplished via several objectives: development of fault-tolerant/FDIR requirements and specifications at the systems level, carried through from conceptual design to implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and reduction of development and operations costs through the implementation of an intelligent real-time fault detection and fault management system and an information management system.
NASA Technical Reports Server (NTRS)
Rogers, William H.; Schutte, Paul C.
1993-01-01
Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.
On-board fault management for autonomous spacecraft
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne
1991-01-01
The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.
Comprehensive Fault Tolerance and Science-Optimal Attitude Planning for Spacecraft Applications
NASA Astrophysics Data System (ADS)
Nasir, Ali
Spacecraft operate in a harsh environment, are costly to launch, and experience unavoidable communication delay and bandwidth constraints. These factors motivate the need for effective onboard mission and fault management. This dissertation presents an integrated framework to optimize science goal achievement while identifying and managing encountered faults. Goal-related tasks are defined by pointing the spacecraft instrumentation toward distant targets of scientific interest. The relative value of science data collection is traded with risk of failures to determine an optimal policy for mission execution. Our major innovation in fault detection and reconfiguration is to incorporate fault information obtained from two types of spacecraft models: one based on the dynamics of the spacecraft and the second based on the internal composition of the spacecraft. For fault reconfiguration, we consider possible changes in both dynamics-based control law configuration and the composition-based switching configuration. We formulate our problem as a stochastic sequential decision problem or Markov Decision Process (MDP). To avoid the computational complexity involved in a fully-integrated MDP, we decompose our problem into multiple MDPs. These MDPs include planning MDPs for different fault scenarios, a fault detection MDP based on a logic-based model of spacecraft component and system functionality, an MDP for resolving conflicts between fault information from the logic-based model and the dynamics-based spacecraft models, and the reconfiguration MDP that generates a policy optimized over the relative importance of the mission objectives versus spacecraft safety. Approximate Dynamic Programming (ADP) methods for the decomposition of the planning and fault detection MDPs are applied. To show the performance of the MDP-based frameworks and ADP methods, a suite of spacecraft attitude planning case studies is described. These case studies are used to analyze the content and behavior of computed policies in response to changes in design parameters. A primary case study is built from the Far Ultraviolet Spectroscopic Explorer (FUSE) mission, for which component models and their probabilities of failure are based on realistic mission data. A comparison of our approach with an alternative framework for spacecraft task planning and fault management is presented in the context of the FUSE mission.
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John C.; Donahue, Kenneth; Ingham, Michel; Kadesch, Alex; Kennedy, Andrew K.; Post, Ethan
2012-01-01
Specification and development of fault management functionality in systems is performed in an ad hoc way - more of an art than a science. Improvements to system reliability, availability, safety and resilience will be limited without infusion of additional formality into the practice of fault management. Key to the formalization of fault management is a precise representation of off-nominal behavior. Using the upcoming Soil Moisture Active-Passive (SMAP) mission for source material, we have modeled the off-nominal behavior of the SMAP system during its initial spin-up activity, using the System Modeling Language (SysML). In the course of developing these models, we have developed generic patterns for capturing off-nominal behavior in SysML. We show how these patterns provide useful ways of reasoning about the system (e.g., checking for completeness and effectiveness) and allow the automatic generation of typical artifacts (e.g., success trees and FMECAs) used in system analyses.
A structural model decomposition framework for systems health management
NASA Astrophysics Data System (ADS)
Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
A Structural Model Decomposition Framework for Systems Health Management
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino
2013-01-01
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
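A minimal sketch of the decomposition step, under assumed structure (not the paper's algorithm): starting from a variable of interest, pull in only the equations needed to compute it, treating measured variables as known local inputs. The three-tank-style equation structure below is illustrative.

```python
def extract_submodel(equations, measured, target):
    """equations: {name: (output_var, set_of_input_vars)}.
    Returns the minimal set of equations needed to compute `target`
    given the `measured` variables as known inputs."""
    producers = {out: (name, ins) for name, (out, ins) in equations.items()}
    submodel, stack = set(), [target]
    while stack:
        var = stack.pop()
        if var in measured or var not in producers:
            continue  # measured vars (and true inputs) need no equation
        name, ins = producers[var]
        if name not in submodel:
            submodel.add(name)
            stack.extend(ins)
    return submodel

# Toy three-tank structure: h1..h3 are levels, q12/q23 inter-tank flows, u input.
eqs = {"e1": ("h1", {"u", "q12"}),
       "e2": ("h2", {"q12", "q23"}),
       "e3": ("h3", {"q23"}),
       "e4": ("q12", {"h1", "h2"}),
       "e5": ("q23", {"h2", "h3"})}
print(extract_submodel(eqs, measured={"u", "h2"}, target="h3"))  # {'e3', 'e5'}
```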
Fault management for the Space Station Freedom control center
NASA Technical Reports Server (NTRS)
Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet
1992-01-01
This paper describes model based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
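The digraph isolation step can be illustrated with a small sketch (hypothetical component names; not the NASA digraph evaluation tool): a candidate failure source is any node whose downstream reachable set covers every observed fault indication.

```python
from collections import defaultdict

# Toy cause -> effect digraph; names are hypothetical.
edges = [("pump_degraded", "low_pressure"), ("low_pressure", "low_flow"),
         ("valve_stuck", "low_flow"), ("low_flow", "high_temp")]
graph = defaultdict(list)
for cause, effect in edges:
    graph[cause].append(effect)

def reachable(node):
    """All downstream effects of a node (depth-first traversal)."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def candidate_sources(observed):
    """Nodes whose downstream effects cover every observed indication."""
    nodes = set(graph) | {e for es in list(graph.values()) for e in es}
    return sorted(n for n in nodes if observed <= reachable(n))

print(candidate_sources({"low_flow", "high_temp"}))
# -> ['low_pressure', 'pump_degraded', 'valve_stuck']
```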
Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon
2009-01-01
Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
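A hedged sketch of the kind of fault injection described, in Python with NumPy; the fault magnitudes, onset index, and the stuck-at realization of "dropout" are illustrative assumptions, not values from the study.

```python
import numpy as np

def inject_fault(signal, kind, onset, magnitude):
    """Overlay a simulated sensor fault on a clean signal from `onset` on."""
    out = signal.astype(float).copy()
    if kind == "bias":
        out[onset:] += magnitude
    elif kind == "drift":
        out[onset:] += magnitude * np.arange(len(out) - onset)  # linear ramp
    elif kind == "scaling":
        out[onset:] *= magnitude
    elif kind == "dropout":
        out[onset:] = out[onset - 1]  # stuck at the last good value
    else:
        raise ValueError(f"unknown fault kind: {kind}")
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 6.0, 200)) + 0.05 * rng.standard_normal(200)
faulty = inject_fault(clean, "drift", onset=120, magnitude=0.01)
```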
Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time
NASA Technical Reports Server (NTRS)
Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan
2012-01-01
Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way to perform both upstream root cause analysis (RCA) and downstream effects prediction (impact analysis). The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
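The two directions of analysis pair naturally as graph traversals. A minimal sketch using networkx on a toy causal graph (component and event names are hypothetical, not the NASA SSC models): root cause analysis walks upstream from a triggered event, impact analysis walks downstream from a confirmed fault.

```python
import networkx as nx

# Toy causal model; component and event names are hypothetical.
g = nx.DiGraph([("valve_stuck", "low_fuel_flow"),
                ("sensor_bias", "chamber_pressure_drop"),
                ("low_fuel_flow", "chamber_pressure_drop"),
                ("chamber_pressure_drop", "abort_trigger")])

event = "chamber_pressure_drop"
print("possible root causes:", nx.ancestors(g, event))   # upstream RCA
print("downstream impacts:", nx.descendants(g, event))   # impact analysis
```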
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Model Transformation for a System of Systems Dependability Safety Case
NASA Technical Reports Server (NTRS)
Murphy, Judy; Driskell, Stephen B.
2010-01-01
Software plays an increasingly large role in all aspects of NASA's science missions. This has been extended to the identification, management and control of faults which affect safety-critical functions and, by default, the overall success of the mission. Traditionally, the analyses of fault identification, management and control are hardware based. Due to the increasing complexity of systems, there has been a corresponding increase in the complexity of fault management software. The NASA Independent Verification & Validation (IV&V) program is creating processes and procedures to identify and incorporate safety-critical software requirements along with corresponding software faults so that potential hazards may be mitigated. This "Specific to Generic ... A Case for Reuse" paper describes the phases of a dependability and safety study which identifies a new process to create a foundation for reusable assets. These assets support the identification and management of specific software faults and their transformation from specific to generic software faults. This approach also has applications to other systems outside of the NASA environment. This paper addresses how a mission-specific dependability and safety case is being transformed to a generic dependability and safety case which can be reused for any type of space mission, with an emphasis on software fault conditions.
Technologies for unattended network operations
NASA Technical Reports Server (NTRS)
Jaworski, Allan; Odubiyi, Jide; Holdridge, Mark; Zuzek, John
1991-01-01
The necessary network management functions for a telecommunications, navigation and information management (TNIM) system in the framework of an extension of the ISO model for communications network management are described. Various technologies that could substantially reduce the need for TNIM network management, automate manpower intensive functions, and deal with synchronization and control at interplanetary distances are presented. Specific technologies addressed include the use of the ISO Common Management Interface Protocol, distributed artificial intelligence for network synchronization and fault management, and fault-tolerant systems engineering.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Gas Path On-line Fault Diagnostics Using a Nonlinear Integrated Model for Gas Turbine Engines
NASA Astrophysics Data System (ADS)
Lu, Feng; Huang, Jin-quan; Ji, Chun-sheng; Zhang, Dong-dong; Jiao, Hua-bin
2014-08-01
Gas path fault diagnosis is a closely related technology that assists operators in managing gas turbine engine units. However, gradual performance degradation is inevitable with usage, and it results in model mismatch and subsequent misdiagnosis by the popular model-based approaches. In this paper, an on-line integrated architecture based on a nonlinear model is developed for gas turbine engine anomaly detection and fault diagnosis over the course of the engine's life. The architecture employs two engine models with different performance parameter update rates. One is a nonlinear real-time adaptive performance model with the spherical square-root unscented Kalman filter (SSR-UKF) producing performance estimates, and the other is a nonlinear baseline model for the measurement estimates. The fault detection and diagnosis logic is designed to discriminate sensor faults from component faults. This integrated architecture is not only aware of long-term engine health degradation but is also effective in detecting gas path performance anomaly shifts while the engine continues to degrade. The benefits of the proposed approach over the existing architecture are investigated through experiment and analysis.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei
2015-10-01
In this paper, an adaptive fault diagnostics model for avionics, oriented toward integrated system health management, is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnostics has become an extremely complicated task. In the proposed fault diagnostic system, specific approaches, such as the artificial immune system, the intelligent agents system and the Dempster-Shafer evidence theory, are used to conduct deep avionics fault diagnostics. Through this proposed fault diagnostic system, efficient and accurate diagnostics can be achieved. A numerical example applies the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and illustrates that the proposed system and model have the ability to achieve efficient and accurate fault diagnostics. By analyzing the diagnostic system's feasibility and pragmatics, the advantages of this system are demonstrated.
Modeling Off-Nominal Behavior in SysML
NASA Technical Reports Server (NTRS)
Day, John; Donahue, Kenny; Ingham, Mitch; Kadesch, Alex; Kennedy, Kit; Post, Ethan
2012-01-01
Fault Management is an essential part of the system engineering process that is limited in its effectiveness by the ad hoc nature of the applied approaches and methods. Providing a rigorous way to develop and describe off-nominal behavior is a necessary step in the improvement of fault management, and as a result, will enable safe, reliable and available systems even as system complexity increases. The basic concepts described in this paper provide a foundation to build a larger set of necessary concepts and relationships for precise modeling of off-nominal behavior, and a basis for incorporating these ideas into the overall systems engineering process. The simple FMEA example provided applies the modeling patterns we have developed and illustrates how the information in the model can be used to reason about the system and derive typical fault management artifacts. A key insight from the FMEA work was the utility of defining failure modes as the "inverse of intent", and deriving this from the behavior models. Additional work is planned to extend these ideas and capabilities to other types of relevant information and additional products.
Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures
NASA Technical Reports Server (NTRS)
Poll, Scott; Iverson, David; Patterson-Hine, Ann
2003-01-01
Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach for Polymer Electrolyte Membrane Fuel Cell (PEMFC) system fault detection and isolation through a model-based diagnostic algorithm. The considered algorithm is developed upon a lumped parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis, to identify the correlations among involved physical variables, defined equations and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves univocal fault isolation.
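One common way to realize the isolation step is a fault signature matrix over residuals; the sketch below is illustrative only, since the paper derives its residuals from structural analysis of a PEMFC model that is not reproduced here. The residual names, fault names, and threshold are assumptions.

```python
# Which faults each residual is structurally sensitive to (assumed matrix).
signatures = {"r1": {"f_coolant", "f_stack"},
              "r2": {"f_stack"},
              "r3": {"f_coolant", "f_sensor"}}

def isolate(residuals, threshold=1.0):
    """Match the set of fired residuals against each fault's signature column."""
    fired = {r for r, value in residuals.items() if abs(value) > threshold}
    faults = set().union(*signatures.values())
    return sorted(f for f in faults
                  if {r for r, sens in signatures.items() if f in sens} == fired)

print(isolate({"r1": 2.3, "r2": 1.8, "r3": 0.2}))  # -> ['f_stack']
```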
Reliability of Fault Tolerant Control Systems. Part 2
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2000-01-01
This paper reports Part II of a two-part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and high risks associated with decisions in redundancy management due to multiple sources of uncertainties and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of a superior robustness, with an estimator of a higher resolution, and with a control performance requirement of a lesser stringency.
Failure detection and fault management techniques for flush airdata sensing systems
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft indicates the parameters which are applied to the regression algorithms. The sensing techniques are applied to the F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
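A minimal sketch of the chi-squared test at the heart of the detection scheme (illustrative noise level, false-alarm rate, and pressure values; the HI-FADS implementation details differ):

```python
import numpy as np
from scipy import stats

def chi2_fault_check(measured, predicted, sigma, alpha=0.01):
    """Flag a failure when the normalized residual sum-of-squares exceeds
    the chi-squared threshold for the chosen false-alarm rate."""
    residuals = (np.asarray(measured) - np.asarray(predicted)) / sigma
    statistic = float(np.sum(residuals ** 2))
    threshold = stats.chi2.ppf(1.0 - alpha, df=len(residuals))
    return statistic, threshold, statistic > threshold

measured = [101.2, 99.8, 100.5, 108.9]      # one suspect port reading
predicted = [100.0, 100.0, 100.0, 100.0]    # model-predicted pressures
print(chi2_fault_check(measured, predicted, sigma=1.0))
```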
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity
NASA Astrophysics Data System (ADS)
Healy, D.; Rizzo, R. E.
2015-12-01
Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Deformation in the Earth, by contrast, is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from that of conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns in comparison to conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.
TWT transmitter fault prediction based on ANFIS
NASA Astrophysics Data System (ADS)
Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen
2017-11-01
Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the TWT cathode is a common transmitter fault. In this paper, a model based on a set of key TWT parameters is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.
Managing Space System Faults: Coalescing NASA's Views
NASA Technical Reports Server (NTRS)
Muirhead, Brian; Fesq, Lorraine
2012-01-01
Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts simply to name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.
Multi-Agent Diagnosis and Control of an Air Revitalization System for Life Support in Space
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Kowing, Jeffrey; Nieten, Joseph; Graham, Jeffrey S.; Schreckenghost, Debra; Bonasso, Pete; Fleming, Land D.; MacMahon, Matt; Thronesbery, Carroll
2000-01-01
An architecture of interoperating agents has been developed to provide control and fault management for advanced life support systems in space. In this adjustable autonomy architecture, software agents coordinate with human agents and provide support in novel fault management situations. This architecture combines the Livingstone model-based mode identification and reconfiguration (MIR) system with the 3T architecture for autonomous flexible command and control. The MIR software agent performs model-based state identification and diagnosis. MIR identifies novel recovery configurations and the set of commands required for the recovery. The 3T procedural executive and the human operator use the diagnoses and recovery recommendations, and provide command sequencing. User interface extensions have been developed to support human monitoring of both 3T and MIR data and activities. This architecture has been demonstrated performing control and fault management for an oxygen production system for air revitalization in space. The software operates in a dynamic simulation testbed.
Updating the USGS seismic hazard maps for Alaska
Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.
2015-01-01
The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
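The combination step the abstract describes (sources plus ground-motion estimates) can be sketched as the standard hazard sum: the annual rate of exceeding a ground-motion level x is the sum over sources of each source's event rate times the probability that an event exceeds x. The rates and lognormal ground-motion parameters below are illustrative, not values from the USGS Alaska model.

```python
import numpy as np
from scipy import stats

# Each source: annual event rate and a lognormal site ground-motion model
# (median PGA in g, log-standard-deviation). All values are illustrative.
sources = [{"rate": 0.02, "median": 0.15, "beta": 0.6},   # megathrust segment
           {"rate": 0.10, "median": 0.05, "beta": 0.7}]   # areal seismicity

def annual_exceedance(x):
    """lambda(PGA > x): sum over sources of rate * P(PGA > x | event)."""
    return sum(s["rate"] * stats.lognorm.sf(x, s["beta"], scale=s["median"])
               for s in sources)

for pga in (0.05, 0.1, 0.2, 0.4):
    lam = annual_exceedance(pga)
    print(f"PGA > {pga:.2f} g: {lam:.5f}/yr, "
          f"50-yr probability {1.0 - np.exp(-50.0 * lam):.3f}")
```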
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1992-01-01
Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.
Reliability of Fault Tolerant Control Systems. Part 1
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports Part I of a two-part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
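The affine-in-coverage behavior has a compact worked example: a duplex system where a covered first failure reconfigures to a simplex and an uncovered one is an immediate system loss. The failure rate and coverage values below are toy numbers, not the paper's.

```python
import numpy as np

LAM = 1e-4  # assumed per-hour failure rate of each of the two units

def duplex_reliability(c, t):
    """Duplex system with coverage c: a covered first failure degrades to
    simplex; an uncovered one fails the system. Closed-form Markov solution:
    R(t) = exp(-2*lam*t) + 2*c*(exp(-lam*t) - exp(-2*lam*t)),
    which is affine in c, matching the early-life behavior the paper notes."""
    return np.exp(-2 * LAM * t) + 2 * c * (np.exp(-LAM * t) - np.exp(-2 * LAM * t))

for coverage in (0.90, 0.99, 0.999):
    print(coverage, duplex_reliability(coverage, t=1000.0))
```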
Building a risk-targeted regional seismic hazard model for South-East Asia
NASA Astrophysics Data System (ADS)
Woessner, J.; Nyst, M.; Seyhan, E.
2015-12-01
The last decade has tragically shown the social and economic vulnerability of countries in South-East Asia to earthquake hazard and risk. While many disaster mitigation programs and initiatives to improve societal earthquake resilience are under way with the focus on saving lives and livelihoods, the risk management sector is challenged to develop appropriate models to cope with the economic consequences and impact on the insurance business. We present the source model and ground motion model components suitable for a South-East Asia earthquake risk model covering Indonesia, Malaysia, the Philippines and the countries of Indochina. The source model builds upon refined modelling approaches to characterize seismic activity 1) on crustal faults from geologic and geodetic data, 2) along the interface of subduction zones and within the slabs, and 3) from earthquakes not occurring on mapped fault structures. We elaborate on building a self-consistent rate model for the hazardous crustal fault systems (e.g. the Sumatra fault zone, the Philippine fault zone) as well as the subduction zones, and showcase some characteristics and sensitivities, due to existing uncertainties, in the rate and hazard space using a well-selected suite of ground motion prediction equations. Finally, we analyze the source model by quantifying the contribution by source type (e.g., subduction zone, crustal fault) to typical risk metrics (e.g., return period losses, average annual loss) and reviewing their relative impact on various lines of business.
Artificial neural network application for space station power system fault diagnosis
NASA Technical Reports Server (NTRS)
Momoh, James A.; Oliver, Walter E.; Dias, Lakshman G.
1995-01-01
This study presents a methodology for fault diagnosis using a Two-Stage Artificial Neural Network Clustering Algorithm. Previously, SPICE models of a 5-bus DC power distribution system with assumed constant output power during contingencies from the DDCU were used to evaluate the ANN's fault diagnosis capabilities. This on-going study uses EMTP models of the components (distribution lines, SPDU, TPDU, loads) and power sources (DDCU) of Space Station Alpha's electrical Power Distribution System as a basis for the ANN fault diagnostic tool. The results from the two studies are contrasted. In the event of a major fault, ground controllers need the ability to identify the type of fault, isolate the fault to the orbital replaceable unit level and provide the necessary information for the power management expert system to optimally determine a degraded-mode load schedule. To accomplish these goals, the electrical power distribution system's architecture can be subdivided into three major classes: DC-DC converter to loads, DC Switching Unit (DCSU) to Main Bus Switching Unit (MBSU), and power sources to DCSU. Each class, which has its own electrical characteristics and operations, requires a unique fault analysis philosophy. This study identifies these philosophies as Riddles 1, 2 and 3, respectively. The results of the on-going study address Riddle 1. It is concluded in this study that the combination of the EMTP models of the DDCU, distribution cables and electrical loads yields a more accurate model of the behavior and, in addition, yields more accurate fault diagnosis using the ANN than the results obtained with the SPICE models.
Numerical modeling of fluid flow in a fault zone: a case study from Majella Mountain (Italy).
NASA Astrophysics Data System (ADS)
Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.
2017-04-01
The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired in Majella Mountain, in the Central Apennines (Italy). This fault zone is considered a good analogue given the massive presence of fluid migration in the form of tar. Faults are mechanical features and cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petro-physical characteristics). In our model, the damage zones of the fault are described as a discretely fractured medium, while the core of the fault is described as a porous one. Our model utilizes the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL), that generates a three-dimensional Discrete Fracture Network (DFN) of the damage zones of the fault and characterizes its hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum one of the core. The field investigations and the basic computational workflow are described, along with preliminary results of fluid flow simulation at the scale of the fault.
QuakeSim: a Web Service Environment for Productive Investigations with Earth Surface Sensor Data
NASA Astrophysics Data System (ADS)
Parker, J. W.; Donnellan, A.; Granat, R. A.; Lyzenga, G. A.; Glasscoe, M. T.; McLeod, D.; Al-Ghanmi, R.; Pierce, M.; Fox, G.; Grant Ludwig, L.; Rundle, J. B.
2011-12-01
The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations are also being incorporated in the web environment based on Virtual California. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.
Automatic Fault Characterization via Abnormality-Enhanced Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Laguna, I; de Supinski, B R
Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig
2017-01-01
This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
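A minimal sketch of the probabilistic summing described above, with an invented structure: per-failure loop metrics are multiplied along the detect/isolate/respond chain and weighted by the relative likelihood of each failure. The metric decomposition and all numbers are assumptions for illustration, not the paper's definitions.

```python
# Relative likelihood of each failure, with conditional probabilities of
# detection, isolation given detection, and successful response given
# isolation. All names and numbers are invented for illustration.
failures = [
    {"name": "sensor stuck",  "p": 0.50, "det": 0.99, "iso": 0.95, "resp": 0.90},
    {"name": "thruster leak", "p": 0.30, "det": 0.90, "iso": 0.80, "resp": 0.85},
    {"name": "cpu latch-up",  "p": 0.20, "det": 0.95, "iso": 0.99, "resp": 0.98},
]

# Probability that, given some failure occurs, the fault management loops
# carry it through detection, isolation, and response successfully.
effectiveness = sum(f["p"] * f["det"] * f["iso"] * f["resp"] for f in failures)
print(f"overall loop effectiveness ~ {effectiveness:.3f}")
```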
A Hybrid Stochastic-Neuro-Fuzzy Model-Based System for In-Flight Gas Turbine Engine Diagnostics
2001-04-05
Margin (ADM) and (ii) Fault Detection Margin (FDM). Key Words: ANFIS, Engine Health Monitoring, Gas Path Analysis, and Stochastic Analysis Adaptive Network... The paper illustrates the application of a hybrid Stochastic-Fuzzy-Inference Model-Based System (StoFIS) to fault diagnostics and prognostics for both... operational history monitored on-line by the engine health management (EHM) system. To capture the complex functional relationships between different
Halicioglu, Kerem; Ozener, Haluk
2008-01-01
Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which is aligned trending NE–SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters – standard strike-slip model of dislocation theory in an elastic half-space – is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as a background data for disaster management systems. PMID:27873783
Halicioglu, Kerem; Ozener, Haluk
2008-08-19
Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which is aligned trending NE-SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters - standard strike-slip model of dislocation theory in an elastic half-space - is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as a background data for disaster management systems.
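The one-dimensional, two-parameter strike-slip model mentioned here is commonly the Savage-Burford screw dislocation in an elastic half-space, v(x) = (s/pi) * arctan(x/D), with s the deep slip (or slip rate) and D the locking depth. A minimal sketch with illustrative values (not Tuzla Fault estimates); the saturation of v(x) beyond a few locking depths is what makes GPS site placement relative to the fault trace matter.

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity at distance x from the fault trace,
    Savage-Burford screw dislocation: v(x) = (s/pi) * arctan(x/D)."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

x = np.array([-50.0, -10.0, -2.0, 2.0, 10.0, 50.0])   # km from the trace
print(interseismic_velocity(x, slip_rate_mm_yr=20.0, locking_depth_km=10.0))
```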
A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.
2010-01-01
A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
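As a hedged illustration of the isolation logic described above, the sketch below runs a bank of residuals against per-hypothesis thresholds: the estimator matched to the actual fault keeps all residual components below threshold, while mismatched estimators violate theirs. The fault names, residual values, and thresholds are invented for illustration, not taken from the paper:

```python
import numpy as np

def isolate_fault(residuals, thresholds):
    """Residual-based isolation over a parallel bank of estimators.

    residuals[k] is the residual vector from the estimator matched to
    fault hypothesis k; a hypothesis remains plausible only if every
    component of its residual stays below its (adaptive) threshold.
    """
    plausible = [k for k, r in residuals.items()
                 if np.all(np.abs(r) < thresholds[k])]
    # A unique plausible hypothesis means the fault is isolated.
    return plausible[0] if len(plausible) == 1 else None

# Hypothetical residuals from estimators matched to sensor/actuator/component faults.
residuals = {
    "sensor":    np.array([0.2, 0.1]),   # stays below threshold -> matches
    "actuator":  np.array([1.7, 0.4]),   # exceeds threshold -> excluded
    "component": np.array([0.9, 2.3]),   # exceeds threshold -> excluded
}
thresholds = {k: np.array([1.0, 1.0]) for k in residuals}
print(isolate_fault(residuals, thresholds))  # -> "sensor"
```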
NASA Technical Reports Server (NTRS)
Rogers, William H.
1993-01-01
In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.
Mayer, Larry; Lu, Zhong
2001-01-01
A basic model incorporating satellite synthetic aperture radar (SAR) interferometry of the fault rupture zone that formed during the Kocaeli earthquake of August 17, 1999, documents the elastic rebound that resulted from the concomitant elastic strain release along the North Anatolian fault. For pure strike-slip faults, the elastic rebound function derived from SAR interferometry is directly invertible from the distribution of elastic strain on the fault at criticality, just before the critical shear stress was exceeded and the fault ruptured. The Kocaeli earthquake, which was accompanied by as much as ∼5 m of surface displacement, distributed strain ∼110 km around the fault prior to faulting, although most of it was concentrated in a narrower and asymmetric 10-km-wide zone on either side of the fault. The use of SAR interferometry to document the distribution of elastic strain at the critical condition for faulting is clearly a valuable tool, both for scientific investigation and for the effective management of earthquake hazard.
Product quality management based on CNC machine fault prognostics and diagnosis
NASA Astrophysics Data System (ADS)
Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.
2018-03-01
This paper presents a new fault classification model and an integrated approach to fault diagnosis which involves the combination of ideas from Neuro-fuzzy Networks (NF), Dynamic Bayesian Networks (DBN), and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two aspects, namely first and second degree faults. First degree faults are instantaneous in nature, while second degree faults are evolutional and appear as a developing phenomenon which starts from an initial stage, goes through a development stage, and finally ends at a mature stage. These categories of faults have a lifetime which is inversely proportional to a machine tool's life according to a modified version of Taylor's equation. For fault diagnosis, this framework consists of two phases: the first focuses on fault prognosis, which is done online, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to embark on Condition-Based Maintenance (CBM) or fault diagnosis, based on the severity of a fault. The second phase only comes into action when an evolving fault goes beyond a critical threshold limit, called the CBM limit, for a command to be issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time, and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.
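The particle-filtering stage of such a framework can be illustrated with a hedged sketch: a cloud of particles tracks the severity of an evolving (second degree) fault, and a CBM decision is flagged when the estimate crosses a threshold. The degradation dynamics, noise levels, and threshold below are assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         growth=0.02, proc_std=0.01, meas_std=0.05):
    """One predict/update/resample cycle tracking fault severity in [0, 1]."""
    # Predict: assumed slow degradation dynamics with process noise.
    particles = particles + growth + rng.normal(0.0, proc_std, particles.size)
    # Update: Gaussian likelihood of the observed severity-related feature.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

particles = rng.uniform(0.0, 0.1, 500)
weights = np.full(500, 1.0 / 500)
for z in [0.05, 0.08, 0.12, 0.18, 0.26]:   # synthetic severity observations
    particles, weights = particle_filter_step(particles, weights, z)
    est = np.sum(particles * weights)
    print(f"estimated severity: {est:.3f}  (CBM limit exceeded: {est > 0.2})")
```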
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myrent, Noah J.; Barrett, Natalie C.; Adams, Douglas E.
2014-07-01
Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostic management (SHPM) system as part of a condition based maintenance paradigm with smart load management, and to utilize a state-based cost model to assess the economics associated with use of the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. This methodology was used to investigate two case studies on a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies of rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system. The integration of the health monitoring information and O&M cost versus damage/fault severity information provides the initial steps to identify processes to reduce operations and maintenance costs for an offshore wind farm while increasing turbine availability, revenue, and overall profit.
ESRDC - Designing and Powering the Future Fleet
2018-02-22
...managing short circuit faults in MVDC Systems, and 5) modeling of SiC-based electronic power converters to support accurate scalable models in S3D... Research in advanced thermal management followed three tracks. We developed models of thermal system components that are suitable for use in early stage
NASA Astrophysics Data System (ADS)
Gonzalez-Nicolas, A.; Cihan, A.; Birkholzer, J. T.; Petrusak, R.; Zhou, Q.; Riestenberg, D. E.; Trautz, R. C.; Godec, M.
2016-12-01
Industrial-scale injection of CO2 into the subsurface can cause reservoir pressure increases that must be properly controlled to prevent any potential environmental impact. Excessive pressure buildup in the reservoir may result in groundwater contamination stemming from leakage through conductive pathways, such as improperly plugged abandoned wells or distant faults, and in the potential for fault reactivation and possibly seal breaching. Brine extraction is a viable approach for managing formation pressure, effective stress, and plume movement during industrial-scale CO2 injection projects. The main objective of this study is to investigate different pressure management strategies involving active brine extraction and passive pressure relief wells. Adaptive optimized management of CO2 storage projects utilizes advanced automated optimization algorithms and suitable process models. The adaptive management integrates monitoring, forward modeling, inversion modeling, and optimization through an iterative process. In this study, we employ an adaptive framework to understand primarily how the initial site characterization, and the frequency of model updates (calibration) and optimization calculations used to control extraction rates from monitoring data, affect the accuracy and success of the management without violating pressure buildup constraints in the subsurface reservoir system. We will present results of applying the adaptive framework to test the appropriateness of different management strategies for a realistic field injection project.
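A hedged toy of the adaptive monitor-calibrate-optimize cycle described above, using an invented linear pressure-response proxy in place of the forward and inversion models; all coefficients, rates, and constraints are illustrative only:

```python
import numpy as np

def optimize_extraction(injection_rate, sensitivity, p_max):
    """Smallest brine-extraction rate keeping predicted buildup under p_max.

    Toy proxy model: buildup = a * injection - b * extraction.
    """
    a, b = sensitivity
    return max(0.0, (a * injection_rate - p_max) / b)

sensitivity = np.array([0.8, 0.5])   # assumed initial site characterization
for cycle, observed_buildup in enumerate([4.2, 4.6, 4.1]):  # synthetic monitoring data
    q_ext = optimize_extraction(injection_rate=10.0,
                                sensitivity=sensitivity, p_max=3.0)
    # Calibrate: nudge the injection coefficient toward what monitoring implies.
    implied_a = (observed_buildup + sensitivity[1] * q_ext) / 10.0
    sensitivity[0] += 0.5 * (implied_a - sensitivity[0])
    print(f"cycle {cycle}: extract {q_ext:.2f} units, a -> {sensitivity[0]:.3f}")
```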
Orion GN&C Fault Management System Verification: Scope And Methodology
NASA Technical Reports Server (NTRS)
Brown, Denise; Weiler, David; Flanary, Ronald
2016-01-01
In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
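A hedged sketch of the swarm-search idea: particles explore a failure-case parameter space (here, fault time and magnitude) to maximize a severity metric returned by a stand-in for the vehicle simulation. This is generic particle-swarm optimization under invented parameters, not the Orion team's actual algorithm or failure space:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_metric(params):
    """Stand-in for a vehicle simulation: returns the severity of the outcome
    for a candidate failure case (fault time, fault magnitude)."""
    t_fault, magnitude = params
    return np.exp(-((t_fault - 42.0) ** 2) / 50.0) * magnitude

def swarm_search(n=30, iters=40, bounds=((0, 100), (0, 1))):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros((n, 2))
    pbest, pval = x.copy(), np.array([loss_metric(p) for p in x])
    for _ in range(iters):
        g = pbest[pval.argmax()]                     # global best so far
        v = (0.7 * v + 1.5 * rng.random((n, 2)) * (pbest - x)
                     + 1.5 * rng.random((n, 2)) * (g - x))
        x = np.clip(x + v, lo, hi)
        val = np.array([loss_metric(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
    return pbest[pval.argmax()], pval.max()

worst_case, severity = swarm_search()
print(f"worst case found: fault at t={worst_case[0]:.1f}s, magnitude {worst_case[1]:.2f}")
```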
NASA Astrophysics Data System (ADS)
Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.
2017-12-01
It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, the transfer of these results to the authorities managing the post-disaster situation is not straightforward, and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. Then we use this model to evaluate the moment deficit on the subduction interface and the changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussion with local authorities.
Fault recovery characteristics of the fault tolerant multi-processor
NASA Technical Reports Server (NTRS)
Padilla, Peter A.
1990-01-01
The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-06-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA.
NASA Technical Reports Server (NTRS)
Abbott, Kathy
1990-01-01
The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to examine pilot mental models of the aircraft subsystems and their use in diagnosis tasks. Future research plans include piloted simulation evaluation of the diagnosis decision aiding concepts and crew interface issues. Information is given in viewgraph form.
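A hedged sketch of the temporal-elimination idea attributed to Draphys: each fault hypothesis predicts which symptoms the fault can propagate to, and hypotheses are eliminated as observations accumulate over time. The subsystem model, hypotheses, and symptoms below are invented for illustration:

```python
# Each hypothesis predicts the set of symptoms its fault can propagate to;
# hypotheses whose predictions cannot cover the observations are eliminated
# as new abnormal sensor readings arrive.
propagation = {   # hypothetical propulsion/hydraulic subsystem model
    "hyd_pump_fail": {"hyd_pressure_low", "actuator_slow"},
    "engine_fail":   {"n2_drop", "oil_pressure_low", "hyd_pressure_low"},
    "sensor_glitch": {"hyd_pressure_low"},
}

hypotheses = set(propagation)
for t, observed in enumerate([{"hyd_pressure_low"},
                              {"hyd_pressure_low", "actuator_slow"}]):
    hypotheses = {h for h in hypotheses if observed <= propagation[h]}
    print(f"t={t}: consistent hypotheses -> {sorted(hypotheses)}")
# t=0 keeps all three; t=1 leaves only "hyd_pump_fail".
```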
NASA Technical Reports Server (NTRS)
Freeman, Kenneth A.; Walsh, Rick; Weeks, David J.
1988-01-01
Space Station issues in fault management are discussed. The system background is described with attention given to design guidelines and power hardware. A contractually developed fault management system, FRAMES, is integrated with the energy management functions, the control switchgear, and the scheduling and operations management functions. The constraints that shaped the FRAMES system and its implementation are considered.
GIS-based identification of active lineaments within the Krasnokamensk Area, Transbaikalia, Russia
NASA Astrophysics Data System (ADS)
Petrov, V. A.; Lespinasse, M.; Ustinov, S. A.; Cialec, C.
2017-07-01
Lineament analysis was carried out using detailed digital elevation models (DEM) of the Krasnokamensk Area, southeastern Transbaikalia (Russia). The results of this research confirm the presence of already known faults, but also identify previously unknown fault zones. The primary focus was identifying small discontinuities and their relationship with extended fault zones. The developed technique allowed identification of the active lineaments together with the orientations of their compression and extension axes in the horizontal plane, their sense of shear movement (right- or left-lateral), and the geodynamic setting of their formation (compression or extension). The results of active fault identification and the definition of their kinematics on digital elevation models were confirmed by measuring the velocities and directions of modern horizontal surface motions using geodetic GPS, as well as by identifying the principal stress axis directions of the modern stress field using modern-day earthquake data. The obtained results are necessary for making sound environmental management decisions.
NASA Technical Reports Server (NTRS)
Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig
2018-01-01
This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.
Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.
2010-05-30
Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults and optimizing building control systems. However, in spite of good progress in developing tools for HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first-principles modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
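A minimal sketch of the sequential Bayesian updating idea under stated assumptions: residuals between model predictions and sensor data are treated as Gaussian under the normal and faulty hypotheses, and the fault probability is updated one residual at a time. The noise level, fault bias, and prior below are invented, not the paper's values:

```python
import math

def bayes_update(p_fault, residual, sigma, fault_bias=2.0):
    """Update P(fault) given one residual, assuming the residual is
    N(0, sigma) under normal operation and N(fault_bias, sigma) under fault."""
    like_ok = math.exp(-0.5 * (residual / sigma) ** 2)
    like_fault = math.exp(-0.5 * ((residual - fault_bias) / sigma) ** 2)
    num = like_fault * p_fault
    return num / (num + like_ok * (1.0 - p_fault))

p = 0.05  # prior probability of, e.g., a stuck mixing-box damper
for r in [0.3, 1.1, 1.8, 2.2, 2.0]:  # synthetic model-vs-sensor residuals (degC)
    p = bayes_update(p, r, sigma=0.8)
    print(f"residual {r:+.1f} -> P(fault) = {p:.3f}")
```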
NASA Technical Reports Server (NTRS)
Sweet, Adam
2008-01-01
The IVHM Project in the Aviation Safety Program has funded research in electrical power system (EPS) health management. This problem domain contains both discrete and continuous behavior, and thus is directly relevant for the hybrid diagnostic tool HyDE. In FY2007 work was performed to expand the HyDE diagnosis model of the ADAPT system. The work completed resulted in a HyDE model with the capability to diagnose five times the number of ADAPT components previously tested. The expanded diagnosis model passed a corresponding set of new ADAPT fault injection scenario tests with no incorrect faults reported. The time required for the HyDE diagnostic system to isolate the fault varied widely between tests; this variance was reduced by tuning HyDE input parameters. These results and other diagnostic design trade-offs are discussed. Finally, possible future improvements for both the HyDE diagnostic model and HyDE itself are presented.
NASA Technical Reports Server (NTRS)
Padilla, Peter A.
1991-01-01
An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.
Simulation of demand-response power management in smart city
NASA Astrophysics Data System (ADS)
Kadam, Kshitija
Smart Grids manage energy efficiently through intelligent monitoring and control of all the components connected to the electrical grid. Advanced digital technology, combined with sensors and power electronics, can greatly improve transmission line efficiency. This thesis proposed a model of a deregulated grid which supplied power to a diverse set of consumers and allowed them to participate in the decision-making process through two-way communication. The deregulated market encourages competition at the generation and distribution levels through communication with the central system operator. A software platform was developed and executed to manage the communication, as well as the energy management of the overall system. It also demonstrated the self-healing property of the system in case a fault occurs, resulting in an outage. The system not only recovered from the fault but managed to do so in a short time with minimal human involvement.
NASA Technical Reports Server (NTRS)
Shontz, W. D.; Records, R. M.; Antonelli, D. R.
1992-01-01
The focus of this project is on alerting pilots to impending events in such a way as to provide the additional time required for the crew to make critical decisions concerning non-normal operations. The project addresses pilots' need for support in diagnosis and trend monitoring of faults as they affect decisions that must be made within the context of the current flight. Monitoring and diagnostic modules developed under the NASA Faultfinder program were restructured and enhanced using input data from an engine model and real engine fault data. Fault scenarios were prepared to support knowledge base development activities on the MONITAUR and DRAPhyS modules of Faultfinder. An analysis of the information requirements for fault management was included in each scenario. A conceptual framework was developed for systematic evaluation of the impact of context variables on pilot action alternatives as a function of event/fault combinations.
A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat
NASA Technical Reports Server (NTRS)
Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark
2013-01-01
NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing the Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
NASA Spacecraft Fault Management Workshop Results
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen
2010-01-01
Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the findings and recommendations from that workshop, as well as opportunities identified for future investment in tools, processes, and products to facilitate the development of space flight fault management capabilities.
Fault management and systems knowledge
DOT National Transportation Integrated Search
2016-12-01
Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...
Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens, Karen; Fesq, Lorraine M.
2010-01-01
Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the findings and recommendations from that workshop, particularly as fault management development issues affect operations and the development of operations capabilities.
On the design of fault-tolerant robotic manipulator systems
NASA Technical Reports Server (NTRS)
Tesar, Delbert
1993-01-01
Robotic systems are finding increasing use in space applications. Many of these devices are going to be operational on board the Space Station Freedom. Fault tolerance has been deemed necessary because of the criticality of the tasks and the inaccessibility of the systems to maintenance and repair. Design for fault tolerance in manipulator systems is an area within robotics that is without precedence in the literature. In this paper, we will attempt to lay down the foundations for such a technology. Design for fault tolerance demands new and special approaches to design, often at considerable variance from established design practices. These design aspects, together with reliability evaluation and modeling tools, are presented. Mechanical architectures that employ protective redundancies at many levels and have a modular architecture are then studied in detail. Once a mechanical architecture for fault tolerance has been derived, the chronological stages of operational fault tolerance are investigated. Failure detection, isolation, and estimation methods are surveyed, and such methods for robot sensors and actuators are derived. Failure recovery methods are also presented for each of the protective layers of redundancy. Failure recovery tactics often span all of the layers of a control hierarchy. Thus, a unified framework for decision-making and control, which orchestrates both the nominal redundancy management tasks and the failure management tasks, has been derived. The well-developed field of fault-tolerant computers is studied next, and some design principles relevant to the design of fault-tolerant robot controllers are abstracted. Conclusions are drawn, and a road map for the design of fault-tolerant manipulator systems is laid out with recommendations for a 10 DOF arm with dual actuators at each joint.
NASA Technical Reports Server (NTRS)
Ricks, Brian W.; Mengshoel, Ole J.
2009-01-01
Reliable systems health management is an important research area of NASA. A health management system that can accurately and quickly diagnose faults in various on-board systems of a vehicle will play a key role in the success of current and future NASA missions. We introduce in this paper the ProDiagnose algorithm, a diagnostic algorithm that uses a probabilistic approach, accomplished with Bayesian Network models compiled to Arithmetic Circuits, to diagnose these systems. We describe the ProDiagnose algorithm, how it works, and the probabilistic models involved. We show by experimentation on two Electrical Power Systems based on the ADAPT testbed, used in the Diagnostic Challenge Competition (DX 09), that ProDiagnose can produce results with over 96% accuracy and less than 1 second mean diagnostic time.
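The probabilistic semantics behind such a diagnosis can be shown with a toy two-fault Bayesian network evaluated by enumeration; ProDiagnose itself compiles the network to an arithmetic circuit for speed, which this sketch does not attempt. All priors and conditional probabilities below are invented:

```python
from itertools import product

# Toy network: a battery fault and a sensor fault both influence one reading.
p_batt_fault, p_sens_fault = 0.01, 0.02

def p_low_reading(batt, sens):
    # Invented CPT: probability the voltage reading is "low".
    return {(0, 0): 0.01, (1, 0): 0.95, (0, 1): 0.50, (1, 1): 0.97}[(batt, sens)]

def posterior(evidence_low=True):
    joint = {}
    for batt, sens in product([0, 1], repeat=2):
        p = ((p_batt_fault if batt else 1 - p_batt_fault)
             * (p_sens_fault if sens else 1 - p_sens_fault)
             * (p_low_reading(batt, sens) if evidence_low
                else 1 - p_low_reading(batt, sens)))
        joint[(batt, sens)] = p
    z = sum(joint.values())
    p_batt = sum(v for (b, _), v in joint.items() if b) / z
    p_sens = sum(v for (_, s), v in joint.items() if s) / z
    return p_batt, p_sens

print("P(battery fault | low) = %.3f, P(sensor fault | low) = %.3f" % posterior())
```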
NASA Astrophysics Data System (ADS)
Hannis, Sarah; Bricker, Stephanie; Williams, John
2013-04-01
The Bunter Sandstone Formation in the Southern North Sea is a potential reservoir being considered for carbon dioxide storage as a climate change mitigation option. A geological model of a putative storage site within this saline aquifer was built from 3D seismic and well data to investigate potential reservoir pressure changes and their effects on fault movement, brine and CO2 migration as a result of CO2 injection. The model is located directly beneath the Dogger Bank Special Area of Conservation, close to the UK-Netherlands median line. Analysis of the seismic data reveals two large fault zones, one in each of the UK and Netherlands sectors, many tens of kilometres in length, extending from reservoir level to the sea bed. Although it has been shown that similar faults compartmentalise gas fields elsewhere in the Netherlands sector, significant uncertainty remains surrounding the properties of the faults in our model area; in particular their cross- and along-fault permeability and geomechanical behaviour. Despite lying outside the anticipated CO2 plume, these faults could provide potential barriers to pore fluid migration and pressure dissipation, until, under elevated pressures, they provide vertical migration pathways for brine. In this case, the faults will act to enhance injectivity, but potential environmental impacts, should the displaced brine be expelled at the sea bed, will require consideration. Pressure gradients deduced from regional leak-off test data have been input into a simple geomechanical model to estimate the threshold pressure gradient at which faults cutting the Mesozoic succession will fail, assuming reactivation of fault segments will cause an increase in vertical permeability. Various 4D scenarios were run using a single-phase groundwater modelling code, calibrated to results from a multi-phase commercial simulator. Possible end-member ranges of fault parameters were input to investigate the pressure change with time and quantify brine flux to the seabed in potentially reactivated sections of each fault zone. Combining the modelled pressure field with the calculated fault failure criterion suggests that only the fault in the Netherlands sector reactivates, allowing brine displacement at a maximum rate of 800 - 900 m3/d. Model results indicate that the extent of brine displacement is most sensitive to the fault reactivation pressure gradient and fault zone thickness. In conclusion, CO2 injection into a saline aquifer results in a significant increase in pore-fluid pressure gradients. In this case, brine displacement along faults acting as pressure relief valves could increase injectivity in a similar manner to pressure management wells, thereby facilitating the storage operation. However, if the faults act as brine migration pathways, an understanding of seabed flux rates and environmental impacts will need to be demonstrated to regulators prior to injection. This study, close to an international border, also highlights the need to inform neighbouring countries' authorities of proposed operations and, potentially, to obtain licences to increase reservoir pressure and/or displace brine across international borders.
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
The Development of NASA's Fault Management Handbook
NASA Technical Reports Server (NTRS)
Fesq, Lorraine
2011-01-01
Disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns: (1) Often faults aren't addressed until nominal spacecraft design is fairly stable. (2) Design relegated to after-the-fact patchwork, Band-Aid approach. Progress is being made on a number of fronts outside of Handbook effort: (1) Processes, Practices and Tools being developed at some Centers and Institutions (2) Management recognition. Constellation FM roles, Discovery/New Frontiers mission reviews (3) Potential Technology solutions. New approaches could avoid many current pitfalls (3a) New FM architectures, including model-based approach integrated with NASA's MBSE (Model-Based System Engineering) efforts (3b) NASA's Office of the Chief Technologist: FM identified in seven of NASA's 14 Space Technology Roadmaps. Opportunity to coalesce and establish thrust area to progressively develop new FM techniques. FM Handbook will help ensure that future missions do not encounter same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start: (1) Still need Version 2 Agency-wide FM Handbook to expand Handbook to other areas, especially crewed missions. (2) Still need to reach out to other organizations to develop common understanding and vocabulary. Handbook doesn't/can't address all Workshop recommendations. Still need to identify how to address programmatic and infrastructure issues.
Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Nyst, M.
2013-12-01
We present a Risk Management perspective on earthquake recurrence on mature faults, and the ways that it can be modeled. The specificities of Risk Management relative to Probabilistic Seismic Hazard Assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence, the fact that losses at all return periods are needed (and not at discrete values of the return period), and the set-up of financial models which sometimes require the modeling of realizations of the order in which events may occur (i.e., simulated event dates are important, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that, with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with their weights given by the posterior distribution. Finally, we propose to the community: 1. a general debate on how best to incorporate our knowledge (e.g., from geology, geomorphology) on plausible models and model parameters, but also preserve the information on what we do not know; and 2. the creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristics (pH, compressibility, pressure), fault geometry, and other relevant properties, so that we can monitor whether some trends emerge in terms of which model dominates in which conditions.
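The mixture idea can be sketched directly: under each inter-event time model, the conditional probability of an event in the next dt years given t years of quiescence is (F(t+dt) - F(t)) / (1 - F(t)), and the models are combined by their posterior weights. The weights and distribution parameters below are illustrative only, not constrained by any fault's data:

```python
from scipy import stats

# Illustrative mixture of inter-event time models for one mature fault.
# Weights stand in for posterior model weights from a Bayesian combination.
models = [
    (0.4, stats.lognorm(s=0.5, scale=150)),      # lognormal, median 150 yr
    (0.3, stats.weibull_min(c=2.0, scale=170)),  # Weibull
    (0.3, stats.invgauss(mu=0.25, scale=600)),   # Brownian Passage Time
]                                                # (inverse Gaussian), mean 150 yr

def conditional_prob(t_elapsed, dt):
    """P(event in next dt | quiet for t_elapsed), mixed over the models."""
    num = sum(w * (m.cdf(t_elapsed + dt) - m.cdf(t_elapsed)) for w, m in models)
    den = sum(w * m.sf(t_elapsed) for w, m in models)
    return num / den

for t in (50, 150, 250):
    print(f"{t:3d} yr since last event: P(next 30 yr) = {conditional_prob(t, 30):.3f}")
```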
Partitioning in Avionics Architectures: Requirements, Mechanisms, and Assurance
NASA Technical Reports Server (NTRS)
Rushby, John
1999-01-01
Automated aircraft control has traditionally been divided into distinct "functions" that are implemented separately (e.g., autopilot, autothrottle, flight management); each function has its own fault-tolerant computer system, and dependencies among different functions are generally limited to the exchange of sensor and control data. A by-product of this "federated" architecture is that faults are strongly contained within the computer system of the function where they occur and cannot readily propagate to affect the operation of other functions. More modern avionics architectures contemplate supporting multiple functions on a single, shared, fault-tolerant computer system where natural fault containment boundaries are less sharply defined. Partitioning uses appropriate hardware and software mechanisms to restore strong fault containment to such integrated architectures. This report examines the requirements for partitioning, mechanisms for their realization, and issues in providing assurance for partitioning. Because partitioning shares some concerns with computer security, security models are reviewed and compared with the concerns of partitioning.
Triggering of destructive earthquakes in El Salvador
NASA Astrophysics Data System (ADS)
Martínez-Díaz, José J.; Álvarez-Gómez, José A.; Benito, Belén; Hernández, Douglas
2004-01-01
We investigate the existence of a mechanism of static stress triggering driven by the interaction of normal faults in the Middle American subduction zone and strike-slip faults in the El Salvador volcanic arc. The local geology points to a large strike-slip fault zone, the El Salvador fault zone, as the source of several destructive earthquakes in El Salvador along the volcanic arc. We modeled the Coulomb failure stress (CFS) change produced by the June 1982 and January 2001 subduction events on planes parallel to the El Salvador fault zone. The results have broad implications for future risk management in the region, as they suggest a causative relationship between the position of the normal-slip events in the subduction zone and the strike-slip events in the volcanic arc. After the February 2001 event, an important area of the El Salvador fault zone was loaded with a positive change in Coulomb failure stress (>0.15 MPa). This scenario must be considered in the seismic hazard assessment studies that will be carried out in this area.
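For reference, the Coulomb failure stress change on a receiver fault combines the shear-stress change resolved in the slip direction with the effective-friction-weighted normal-stress change. A minimal sketch; the effective friction coefficient and stress values below are illustrative, not this study's results:

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Delta CFS = delta_tau + mu' * delta_sigma_n, with the normal-stress
    change taken positive when it unclamps the receiver fault and the
    shear-stress change resolved in the receiver's slip direction."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# Illustrative numbers only, for a receiver segment of an arc-parallel fault.
print(f"dCFS = {coulomb_stress_change(0.10, 0.12):+.3f} MPa "
      f"(changes around +0.1 MPa are commonly read as significant loading)")
```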
Coordinated Fault-Tolerance for High-Performance Computing Final Project Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panda, Dhabaleswar Kumar; Beckman, Pete
2011-07-28
With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
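A hedged sketch of the publish-subscribe pattern underlying the FTB: subsystems publish fault events to named topics, and other subsystems subscribe to coordinate their responses. The class, topics, and payloads below are invented and are not the actual FTB API:

```python
from collections import defaultdict

class FaultBackplane:
    """Minimal publish/subscribe bus for fault events, keyed by topic
    (e.g., "node.ecc_error"). A stand-in, not the real FTB interface."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(topic, event)

bus = FaultBackplane()
# The checkpoint library and the scheduler coordinate through shared fault info.
bus.subscribe("node.ecc_error", lambda t, e: print(f"[ckpt]  checkpoint now: {e}"))
bus.subscribe("node.ecc_error", lambda t, e: print(f"[sched] drain node {e['node']}"))
bus.publish("node.ecc_error", {"node": "n042", "severity": "warn"})
```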
Intelligent Operation and Maintenance of Micro-grid Technology and System Development
NASA Astrophysics Data System (ADS)
Fu, Ming; Song, Jinyan; Zhao, Jingtao; Du, Jian
2018-01-01
To achieve intelligent operation and management of the micro-grid, the micro-grid operation and maintenance knowledge base is studied. Based on advanced Petri net theory, a fault diagnosis model of the micro-grid is established, and an intelligent diagnosis and analysis method for micro-grid faults is put forward. On this basis, the functional system and architecture of the intelligent operation and maintenance system of the micro-grid are studied, and the micro-grid fault diagnosis function is introduced in detail. Finally, the system is deployed on the micro-grid of a park, and micro-grid fault diagnosis and analysis is carried out based on micro-grid operating data. The operation and maintenance function interface of the system is displayed, which verifies the correctness and reliability of the system.
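The Petri-net diagnosis idea can be sketched as follows: places carry tokens for observed symptoms, and a transition fires when all of its input places are marked, propagating the marking toward a fault conclusion. The places and transitions below are invented micro-grid examples, not the paper's model:

```python
# Places hold tokens (observed symptoms); a transition fires when all of its
# input places are marked, propagating the marking toward a diagnosis place.
marking = {"pv_undervoltage": 1, "inverter_alarm": 1, "breaker_open": 0,
           "dc_bus_fault": 0, "islanding_event": 0}
transitions = [
    ({"pv_undervoltage", "inverter_alarm"}, "dc_bus_fault"),
    ({"dc_bus_fault", "breaker_open"}, "islanding_event"),
]

changed = True
while changed:           # fire transitions until the marking stabilizes
    changed = False
    for inputs, output in transitions:
        if all(marking[p] for p in inputs) and not marking[output]:
            marking[output] = 1
            changed = True

print([p for p, m in marking.items() if m])   # includes "dc_bus_fault"
```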
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function" (Johnson 2011, 605). Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of the relationships between SE, SHM, and FM provides hints toward a modeling approach that gives formal connectivity between the nominal (SE) and off-nominal (SHM and FM) aspects of functions and designs. This paper describes a formal modeling approach to the initial phases of the development process that integrates the nominal and off-nominal perspectives in a model that unites the SE goals and functions with the failure to achieve those goals and functions (SHM/FM).
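A hedged sketch of the SE/FM duality described above: if each goal is decomposed into the functions that achieve it, then a detected functional failure maps directly to the goals now at risk, which is what FM must detect and respond to. The goals and functions below are invented, not from the paper's Goal-Function Tree models:

```python
# Toy goal-function tree: goals decompose into functions; by the duality
# described above, a detected failure of a function is a failure to achieve
# every goal that depends on it.
tree = {
    "maintain_thrust":   ["pressurize_tank", "feed_propellant", "ignite_engine"],
    "maintain_attitude": ["command_gimbal", "feed_propellant"],
}

def goals_at_risk(failed_function):
    return [goal for goal, funcs in tree.items() if failed_function in funcs]

print(goals_at_risk("feed_propellant"))
# -> ['maintain_thrust', 'maintain_attitude']: FM must detect the functional
#    failure and respond operationally to preserve both goals.
```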
NEXT Single String Integration Test Results
NASA Technical Reports Server (NTRS)
Soulas, George C.; Patterson, Michael J.; Pinero, Luis; Herman, Daniel A.; Snyder, Steven John
2010-01-01
As a critical part of NASA's Evolutionary Xenon Thruster (NEXT) test validation process, a single string integration test was performed on the NEXT ion propulsion system. The objectives of this test were to verify that an integrated system of major NEXT ion propulsion system elements meets project requirements, to demonstrate that the integrated system is functional across the entire power processor and xenon propellant management system input ranges, and to demonstrate to potential users that the NEXT propulsion system is ready for transition to flight. Propulsion system elements included in this system integration test were an engineering model ion thruster, an engineering model propellant management system, an engineering model power processor unit, and a digital control interface unit simulator that acted as a test console. Project requirements that were verified during this system integration test included individual element requirements; integrated system requirements; and fault handling. This paper will present the results of these tests, which include: integrated ion propulsion system demonstrations of performance, functionality, and fault handling; a thruster re-performance acceptance test to establish baseline performance; a risk-reduction PMS-thruster integration test; and propellant management system calibration checks.
Software Health Management with Bayesian Networks
NASA Technical Reports Server (NTRS)
Mengshoel, Ole; Schumann, Johann
2011-01-01
Most modern aircraft, as well as other complex machinery, are equipped with diagnostic systems for their major subsystems. During operation, sensors provide important information about the subsystem (e.g., the engine), and that information is used to detect and diagnose faults. Most of these systems focus on the monitoring of a mechanical, hydraulic, or electromechanical subsystem of the vehicle or machinery. Only recently have health management systems that monitor software been developed. In this paper, we will discuss our approach of using Bayesian networks for Software Health Management (SWHM). We will discuss SWHM requirements, which make advanced reasoning capabilities important for detection and diagnosis. Then we will present our approach to using Bayesian networks to construct health models that dynamically monitor a software system and are capable of detecting and diagnosing faults.
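The construction of such a health model can be sketched in a few lines; the following assumes the pgmpy library, and the node names and probabilities are invented for illustration rather than taken from the paper.

```python
# Minimal sketch of a Bayesian-network software health model (pgmpy assumed;
# nodes and CPDs are illustrative, not the paper's actual model).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("CodeFault", "BadOutput"), ("SensorOK", "BadOutput")])
model.add_cpds(
    TabularCPD("CodeFault", 2, [[0.99], [0.01]]),   # P(fault) = 0.01
    TabularCPD("SensorOK", 2, [[0.05], [0.95]]),    # P(sensor healthy) = 0.95
    TabularCPD("BadOutput", 2,                      # P(bad | CodeFault, SensorOK)
               [[0.70, 0.98, 0.15, 0.05],
                [0.30, 0.02, 0.85, 0.95]],
               evidence=["CodeFault", "SensorOK"], evidence_card=[2, 2]))

# Diagnosis: posterior probability of a code fault given an anomalous output.
print(VariableElimination(model).query(["CodeFault"], evidence={"BadOutput": 1}))
```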
Wu, Zhenyu; Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang
2018-04-05
Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs), and data-driven methods have been a worthwhile direction for Prognostics and Health Management (PHM). However, two main challenges significantly affect traditional fault diagnostic models: first, extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too heavily on expert knowledge; second, imbalance pervasively exists between faulty and normal samples. As deep learning models have proved to be good at automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis in CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with a sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs, and 2-layer outer LSTMs, with an under-sampling policy and a weighted cost-sensitive loss function. Experiments are conducted on the PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods.
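A schematic PyTorch analogue of such a CNN-plus-stacked-LSTM classifier with a cost-sensitive loss is sketched below; layer sizes, class weights, and tensor shapes are illustrative, and the paper's exact wLRCL-D design (the inner/outer LSTM split and the sampling policy) is not reproduced here.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Schematic CNN + stacked-LSTM fault classifier (not the exact wLRCL-D)."""
    def __init__(self, n_channels, n_classes, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # 2 conv layers over time
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                              # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)            # -> (batch, time, 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                   # classify last time step

# Cost-sensitive loss: up-weight the rare fault classes (weights illustrative).
class_weights = torch.tensor([0.2, 1.0, 1.0, 1.0])     # class 0 = normal
criterion = nn.CrossEntropyLoss(weight=class_weights)

model = CNNLSTMClassifier(n_channels=8, n_classes=4)
x = torch.randn(16, 8, 128)                            # 16 windows, 8 sensors, 128 steps
loss = criterion(model(x), torch.randint(0, 4, (16,)))
loss.backward()
```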
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2010-01-01
Aircraft engine performance trend monitoring and gas path fault diagnostics are closely related technologies that assist operators in managing the health of their gas turbine engine assets. Trend monitoring is the process of monitoring the gradual performance change that an aircraft engine will naturally incur over time due to turbomachinery deterioration, while gas path diagnostics is the process of detecting and isolating the occurrence of any faults impacting engine flow-path performance. Today, performance trend monitoring and gas path fault diagnostic functions are performed by a combination of on-board and off-board strategies. On-board engine control computers contain logic that monitors for anomalous engine operation in real-time. Off-board ground stations are used to conduct fleet-wide engine trend monitoring and fault diagnostics based on data collected from each engine each flight. Continuing advances in avionics are enabling the migration of portions of the ground-based functionality on-board, giving rise to more sophisticated on-board engine health management capabilities. This paper reviews the conventional engine performance trend monitoring and gas path fault diagnostic architecture commonly applied today, and presents a proposed enhanced on-board architecture for future applications. The enhanced architecture gains real-time access to an expanded quantity of engine parameters, and provides advanced on-board model-based estimation capabilities. The benefits of the enhanced architecture include the real-time continuous monitoring of engine health, the early diagnosis of fault conditions, and the estimation of unmeasured engine performance parameters. A future vision to advance the enhanced architecture is also presented and discussed.
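On-board model-based estimation of the kind described is commonly built around a Kalman filter that tracks slowly drifting health parameters from gas-path measurements; the following NumPy sketch shows one generic predict/update cycle under that assumption, with all matrices illustrative rather than engine-specific.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: health parameters drift slowly (random walk, F ~ identity).
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update against measured gas-path quantities z (e.g., spool speeds, temps).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative 2-parameter, 2-measurement setup (not a real engine model).
x, P = np.zeros(2), np.eye(2)
F, H = np.eye(2), np.array([[1.0, 0.3], [0.0, 1.0]])
Q, R = 1e-6 * np.eye(2), 1e-2 * np.eye(2)
z = np.array([0.01, -0.02])          # one flight's corrected measurement deltas
x, P = kalman_step(x, P, z, F, H, Q, R)
print("estimated health-parameter shifts:", x)
```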
A New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.
2017-12-01
We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to the present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically based strain rate model. Small and medium-sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
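The corner-magnitude-constrained rate calculation can be sketched compactly; the following assumes Kagan's tapered Gutenberg-Richter form and the Hanks-Kanamori moment-magnitude relation, with illustrative parameter values.

```python
import numpy as np

def seismic_moment(mw):
    """Seismic moment in N*m via the Hanks-Kanamori relation."""
    return 10.0 ** (1.5 * mw + 9.05)

def tgr_rate(mw, mw_min, rate_min, b, mw_corner):
    """Annual rate of events >= mw under a tapered Gutenberg-Richter law."""
    beta = 2.0 * b / 3.0                      # moment-space exponent
    m0 = seismic_moment(mw)
    m0_min = seismic_moment(mw_min)
    m0_c = seismic_moment(mw_corner)          # corner moment tapers the tail
    return rate_min * (m0_min / m0) ** beta * np.exp((m0_min - m0) / m0_c)

# e.g., rate of M >= 7.0 in a zone with 0.5 events/yr above M 5.0,
# b = 0.9, corner magnitude 8.0 (all values illustrative)
print(tgr_rate(7.0, 5.0, 0.5, 0.9, 8.0))
```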
A PC based fault diagnosis expert system
NASA Technical Reports Server (NTRS)
Marsh, Christopher A.
1990-01-01
The Integrated Status Assessment (ISA) prototype expert system performs system-level fault diagnosis using rules and models created by the user. The ISA evolved from concepts to a stand-alone demonstration prototype using OPS5 on a LISP machine. The LISP-based prototype was rewritten in C and the C Language Integrated Production System (CLIPS) to run on a Personal Computer (PC) and a graphics workstation. The ISA prototype has been used to demonstrate fault diagnosis functions of Space Station Freedom's Operations Management System (OMS). This paper describes the development of the ISA prototype from early concepts to the current PC/workstation version used today and describes future areas of development for the prototype.
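The ISA rule base itself is not reproduced here, but the flavor of OPS5/CLIPS-style forward chaining can be conveyed with a toy Python loop; the facts and rules below are invented.

```python
# Toy forward-chaining engine in the spirit of an OPS5/CLIPS diagnosis
# rule base (facts and rules are invented for illustration).
facts = {"pump_pressure_low", "valve_commanded_open", "valve_current_zero"}
rules = [
    ({"pump_pressure_low", "valve_commanded_open"}, "suspect_valve_stuck"),
    ({"suspect_valve_stuck", "valve_current_zero"}, "fault_valve_actuator"),
]

changed = True
while changed:                      # fire rules until no new facts are derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
print(facts)                        # includes the derived fault hypothesis
```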
Developing a Hayward Fault Greenbelt in Fremont, California
NASA Astrophysics Data System (ADS)
Blueford, J. R.
2007-12-01
The Math Science Nucleus, an educational non-profit, in cooperation with the City of Fremont and the U.S. Geological Survey, has concluded that outdoor and indoor exhibits highlighting the Hayward Fault are a spectacular and educational way of illustrating the power of earthquakes. Several projects are emerging that use the Hayward fault to illustrate to the public and school groups that faults mold the landscape upon which they live. One area that is already developed, Tule Ponds at Tyson Lagoon, is owned by the Alameda County Flood Control and Conservation District and managed by the Math Science Nucleus. This 17-acre site illustrates two traces of the Hayward fault (active and inactive), whose sediments record over 4000 years of activity. Another project is selecting an area in Fremont where a permanent trench or outdoor earthquake exhibit can be created so that people can see seismic stratigraphic features of the Hayward Fault. This would be part of a 3-mile Earthquake Greenbelt from Tyson Lagoon to the proposed Irvington BART Station. Informational kiosks or markers and a "yellow brick road" of earthquake facts could allow visitors to take an exciting and educational tour of the Hayward Fault's surface features in Fremont. Visitors would see the effects of fault movement first-hand, and the tours would include preparedness information. As these plans emerge, a permanent indoor exhibit is being developed at the Children's Natural History Museum in Fremont. This exhibit will be a model of the Earthquake Greenbelt. It will also allow people to see a scale model of how the Hayward Fault unearthed the Pleistocene (Irvingtonian) fossil bed and created traps for underground aquifers as well as surface sag ponds.
NASA Astrophysics Data System (ADS)
Goto, J.; Miwa, T.; Tsuchi, H.; Karasaki, K.
2009-12-01
The Nuclear Waste Management Organization of Japan (NUMO), once volunteer municipalities come forward, will start a three-staged program for selecting a HLW and TRU waste repository site. Experience from various site characterization programs around the world shows that the hydrologic property of faults is one of the most important parameters in the early stage of such a program. Numerous faults of interest are expected to exist in an investigation area of several tens of square kilometers. It is, however, impossible to characterize all these faults within a limited time and budget. This raises the problem, for repository design and safety assessment, that we may have to accept unrealistic or overly conservative results by using a single model or a single set of parameters for all the faults in the area. We therefore seek to develop an efficient and practical methodology to characterize the hydrologic properties of faults. This project is a five-year program started in 2007, comprising basic methodology development through a literature study and its verification through field investigations. The literature study attempts to classify faults by correlating their geological features with hydraulic properties, to identify the most efficient technologies for fault characterization, and to develop a work flow diagram. The field investigation starts with the selection of a site and fault(s), followed by analyses of existing site data, surface geophysics, geological mapping, trenching, water sampling, a series of borehole investigations, and modeling/analyses. Based on the results of the field investigations, we plan to develop a systematic methodology for the hydrologic characterization of faults. A classification method that correlates combinations of geological features (rock type, fault displacement, fault type, position in a fault zone, fracture zone width, damage zone width) with the widths of high-permeability zones around a fault zone was proposed through a survey of available documents from site characterization programs. The field investigation started in 2008, selecting as the target the Wildcat Fault, which cuts across the Lawrence Berkeley National Laboratory (LBNL) site. Analyses of site-specific data, surface geophysics, geological mapping, and trenching have confirmed the approximate location and characteristics of the fault (see Session H48, Onishi, et al.). The plan for the remaining years includes borehole investigations at LBNL and another series of investigations in the northern part of the Wildcat Fault.
Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence highly conflicts, it may produce a counterintuitive result. To address the issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
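The paper's reliability-weighted averaging step is not reproduced here, but the underlying combination operation, Dempster's rule, can be sketched as follows; the mass assignments are invented.

```python
# Dempster's rule of combination over frozenset focal elements.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (mass functions)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# Two sensor reports over fault hypotheses F1, F2 (numbers illustrative).
m1 = {frozenset({"F1"}): 0.7, frozenset({"F1", "F2"}): 0.3}
m2 = {frozenset({"F2"}): 0.6, frozenset({"F1", "F2"}): 0.4}
print(dempster_combine(m1, m2))
```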
Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing
2006-09-01
Faults, while undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant techniques for spacecraft FPGA-based numerical computing.
Analytical Approaches to Guide SLS Fault Management (FM) Development
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.
2012-01-01
Extensive analysis is needed to determine the right set of FM capabilities to provide the most coverage without significantly increasing the cost, false-positive/false-negative rates (FP/FN), or complexity of the overall vehicle systems. Strong collaboration with the stakeholders is required to support the determination of the best triggers and response options. The SLS Fault Management process has been documented in the Space Launch System Program (SLSP) Fault Management Plan (SLS-PLAN-085).
A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gossman, W. E.
1986-01-01
A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
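The kind of Markov unreliability calculation involved can be illustrated on a toy two-unit system with an absorbing failure state; the following sketch assumes SciPy, with illustrative failure and repair rates, and does not reproduce the paper's hierarchical decomposition.

```python
import numpy as np
from scipy.linalg import expm

# 3-state Markov reliability sketch: two redundant units with failure rate
# lam each and repair rate mu; state 2 (both failed) is system failure.
lam, mu, t = 1e-3, 1e-1, 1000.0                 # per-hour rates, mission hours
Q = np.array([[-2 * lam,        2 * lam,  0.0],
              [      mu, -(mu + lam),     lam],
              [     0.0,            0.0,  0.0]])  # absorbing failed state
p0 = np.array([1.0, 0.0, 0.0])                  # start with both units up
p_t = p0 @ expm(Q * t)                          # transient state probabilities
print("unreliability at t:", p_t[2])
```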
Toward Building a New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.
2015-12-01
At present, the only publicly available seismic hazard model for mainland China was generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to the present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to the locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest are placed into the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPEs) by comparison with observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonic domains. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplification, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and for business and land-use planning.
Model-based diagnostics for Space Station Freedom
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Martin, Eric R.; Lerutte, Marcel G.
1991-01-01
An innovative approach to fault management was recently demonstrated for the NASA LeRC Space Station Freedom (SSF) power system testbed. This project capitalized on research in model-based reasoning, which uses knowledge of a system's behavior to monitor its health. The fault management system (FMS) can isolate failures online, or in a post-analysis mode, and requires no knowledge of failure symptoms to perform its diagnostics. An in-house tool called MARPLE was used to develop and run the FMS. MARPLE's capabilities are similar to those available from commercial expert system shells, although MARPLE is designed to build model-based as opposed to rule-based systems. These capabilities include functions for capturing behavioral knowledge, a reasoning engine that implements a model-based technique known as constraint suspension, and a tool for quickly generating new user interfaces. The prototype produced by applying MARPLE to SSF not only demonstrated that model-based reasoning is a valuable diagnostic approach, but also suggested several new applications of MARPLE, including an integration and testing aid and a complement to state estimation.
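Constraint suspension can be conveyed with a toy model: each candidate component's constraint is suspended in turn, and the fault hypothesis survives only if the remaining constraints agree with the observations. The components and values below are invented.

```python
# Constraint-suspension sketch on a toy two-component chain:
# component A doubles its input, component B adds 3.
def consistent(suspended, x, obs_a, obs_b):
    """True if hypothesizing `suspended` as faulty explains the observations."""
    if suspended != "A" and 2 * x != obs_a:        # A's constraint still active
        return False
    a_val = obs_a if suspended == "A" else 2 * x   # propagate observed value on
    if suspended != "B" and a_val + 3 != obs_b:    # B's constraint still active
        return False
    return True

x, obs_a, obs_b = 5, 10, 99                        # B's output contradicts the model
suspects = [c for c in ("A", "B") if consistent(c, x, obs_a, obs_b)]
print("suspect components:", suspects)             # -> ['B']
```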
Earthquake sequence simulations with measured properties for JFAST core samples
NASA Astrophysics Data System (ADS)
Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro
2017-08-01
Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical models of earthquake sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: a and a-b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'.
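One plausible way to write the generalization (a reconstruction for illustration, not the authors' exact notation) is to make the direct-effect term quadratic in the logarithmic slip rate:

```latex
\mu = \mu_* + a\,\ln\frac{V}{V_*} + b\,\ln\frac{V_*\,\theta}{D_c}
\quad\longrightarrow\quad
\mu = \mu_* + a_1\,\ln\frac{V}{V_*} + a_2\left(\ln\frac{V}{V_*}\right)^{2} + b\,\ln\frac{V_*\,\theta}{D_c}
```

so that the effective a and a-b values vary with slip rate, consistent with the complex rate dependences reported for the JFAST friction experiments.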
Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Oostdyk, Rebecca
2010-01-01
The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.
NASA Technical Reports Server (NTRS)
Truong, Long V.; Walters, Jerry L.; Roth, Mary Ellen; Quinn, Todd M.; Krawczonek, Walter M.
1990-01-01
The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control to the Space Station Freedom Electrical Power System (SSF/EPS) testbed being developed and demonstrated at NASA Lewis Research Center. The objectives of the program are to establish artificial intelligence technology paths, to craft knowledge-based tools with advanced human-operator interfaces for power systems, and to interface and integrate knowledge-based systems with conventional controllers. The Autonomous Power EXpert (APEX) portion of the APS program will integrate a knowledge-based fault diagnostic system and a power resource planner-scheduler. Then APEX will interface on-line with the SSF/EPS testbed and its Power Management Controller (PMC). The key tasks include establishing knowledge bases for system diagnostics, fault detection and isolation analysis, on-line information access through the PMC, enhanced data management, and multiple-level, object-oriented operator displays. The first prototype of the diagnostic expert system for fault detection and isolation has been developed. The knowledge bases and the rule-based model that were developed for the Power Distribution Control Unit subsystem of the SSF/EPS testbed are described. A corresponding troubleshooting technique is also described.
NASA Astrophysics Data System (ADS)
Barba, M.; Rains, C.; von Dassow, W.; Parker, J. W.; Glasscoe, M. T.
2013-12-01
Knowing the location and behavior of active faults is essential for earthquake hazard assessment and disaster response. In Interferometric Synthetic Aperture Radar (InSAR) images, faults are revealed as linear discontinuities. Currently, interferograms are manually inspected to locate faults. During the summer of 2013, the NASA-JPL DEVELOP California Disasters team contributed to the development of a method to expedite fault detection in California using remote-sensing technology. The team utilized InSAR images created from polarimetric L-band data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) project. A computer-vision technique known as 'edge-detection' was used to automate the fault-identification process. We tested and refined an edge-detection algorithm under development through NASA's Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) project. To optimize the algorithm we used both UAVSAR interferograms and synthetic interferograms generated through Disloc, a web-based modeling program available through NASA's QuakeSim project. The edge-detection algorithm detected seismic, aseismic, and co-seismic slip along faults that were identified and compared with databases of known fault systems. Our optimization process was the first step toward integration of the edge-detection code into E-DECIDER to provide decision support for earthquake preparation and disaster management. E-DECIDER partners that will use the edge-detection code include the California Earthquake Clearinghouse and the US Department of Homeland Security through delivery of products using the Unified Incident Command and Decision Support (UICDS) service. Through these partnerships, researchers, earthquake disaster response teams, and policy-makers will be able to use this new methodology to examine the details of ground and fault motions for moderate to large earthquakes. Following an earthquake, the newly discovered faults can be paired with infrastructure overlays, allowing emergency response teams to identify sites that may have been exposed to damage. The faults will also be incorporated into a database for future integration into fault models and earthquake simulations, improving future earthquake hazard assessment. As new faults are mapped, they will further understanding of the complex fault systems and earthquake hazards within the seismically dynamic state of California.
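The edge-detection step can be sketched on a synthetic interferogram; the following assumes scikit-image's Canny detector, and the data and parameters are illustrative rather than the E-DECIDER algorithm itself.

```python
# Minimal sketch of fault detection as edge detection on an unwrapped
# interferogram (scikit-image assumed; sigma and data are illustrative).
import numpy as np
from skimage import feature

# Synthetic interferogram: smooth deformation ramp plus a sharp fault step.
y, x = np.mgrid[0:256, 0:256]
phase = 0.01 * x + 0.5 * (x > 128)            # discontinuity at column 128
phase += np.random.normal(0, 0.02, phase.shape)

edges = feature.canny(phase, sigma=3.0)       # linear discontinuities -> True
rows, cols = np.nonzero(edges)
print(f"{edges.sum()} edge pixels, concentrated near column {np.median(cols):.0f}")
```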
The Development of NASA's Fault Management Handbook
NASA Technical Reports Server (NTRS)
Fesq, Lorraine
2011-01-01
A disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns. Progress is being made on a number of fronts outside of the Handbook effort: (1) processes, practices, and tools being developed at some Centers and institutions; (2) management recognition - Constellation FM roles, Discovery/New Frontiers mission reviews; (3) potential technology solutions - new approaches could avoid many current pitfalls: (3a) new FM architectures, including a model-based approach integrated with NASA's MBSE efforts; (3b) NASA's Office of the Chief Technologist - FM identified in seven of NASA's 14 Space Technology Roadmaps, an opportunity to coalesce and establish a thrust area to progressively develop new FM techniques. The FM Handbook will help ensure that future missions do not encounter the same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start.
Toward a Model-Based Approach to Flight System Fault Protection
NASA Technical Reports Server (NTRS)
Day, John; Murray, Alex; Meakin, Peter
2012-01-01
Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design enables the establishment of formal relationships, which has great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.
Advanced building energy management system demonstration for Department of Defense buildings.
O'Neill, Zheng; Bailey, Trevor; Dong, Bing; Shashanka, Madhusudana; Luo, Dong
2013-08-01
This paper presents an advanced building energy management system (aBEMS) that employs advanced methods of whole-building performance monitoring combined with statistical methods of learning and data analysis to enable identification of both gradual and discrete performance erosion and faults. This system assimilated data collected from multiple sources, including blueprints, reduced-order models (ROM) and measurements, and employed advanced statistical learning algorithms to identify patterns of anomalies. The results were presented graphically in a manner understandable to facilities managers. A demonstration of aBEMS was conducted in buildings at Naval Station Great Lakes. The facility building management systems were extended to incorporate the energy diagnostics and analysis algorithms, producing systematic identification of more efficient operation strategies. At Naval Station Great Lakes, greater than 20% savings were demonstrated for building energy consumption by improving facility manager decision support to diagnose energy faults and prioritize alternative, energy-efficient operation strategies. The paper concludes with recommendations for widespread aBEMS success.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, M.; Grimshaw, A.
1996-12-31
The Legion project at the University of Virginia is an architecture for designing and building system services that provide the illusion of a single virtual machine to users, a virtual machine that provides secure shared object and shared name spaces, application-adjustable fault tolerance, improved response time, and greater throughput. Legion targets wide-area assemblies of workstations, supercomputers, and parallel supercomputers; the system tackles problems not solved by existing workstation-based parallel processing tools, enabling fault tolerance, wide-area parallel processing, interoperability, heterogeneity, a single global name space, protection, security, efficient scheduling, and comprehensive resource management. This paper describes the core Legion object model, which specifies the composition and functionality of Legion's core objects - those objects that cooperate to create, locate, manage, and remove objects in the Legion system. The object model facilitates a flexible, extensible implementation, provides a single global name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in building an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For faults that are not covered by these data, especially small-scale faults or faults approximately parallel with the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, the fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas not only makes fault-model construction efficient, but also reduces manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures regardless of whether the available geological data are sufficient. A concrete example from Tangshan, China, shows that the method can be applied to broad and complex geological areas.
Resilience by Design: Bringing Science to Policy Makers
Jones, Lucile M.
2015-01-01
No one questions that Los Angeles has an earthquake problem. The “Big Bend” of the San Andreas fault in southern California complicates the plate boundary between the North American and Pacific plates, creating a convergent component to the primarily transform boundary. The Southern California Earthquake Center Community Fault Model has over 150 fault segments, each capable of generating a damaging earthquake, in an area with more than 23 million residents (Fig. 1). A Federal Emergency Management Agency (FEMA) analysis of the expected losses from all future earthquakes in the National Seismic Hazard Maps (Petersen et al., 2014) predicts an annual average of more than $3 billion per year in the eight counties of southern California, with half of those losses in Los Angeles County alone (Federal Emergency Management Agency [FEMA], 2008). According to Swiss Re, one of the world’s largest reinsurance companies, Los Angeles faces one of the greatest risks of catastrophic losses from earthquakes of any city in the world, eclipsed only by Tokyo, Jakarta, and Manila (Swiss Re, 2013).
Methods for Probabilistic Fault Diagnosis: An Electrical Power System Case Study
NASA Technical Reports Server (NTRS)
Ricks, Brian W.; Mengshoel, Ole J.
2009-01-01
Health management systems that more accurately and quickly diagnose faults that may occur in different technical systems on-board a vehicle will play a key role in the success of future NASA missions. We discuss in this paper the diagnosis of abrupt continuous (or parametric) faults within the context of probabilistic graphical models, more specifically Bayesian networks that are compiled to arithmetic circuits. This paper extends our previous research, within the same probabilistic setting, on diagnosis of abrupt discrete faults. Our approach and diagnostic algorithm ProDiagnose are domain-independent; however we use an electrical power system testbed called ADAPT as a case study. In one set of ADAPT experiments, performed as part of the 2009 Diagnostic Challenge, our system turned out to have the best performance among all competitors. In a second set of experiments, we show how we have recently further significantly improved the performance of the probabilistic model of ADAPT. While these experiments are obtained for an electrical power system testbed, we believe they can easily be transitioned to real-world systems, thus promising to increase the success of future NASA missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
2015-12-31
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
Automated fault-management in a simulated spaceflight micro-world
NASA Technical Reports Server (NTRS)
Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja
2002-01-01
BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examined the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) via a computerized fault-finding guide, at a medium LOA via an automated diagnosis and recovery advisory, and at a high LOA via automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification times, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at the high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources by automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic capability for the modeled system. An FFM simulates the failure-effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system, including its design, operation, and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and its impact discussed.
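The propagation-path queries an FFM supports amount to reachability in a directed graph; the sketch below assumes networkx, with invented failure modes and an invented sensor node.

```python
# Failure-effect propagation as graph reachability (networkx assumed;
# the failure modes, effects, and sensor name are invented).
import networkx as nx

ffm = nx.DiGraph()
ffm.add_edges_from([
    ("valve_stuck_closed", "low_fuel_flow"),
    ("pump_degraded", "low_fuel_flow"),
    ("low_fuel_flow", "low_chamber_pressure"),
    ("low_chamber_pressure", "PT-101_low"),   # observation point (sensor)
])

# All effects reachable from a given failure mode:
print(nx.descendants(ffm, "valve_stuck_closed"))

# Conversely, the ambiguity group behind an observed symptom:
print(nx.ancestors(ffm, "PT-101_low"))
```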
A System for Fault Management for NASA's Deep Space Habitat
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.
2013-01-01
NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binh T. Pham; Nancy J. Lybeck; Vivek Agarwal
The Light Water Reactor Sustainability program at Idaho National Laboratory is actively conducting research to develop and demonstrate online monitoring capabilities for active components in existing nuclear power plants. Idaho National Laboratory and the Electric Power Research Institute are working jointly to implement a pilot project to apply these capabilities to emergency diesel generators and generator step-up transformers. The Electric Power Research Institute Fleet-Wide Prognostic and Health Management Software Suite will be used to implement monitoring in conjunction with utility partners: Braidwood Generating Station (owned by Exelon Corporation) for emergency diesel generators, and Shearon Harris Nuclear Generating Station (owned by Duke Energy Progress) for generator step-up transformers. This report presents monitoring techniques, fault signatures, and diagnostic and prognostic models for emergency diesel generators. Emergency diesel generators provide backup power to the nuclear power plant, allowing operation of essential equipment such as pumps in the emergency core coolant system during catastrophic events, including loss of offsite power. Technical experts from Braidwood are assisting Idaho National Laboratory and Electric Power Research Institute in identifying critical faults and defining fault signatures associated with each fault. The resulting diagnostic models will be implemented in the Fleet-Wide Prognostic and Health Management Software Suite and tested using data from Braidwood. Parallel research on generator step-up transformers was summarized in an interim report during the fourth quarter of fiscal year 2012.
Online Monitoring of Induction Motors
DOE Office of Scientific and Technical Information (OSTI.GOV)
McJunkin, Timothy R.; Agarwal, Vivek; Lybeck, Nancy Jean
2016-01-01
The online monitoring of active components project, under the Advanced Instrumentation, Information, and Control Technologies Pathway of the Light Water Reactor Sustainability Program, researched diagnostic and prognostic models for alternating-current induction motors (IMs). Idaho National Laboratory (INL) worked with the Electric Power Research Institute (EPRI) to augment and revise the fault signatures previously implemented in the Asset Fault Signature Database of EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Induction motor diagnostic models were researched using the experimental data collected by Idaho State University. Prognostic models were explored in the literature and through a limited experiment with a 40 HP motor for the Remaining Useful Life Database of the FW-PHM Suite.
A Mode-Shape-Based Fault Detection Methodology for Cantilever Beams
NASA Technical Reports Server (NTRS)
Tejada, Arturo
2009-01-01
An important goal of NASA's Integrated Vehicle Health Management (IVHM) program is to develop and verify methods and technologies for fault detection in critical airframe structures. A particularly promising new technology under development at NASA Langley Research Center is distributed Bragg fiber optic strain sensors. These sensors can be embedded in, for instance, aircraft wings to continuously monitor surface strain during flight. Strain information can then be used in conjunction with well-known vibrational techniques to detect faults due to changes in the wing's physical parameters or to the presence of incipient cracks. To verify the benefits of this technology, the Formal Methods Group at NASA LaRC has proposed the use of formal verification tools such as PVS. The verification process, however, requires knowledge of the physics and mathematics of the vibrational techniques and a clear understanding of the particular fault detection methodology. This report presents a succinct review of the physical principles behind the modeling of vibrating structures such as cantilever beams (the natural model of a wing). It also reviews two different classes of fault detection techniques and proposes a particular detection method for cracks in wings, which is amenable to formal verification. A prototype implementation of these methods using Matlab scripts is also described and related to the fundamental theoretical concepts.
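The vibrational principle involved is compact enough to state numerically: for a uniform Euler-Bernoulli cantilever the natural frequencies follow from standard mode constants, and a crack-induced stiffness loss shifts them downward. The beam properties below are illustrative.

```python
import numpy as np

def cantilever_frequencies(E, I, rho, A, L, n_modes=3):
    """Natural frequencies (Hz) of a uniform Euler-Bernoulli cantilever."""
    betaL = np.array([1.8751, 4.6941, 7.8548])[:n_modes]  # mode constants
    return (betaL / L) ** 2 * np.sqrt(E * I / (rho * A)) / (2 * np.pi)

# Healthy vs. "damaged" beam (flexural stiffness EI reduced 10% by a crack):
# the downward shift in natural frequencies is the fault signature.
args = dict(E=70e9, I=8.3e-9, rho=2700.0, A=1e-4, L=0.5)  # aluminium strip
f_healthy = cantilever_frequencies(**args)
f_damaged = cantilever_frequencies(**{**args, "E": 0.9 * 70e9})
print("healthy:", f_healthy)
print("damaged:", f_damaged)
```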
Spatial modeling for estimation of earthquakes economic loss in West Java
NASA Astrophysics Data System (ADS)
Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma
2017-07-01
Indonesia has a high vulnerability to earthquakes. Its low adaptive capacity can turn an earthquake into a disaster of serious concern, which is why risk management should be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and the Megathrust subduction zone. This research estimates the economic loss from several earthquake sources in West Java. The economic loss is calculated using the HAZUS method. The components that must be known are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to grasp by showing a distribution map rather than only tabular data. As a result, West Java could suffer an economic loss of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703 IDR, estimated from six earthquake sources at their maximum possible magnitudes. Note, however, that this estimate reflects a worst-case earthquake occurrence and is probably over-estimated.
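The HAZUS-style loss step combines damage-state probabilities from fragility curves, repair-cost ratios, and replacement value; the sketch below shows that arithmetic with invented numbers, not West Java inventory data.

```python
# Schematic HAZUS-style expected-loss computation for one building class in
# one grid cell (all numbers invented for illustration).
damage_states = ["slight", "moderate", "extensive", "complete"]
p_damage = [0.30, 0.25, 0.10, 0.05]       # P(state | ground shaking), from fragility
cost_ratio = [0.02, 0.10, 0.50, 1.00]     # repair cost / replacement value
replacement_value = 5.0e11                # IDR

expected_loss = replacement_value * sum(p * c for p, c in zip(p_damage, cost_ratio))
print(f"expected loss: {expected_loss:,.0f} IDR")
```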
Model Transformation for a System of Systems Dependability Safety Case
NASA Technical Reports Server (NTRS)
Murphy, Judy; Driskell, Steve
2011-01-01
The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.
NASA Technical Reports Server (NTRS)
Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily
2009-01-01
Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
NASA Astrophysics Data System (ADS)
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry, and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and the complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries, and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set-based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to those of the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information, and partially observed fault surfaces. We show that the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.
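A flavor of the sampling machinery can be given for a simplified, unmarked Strauss process with a fixed number of points (a stand-in for the paper's marked version); all parameters below are illustrative.

```python
# Metropolis-Hastings move sampler for an unmarked Strauss process on the
# unit square. Density: f(x) ~ gamma ** s_r(x), where s_r counts point pairs
# closer than r; gamma < 1 expresses inhibition between nearby features.
import numpy as np

rng = np.random.default_rng(0)

def close_pairs(pts, r):
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return np.triu(d < r, k=1).sum()          # unordered pairs within r

def strauss_mh(n=30, gamma=0.3, r=0.1, n_iter=5000):
    pts = rng.random((n, 2))
    s = close_pairs(pts, r)
    for _ in range(n_iter):
        i = rng.integers(n)
        proposal = pts.copy()
        proposal[i] = rng.random(2)           # relocate one point uniformly
        s_new = close_pairs(proposal, r)
        if rng.random() < gamma ** (s_new - s):   # MH acceptance ratio
            pts, s = proposal, s_new
    return pts

print(strauss_mh()[:5])                       # first few sampled locations
```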
Real-time automated failure identification in the Control Center Complex (CCC)
NASA Technical Reports Server (NTRS)
Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James
1993-01-01
A system which will provide real-time failure management support to the Space Station Freedom program is described. The system's use of a simplified form of model-based reasoning qualifies it as an advanced automation system. However, it differs from most such systems in that it was designed from the outset to meet two sets of requirements. First, it must provide a useful increment to the fault management capabilities of the Johnson Space Center (JSC) Control Center Complex (CCC) Fault Detection Management system. Second, it must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, validation, etc. The need to meet both requirement sets presents a much greater design challenge than would have been the case had functionality been the sole design consideration. The choice of technology is overviewed, with discussion of aspects of that choice and the process for migrating it into the control center.
Implementation of Integrated System Fault Management Capability
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark
2008-01-01
Fault management to support the rocket engine test mission with highly reliable and accurate measurements, while improving availability and lifecycle costs. Core elements: an architecture, taxonomy, and ontology (ATO) for DIaK management; intelligent sensor processes; intelligent element processes; intelligent controllers; intelligent subsystem processes; intelligent system processes; and intelligent component processes.
Health Management Applications for International Space Station
NASA Technical Reports Server (NTRS)
Alena, Richard; Duncavage, Dan
2005-01-01
Traditional mission and vehicle management involves teams of highly trained specialists monitoring vehicle status and crew activities, responding rapidly to any anomalies encountered during operations. These teams work from the Mission Control Center and have access to engineering support teams with specialized expertise in International Space Station (ISS) subsystems. Integrated System Health Management (ISHM) applications can significantly augment these capabilities by providing enhanced monitoring, prognostic and diagnostic tools for critical decision support and mission management. The Intelligent Systems Division of NASA Ames Research Center is developing many prototype applications using model-based reasoning, data mining and simulation, working with Mission Control through the ISHM Testbed and Prototypes Project. This paper will briefly describe information technology that supports current mission management practice, and will extend this to a vision for future mission control workflow incorporating new ISHM applications. It will describe ISHM applications currently under development at NASA and will define technical approaches for implementing our vision of future human exploration mission management incorporating artificial intelligence and distributed web service architectures using specific examples. Several prototypes are under development, each highlighting a different computational approach. The ISStrider application allows in-depth analysis of Caution and Warning (C&W) events by correlating real-time telemetry with the logical fault trees used to define off-nominal events. The application uses live telemetry data and the Livingstone diagnostic inference engine to display the specific parameters and fault trees that generated the C&W event, allowing a flight controller to identify the root cause of the event from thousands of possibilities by simply navigating animated fault tree models on their workstation. SimStation models the functional power flow for the ISS Electrical Power System and can predict power balance for nominal and off-nominal conditions. SimStation uses real-time telemetry data to keep detailed computational physics models synchronized with the actual ISS power system state. In the event of failure, the application can then rapidly diagnose root cause, predict future resource levels and even correlate technical documents relevant to the specific failure. These advanced computational models will allow better insight and more precise control of ISS subsystems, increasing safety margins by speeding up anomaly resolution and reducing engineering team effort and cost. This technology will make operating ISS more efficient and is directly applicable to next-generation exploration missions and Crew Exploration Vehicles.
Survivable algorithms and redundancy management in NASA's distributed computing systems
NASA Technical Reports Server (NTRS)
Malek, Miroslaw
1992-01-01
The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.
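A minimal sketch of the majority voting that underlies consensus-based redundancy management follows; the framework described in the abstract (hierarchical partitioning, scheduling, hybrid algorithms) is far more general.

```python
# Majority-vote consensus among redundant replicas; a toy stand-in for the
# consensus-based design framework the abstract describes.
from collections import Counter

def vote(replica_outputs):
    """Return the majority value, or None when no value has a strict majority."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    return value if count > len(replica_outputs) // 2 else None

print(vote([42, 42, 17]))   # 42 -- one faulty replica is outvoted
print(vote([42, 17, 99]))   # None -- no consensus, escalate to recovery
```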
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonrexa, K.; Aziz, A.; Solomon, G.J.
1995-10-01
The Dulang field, discovered in 1981, is a major oil field located offshore Malaysia in the Malay Basin. The Dulang Unit Area constitutes the central part of this exceedingly heterogeneous field. The Unit Area consists of 19 stacked shaly sandstone reservoirs which are divided into about 90 compartments with multiple fluid contacts owing to severe faulting. Current estimates put the Original-Oil-In-Place (OOIP) in the neighborhood of 700 million stock tank barrels (MMSTB). Production commenced in March 1991 and the current production is more than 50,000 barrels of oil per day (BOPD). In addition to other more conventional means, reservoir simulation has been employed from the very start as a vital component of the overall strategy to develop and manage this challenging field. More than 10 modeling studies have been completed by Petronas Carigali Sdn. Bhd. (Carigali) at various times during the short life of this field thus far. To add to that, Esso Production Malaysia Inc. (EPMI) has simultaneously conducted a number of independent studies. These studies have dealt with undersaturated compartments as well as those with small and large gas caps. They have paved the way for improved reservoir characterization, optimum development planning and prudent production practices. This paper discusses the modeling approaches and highlights the crucial role these studies have played on an ongoing basis in the development and management of the complexly-faulted, multi-reservoir Dulang Unit Area.
2009-09-01
...this information supports the decision-making process as it is applied to the management of risk... Operational risk is the threat... However, to make a software system fault tolerant, the system needs to recognize and fix a system state condition. To detect a fault, a fault...
A fuzzy decision tree for fault classification.
Zio, Enrico; Baraldi, Piero; Popescu, Irina C
2008-02-01
In plant accident management, control room operators are required to identify the causes of the accident based on the different patterns of evolution developing in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
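To make the fuzzy if-then mechanism concrete, the sketch below evaluates a few invented rules over two invented process variables; in the paper the membership functions and rules are inferred by clustering preclassified signal data and arranged in a decision tree.

```python
# Minimal sketch of transparent fuzzy if-then fault classification.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(flow, temp):
    # Linguistic terms over two monitored process variables (illustrative).
    low_flow = tri(flow, 0.0, 0.2, 0.5)
    high_temp = tri(temp, 0.6, 0.8, 1.0)
    # Each rule's activation is the min (AND) of its antecedent memberships.
    rules = {
        "valve stuck closed": min(low_flow, high_temp),
        "sensor drift": min(1.0 - low_flow, high_temp),
        "nominal": min(1.0 - low_flow, 1.0 - high_temp),
    }
    return max(rules, key=rules.get), rules

label, activations = classify(flow=0.15, temp=0.85)
print(label, activations)   # rule activations stay inspectable, i.e. transparent
```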
Breaking down barriers in cooperative fault management: Temporal and functional information displays
NASA Technical Reports Server (NTRS)
Potter, Scott S.; Woods, David D.
1994-01-01
At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process referred to as the monitored process - MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP and diagnosis must be performed in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
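A minimal sketch of forward chaining over an object-oriented fault tree is given below, with invented events; the actual system additionally propagates temporal constraints and chains backward to likely basic failures.

```python
# Object-oriented fault tree with forward propagation from observed basic events.
class Event:
    def __init__(self, name, gate=None, children=()):
        self.name, self.gate, self.children = name, gate, list(children)

    def occurred(self, observed):
        """Forward chaining: does this event follow from the observed basic events?"""
        if not self.children:                      # basic event
            return self.name in observed
        states = [c.occurred(observed) for c in self.children]
        return all(states) if self.gate == "AND" else any(states)

pump_fail = Event("pump failure")
valve_fail = Event("valve failure")
no_flow = Event("loss of flow", "OR", [pump_fail, valve_fail])
no_cooling = Event("loss of cooling", "AND", [no_flow, Event("backup offline")])

observed = {"pump failure", "backup offline"}
print("top event occurred:", no_cooling.occurred(observed))   # True
```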
Monitoring and decision making by people in man machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.
1979-01-01
The analysis of human monitoring and decision making behavior as well as its modeling are described. Classical and optimal-control-theoretic monitoring models are surveyed. The relationship between attention allocation and eye movements is discussed. As an example application, the evaluation of predictor displays by means of the optimal control model is explained. Fault detection involving continuous signals and decision making behavior of a human operator engaged in fault diagnosis during different operation and maintenance situations are illustrated. Computer aided decision making is considered as a queueing problem. It is shown to what extent computer aids can be based on the state of human activity as measured by psychophysiological quantities. Finally, management information systems for different application areas are mentioned. The possibilities of mathematical modeling of human behavior in complex man machine systems are also critically assessed.
The Fault Block Model: A novel approach for faulted gas reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ursin, J.R.; Moerkeseth, P.O.
1994-12-31
The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. The model presented is efficient and, for highly faulted reservoirs, superior to other models such as grid-based simulators, because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.
Adaptive and technology-independent architecture for fault-tolerant distributed AAL solutions.
Schmidt, Michael; Obermaisser, Roman
2018-04-01
Today's architectures for Ambient Assisted Living (AAL) must cope with a variety of challenges like flawless sensor integration and time synchronization (e.g. for sensor data fusion) while abstracting from the underlying technologies at the same time. Furthermore, an architecture for AAL must be capable of managing distributed application scenarios in order to support elderly people in all situations of their everyday life. This encompasses not just life at home but in particular the mobility of elderly people (e.g. when going for a walk or doing sports) as well. Within this paper we introduce a novel architecture for distributed AAL solutions whose design follows a modern microservices approach by providing small core services instead of a monolithic application framework. The architecture comprises core services for sensor integration and service discovery while supporting several communication models (periodic, sporadic, streaming). We extend the state of the art by introducing a fault-tolerance model for our architecture on the basis of a fault hypothesis describing the fault-containment regions (FCRs) with their respective failure modes and failure rates in order to support safety-critical AAL applications.
Fault tolerant operation of switched reluctance machine
NASA Astrophysics Data System (ADS)
Wang, Wei
The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive systems (ASDS) provide excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency hydraulic and mechanical auxiliary systems. With the vast penetration of ASDS, fault tolerant operation capability is increasingly recognized as an important feature of drive performance, especially for aerospace, automotive, and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity, and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for fast and reliable fault diagnosis. In order to improve SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and experiments. With the proposed optimal waveform, torque production is greatly improved under the same Root Mean Square (RMS) current constraint. Additionally, position sensorless operation methods under phase faults are investigated to account for combinations of physical position sensor and phase winding faults. A comprehensive solution for position sensorless operation under single and multiple phase faults is proposed and validated through experiments. Continuous position sensorless operation with seamless transition between various numbers of faulted phases is achieved.
Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2003-10-01
A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by a relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high permeability damaged zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show that, in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured weak fault. Since pore pressure plays a key role in the fault behaviour, we investigate coseismic hydraulic property changes. In the full 3-D model, pore pressures vary instantaneously by the poroelastic effect during the propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no in-plane flow case becomes a secondary effect when the other spatial dimensions are considered, because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic levels, forming persistent high-pressure fluid compartments. If the observed faults are not all overpressured and weak, other mechanisms, not included in this model, must be at work in nature and need to be investigated. Significant leakage perpendicular to the fault strike (in the case of a young fault), or cracks hydraulically linking the fault core to the damaged zone (for a mature fault), are probable mechanisms for keeping the faults strong and might play a significant role in modulating fault pore pressures. Therefore, fault-normal hydraulic properties of fault zones should be a future focus of field and numerical experiments.
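A zero-dimensional toy version of the compaction-pressurization feedback can be integrated in a few lines; all parameter values below are illustrative assumptions, not the paper's, and the real model is 3-D and coupled to an earthquake simulator.

```python
# Toy 0-D sketch: viscous compaction of a sealed fault core closes porosity,
# pressurizing the pore fluid toward lithostatic, with a small diffusive leak.
dt, t_end = 1e8, 3e11            # time step and duration (s), ~10^4 years total
phi, p = 0.05, 60e6              # porosity, pore pressure (Pa)
p_lith, p_hydro = 100e6, 40e6    # lithostatic and hydrostatic pressure (Pa)
eta = 1e19                       # bulk viscosity of the fault core (Pa s)
beta_f = 1e-9                    # fluid compressibility (1/Pa)
k_leak = 1e-13                   # leakage rate constant (1/s)

t = 0.0
while t < t_end:
    sigma_eff = max(p_lith - p, 0.0)        # effective stress drives compaction
    dphi = -phi * sigma_eff / eta * dt      # viscous porosity loss
    dp = -dphi / (phi * beta_f)             # lost pore volume pressurizes fluid
    dp -= k_leak * (p - p_hydro) * dt       # small diffusive leak toward hydrostatic
    phi, p, t = phi + dphi, p + dp, t + dt

print(f"final porosity {phi:.4f}, pore pressure {p/1e6:.1f} MPa "
      f"(lithostatic {p_lith/1e6:.0f} MPa)")   # ends near lithostatic: weak fault
```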
Lessons Learned in the Livingstone 2 on Earth Observing One Flight Experiment
NASA Technical Reports Server (NTRS)
Hayden, Sandra C.; Sweet, Adam J.; Shulman, Seth
2005-01-01
The Livingstone 2 (L2) model-based diagnosis software is a reusable diagnostic tool for monitoring complex systems. In 2004, L2 was integrated with the JPL Autonomous Sciencecraft Experiment (ASE) and deployed on-board Goddard's Earth Observing One (EO-1) remote sensing satellite, to monitor and diagnose the EO-1 space science instruments and imaging sequence. This paper reports on lessons learned from this flight experiment. The goals for this experiment, including validation of minimum success criteria and of a series of diagnostic scenarios, have all been successfully met. Long-term operations in space are on-going, as a test of the maturity of the system, with L2 performance remaining flawless. L2 has demonstrated the ability to track the state of the system during nominal operations, detect simulated abnormalities in operations and isolate failures to their root cause fault. Specific advances demonstrated include diagnosis of ambiguity groups rather than a single fault candidate; hypothesis revision given new sensor evidence about the state of the system; and the capability to check for faults in a dynamic system without having to wait until the system is quiescent. The major benefits of this advanced health management technology are to increase mission duration and reliability through intelligent fault protection, and robust autonomous operations with reduced dependency on supervisory operations from Earth. The workload for operators will be reduced by telemetry of processed state-of-health information rather than raw data. The long-term vision is that of making diagnosis available to the onboard planner or executive, allowing autonomy software to re-plan in order to work around known component failures. For a system that is expected to evolve substantially over its lifetime, as for the International Space Station, the model-based approach has definite advantages over rule-based expert systems and limit-checking fault protection systems, as these do not scale well. The model-based approach facilitates reuse of the L2 diagnostic software; only the model of the system to be diagnosed and telemetry monitoring software has to be rebuilt for a new system or expanded for a growing system. The hierarchical L2 model supports modularity and expandability, and as such is a suitable solution for integrated system health management as envisioned for systems-of-systems.
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.
2018-06-01
Induced seismicity typically arises from the progressive activation of recently inactive geological faults by anthropogenic activity. Faults are mechanically and geometrically heterogeneous, so their extremes of stress and strength govern the initial evolution of induced seismicity. We derive a statistical model of Coulomb stress failures and associated aftershocks within the tail of the distribution of fault stress and strength variations to show initial induced seismicity rates will increase as an exponential function of induced stress. Our model provides operational forecasts consistent with the observed space-time-magnitude distribution of earthquakes induced by gas production from the Groningen field in the Netherlands. These probabilistic forecasts also match the observed changes in seismicity following a significant and sustained decrease in gas production rates designed to reduce seismic hazard and risk. This forecast capability allows reliable assessment of alternative control options to better inform future induced seismic risk management decisions.
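The core functional form, an event rate growing exponentially with induced Coulomb stress, is easy to sketch; the parameter values below are illustrative, not the calibrated Groningen values.

```python
# Minimal sketch of an exponential stress-to-seismicity-rate forecast.
import math

r0 = 0.1       # background event rate (events/yr) at zero induced stress
theta = 2.0    # stress sensitivity (1/MPa), assumed for illustration

def rate(delta_stress_mpa):
    """Induced event rate as an exponential function of induced stress."""
    return r0 * math.exp(theta * delta_stress_mpa)

# Because of the exponential, a modest stress reduction (e.g. from cutting
# production rates) lowers the forecast rate disproportionately.
for ds in (0.5, 1.0, 2.0):
    print(f"induced stress {ds} MPa -> {rate(ds):.2f} events/yr")
```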
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt fault diagnosis models to dynamic environments and fully meet the needs of solving the tasks of complex systems, this paper applies multi-agent technology to complicated fault diagnosis and studies an integrated intelligent control system. Based on the structure of diagnostic decision-making and hierarchy in modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented; the functions of the management agent, diagnosis agent, and decision agent are analyzed; the organization and evolution of agents in the system are proposed; and the corresponding conflict resolution algorithm is given. A layered structure of abstract agents with public attributes is built. The system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in the distributed domain.
Propulsion Health Monitoring for Enhanced Safety
NASA Technical Reports Server (NTRS)
Butz, Mark G.; Rodriguez, Hector M.
2003-01-01
This report presents the results of the NASA contract Propulsion System Health Management for Enhanced Safety performed by General Electric Aircraft Engines (GE AE), General Electric Global Research (GE GR), and Pennsylvania State University Applied Research Laboratory (PSU ARL) under the NASA Aviation Safety Program. This activity supports the overall goal of enhanced civil aviation safety through a reduction in the occurrence of safety-significant propulsion system malfunctions. Specific objectives are to develop and demonstrate vibration diagnostics techniques for the on-line detection of turbine rotor disk cracks, and model-based fault tolerant control techniques for the prevention and mitigation of in-flight engine shutdown, surge/stall, and flameout events. The disk crack detection work was performed by GE GR which focused on a radial-mode vibration monitoring technique, and PSU ARL which focused on a torsional-mode vibration monitoring technique. GE AE performed the Model-Based Fault Tolerant Control work which focused on the development of analytical techniques for detecting, isolating, and accommodating gas-path faults.
Analysis of typical fault-tolerant architectures using HARP
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl
1987-01-01
Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using the behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.
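The flavor of behavioral decomposition can be sketched with a tiny Markov model: a slow fault-occurrence model whose fault transitions are split by a coverage factor that, in HARP, would come from a separate fast fault/error-handling model. The duplex system and all rates below are invented for illustration.

```python
# Minimal sketch of behavioral decomposition for a duplex system.
lam = 1e-4    # per-hour failure rate of each of two units (assumed)
c = 0.95      # coverage: probability the fault/error-handling model recovers a fault
dt, hours = 0.1, 1000.0

p_2up, p_1up, p_fail = 1.0, 0.0, 0.0   # state probabilities
t = 0.0
while t < hours:
    f2 = 2 * lam * p_2up * dt          # a fault occurs in the duplex state
    f1 = lam * p_1up * dt              # a fault occurs in the simplex state
    p_2up -= f2
    p_1up += c * f2 - f1               # covered fault: degraded but still up
    p_fail += (1 - c) * f2 + f1        # uncovered fault or second fault: down
    t += dt

print(f"P(system failed by {hours:.0f} h) = {p_fail:.5f}")
```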
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system from comparison of the measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTMs) operating in parallel to test each class of faults. The second step is the fault diagnosis module which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
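The two-step structure can be sketched as parallel hypothesis tests feeding an isolation step; the residual thresholds and signals below are invented, whereas the paper estimates fault parameters with on-line parameter identification.

```python
# Minimal sketch of parallel hypothesis testing modules (HTMs) plus isolation.
def actuator_htm(cmd, response):
    return abs(cmd - response) > 0.2            # actuator not tracking its command

def sensor_htm(sensor, model_estimate):
    return abs(sensor - model_estimate) > 0.15  # measurement disagrees with model

def degradation_htm(efficiency):
    return efficiency < 0.9                     # performance parameter has drifted

def diagnose(cmd, response, sensor, model_estimate, efficiency):
    flags = {
        "actuator fault": actuator_htm(cmd, response),
        "sensor fault": sensor_htm(sensor, model_estimate),
        "performance degradation": degradation_htm(efficiency),
    }
    hits = [name for name, raised in flags.items() if raised]
    return hits or ["no fault"]

print(diagnose(cmd=1.0, response=0.7, sensor=0.71, model_estimate=0.70,
               efficiency=0.97))   # ['actuator fault']
```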
Space Station Freedom ECLSS: A step toward autonomous regenerative life support systems
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.
1990-01-01
The Environmental Control and Life Support System (ECLSS) is a Freedom Station distributed system with inherent applicability to extensive automation primarily due to its comparatively long control system latencies. These allow longer contemplation times in which to form a more intelligent control strategy and to prevent and diagnose faults. The regenerative nature of the Space Station Freedom ECLSS will contribute closed loop complexities never before encountered in life support systems. A study to determine ECLSS automation approaches has been completed. The ECLSS baseline software and system processes could be augmented with more advanced fault management and regenerative control systems for a more autonomous evolutionary system, as well as serving as a firm foundation for future regenerative life support systems. Emerging advanced software technology and tools can be successfully applied to fault management, but a fully automated life support system will require research and development of regenerative control systems and models. The baseline Environmental Control and Life Support System utilizes ground tests in development of batch chemical and microbial control processes. Long duration regenerative life support systems will require more active chemical and microbial feedback control systems which, in turn, will require advancements in regenerative life support models and tools. These models can be verified using ground and on orbit life support test and operational data, and used in the engineering analysis of proposed intelligent instrumentation feedback and flexible process control technologies for future autonomous regenerative life support systems, including the evolutionary Space Station Freedom ECLSS.
Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft
NASA Technical Reports Server (NTRS)
Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz
2008-01-01
In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.
Operations management system advanced automation: Fault detection isolation and recovery prototyping
NASA Technical Reports Server (NTRS)
Hanson, Matt
1990-01-01
The purpose of this project is to address the global fault detection, isolation, and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies in addition to traditional software methods to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.
Health management and controls for Earth-to-orbit propulsion systems
NASA Astrophysics Data System (ADS)
Bickford, R. L.
1995-03-01
Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
Earthquake Hazard and Risk in Alaska
NASA Astrophysics Data System (ADS)
Black Porto, N.; Nyst, M.
2014-12-01
Alaska is one of the most seismically active and tectonically diverse regions in the United States. To examine risk, we have updated the seismic hazard model in Alaska. The current RMS Alaska hazard model is based on the 2007 probabilistic seismic hazard maps for Alaska (Wesson et al., 2007; Boyd et al., 2007). The 2015 RMS model will update several key source parameters, including extending the earthquake catalog, implementing a new set of crustal faults, and updating the subduction zone geometry and recurrence rate. First, we extend the earthquake catalog to 2013, decluster the catalog, and compute new background rates. We then create a crustal fault model based on the Alaska 2012 fault and fold database. This new model increased the number of crustal faults from ten in 2007 to 91 in the 2015 model. This includes the addition of the western Denali fault, the Cook Inlet folds near Anchorage, and thrust faults near Fairbanks. Previously the subduction zone was modeled at a uniform depth. In this update, we model the intraslab as a series of deep stepping events. We also use the best available data, such as Slab 1.0, to update the geometry of the subduction zone. The city of Anchorage represents 80% of the risk exposure in Alaska. In the 2007 model, the hazard in Alaska was dominated by the frequent rate of magnitude 7 to 8 events (Gutenberg-Richter distribution), while large magnitude 8+ events had a low recurrence rate (characteristic) and therefore did not contribute as highly to the overall risk. We will review these recurrence rates, and will present the results and impact to Anchorage. We will compare our hazard update to the 2007 USGS hazard map, and discuss the changes and drivers for these changes. Finally, we will examine the impact model changes have on Alaska earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance: the Trans-Alaska pipeline, industrial facilities in Valdez, and typical residential wood buildings in Anchorage, Fairbanks and Juneau.
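The contrast between the two recurrence models the abstract mentions is simple to work through numerically; the a-value, b-value, and characteristic recurrence interval below are illustrative, not those of the Alaska model.

```python
# Gutenberg-Richter versus characteristic recurrence, with assumed parameters.
a, b = 4.5, 1.0                    # log10 N(>=M) = a - b*M (annual rates)

def gr_annual_rate(m_lo, m_hi):
    """Annual rate of events with magnitude in [m_lo, m_hi) under G-R."""
    return 10 ** (a - b * m_lo) - 10 ** (a - b * m_hi)

print(f"M7-8 G-R rate:  {gr_annual_rate(7, 8):.5f}/yr")   # frequent moderate events
print(f"M>=8 G-R rate:  {10 ** (a - b * 8):.5f}/yr")
char_m8_rate = 1 / 700.0                                   # assumed mean recurrence
print(f"M>=8 characteristic rate: {char_m8_rate:.5f}/yr")  # rare large events
```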
Earthquake Hazard and Risk in New Zealand
NASA Astrophysics Data System (ADS)
Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.
2014-12-01
To quantify risk in New Zealand we examine the impact of updating the seismic hazard model. The previous RMS New Zealand hazard model is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS model, based on Stirling et al. (2012), will update several key source parameters. These updates include: implementation of a new set of crustal faults including multi-segment ruptures, updating the subduction zone geometry and recurrence rate, and implementing new background rates and a robust methodology for modeling background earthquake sources. The number of crustal faults has increased by over 200 from the 2002 model to the 2012 model, which now includes over 500 individual fault sources. This includes the addition of many offshore faults in the northern, east-central, and southwest regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated model with those from the previous version. Changes between the two maps are discussed, as well as the drivers for these changes. We examine the impact the hazard model changes have on New Zealand earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the risk exposure in the country (Auckland) lies in the region of lowest hazard, where little is known about the location of faults and distributed seismicity is modeled by averaged Mw-frequency relationships on area sources. Thus small changes to the background rates can have a large impact on the risk profile for the area. Wellington, another area of high exposure, is particularly sensitive to how the Hikurangi subduction zone and the Wellington fault are modeled. Minor changes to these sources have substantial impacts on the risk profile of the city and the country at large.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes throughout the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results exhibit that the model can give a better fitting and predictive performance.
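A toy NHPP mean value function with these ingredients can be integrated numerically; the specific coverage curve, functional forms, and parameter values below are assumptions for illustration, not the paper's model.

```python
# Toy NHPP SRGM: coverage-driven detection, imperfect removal, fault introduction.
import math

a0 = 100.0    # initial fault content (assumed)
p = 0.9       # fault removal efficiency: fraction of detected faults removed
alpha = 0.05  # new faults introduced per removed fault
b = 0.05      # coverage growth rate per test hour; c(t) = 1 - exp(-b*t)

def mean_failures(t, dt=0.01):
    """Euler integration of m'(s) = h(s) * remaining fault content, where the
    detection hazard h(s) = c'(s)/(1 - c(s)) equals b for this coverage curve."""
    m, s = 0.0, 0.0
    while s < t:
        removed = p * m                           # imperfect removal of detections
        content = a0 - removed + alpha * removed  # survivors plus reintroductions
        m += b * content * dt
        s += dt
    return m

for t in (10, 50, 100):
    print(f"m({t:3d}) = {mean_failures(t):6.1f} expected cumulative failures")
```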
ARGES: an Expert System for Fault Diagnosis Within Space-Based ECLS Systems
NASA Technical Reports Server (NTRS)
Pachura, David W.; Suleiman, Salem A.; Mendler, Andrew P.
1988-01-01
ARGES (Atmospheric Revitalization Group Expert System) is a demonstration prototype expert system for fault management for the Solid Amine, Water Desorbed (SAWD) CO2 removal assembly, associated with the Environmental Control and Life Support (ECLS) System. ARGES monitors and reduces data in real time from either the SAWD controller or a simulation of the SAWD assembly. It can detect gradual degradations or predict failures. This allows graceful shutdown and scheduled maintenance, which reduces crew maintenance overhead. Status and fault information is presented in a user interface that simulates what would be seen by a crewperson. The user interface employs animated color graphics and an object oriented approach to provide detailed status information, fault identification, and explanation of reasoning in a rapidly assimilated manner. In addition, ARGES recommends possible courses of action for predicted and actual faults. ARGES is seen as a forerunner of AI-based fault management systems for manned space systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumsdaine, Andrew
2013-03-08
The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as jobs schedulers, resource managers, and monitoring tools.
Analytical concepts for health management systems of liquid rocket engines
NASA Technical Reports Server (NTRS)
Williams, Richard; Tulpule, Sharayu; Hawman, Michael
1990-01-01
Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a dual-redundancy technique, which is insufficient in some situations because two channels alone cannot adjudicate a disagreement, whereas adding hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis, and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
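The triplex isolation idea, two hardware channels plus an analytical model channel, can be sketched as a simple disagreement-pattern check; the tolerance and readings below are invented.

```python
# Minimal sketch of fault isolation among two sensor channels and a model channel.
def isolate(chan_a, chan_b, model, tol):
    d_ab = abs(chan_a - chan_b)
    d_am = abs(chan_a - model)
    d_bm = abs(chan_b - model)
    if max(d_ab, d_am, d_bm) <= tol:
        return "all channels agree"
    if d_am <= tol:
        return "channel B faulty"       # A agrees with the model; B is the outlier
    if d_bm <= tol:
        return "channel A faulty"
    if d_ab <= tol:
        return "model mismatch (engine change or model fault)"
    return "multiple faults: cannot isolate"

print(isolate(100.0, 100.2, 100.1, tol=0.5))   # all channels agree
print(isolate(100.0, 104.0, 100.1, tol=0.5))   # channel B faulty (step fault)
```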
Managing Fault Management Development
NASA Technical Reports Server (NTRS)
McDougal, John M.
2010-01-01
As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
Optimal Management of Redundant Control Authority for Fault Tolerance
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Ju, Jianhong
2000-01-01
This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and the operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies in the use of a reliability criterion for redundancy management, which leads to a high overall system reliability.
A Simplified Model for Multiphase Leakage through Faults with Applications for CO2 Storage
NASA Astrophysics Data System (ADS)
Watson, F. E.; Doster, F.
2017-12-01
In the context of geological CO2 storage, faults in the subsurface could affect storage security by acting as high permeability pathways which allow CO2 to flow upwards and away from the storage formation. To assess the likelihood of leakage through faults and the impacts faults might have on storage security numerical models are required. However, faults are complex geological features, usually consisting of a fault core surrounded by a highly fractured damage zone. A direct representation of these in a numerical model would require very fine grid resolution and would be computationally expensive. Here, we present the development of a reduced complexity model for fault flow using the vertically integrated formulation. This model captures the main features of the flow but does not require us to resolve the vertical dimension, nor the fault in the horizontal dimension, explicitly. It is thus less computationally expensive than full resolution models. Consequently, we can quickly model many realisations for parameter uncertainty studies of CO2 injection into faulted reservoirs. We develop the model based on explicitly simulating local 3D representations of faults for characteristic scenarios using the Matlab Reservoir Simulation Toolbox (MRST). We have assessed the impact of variables such as fault geometry, porosity and permeability on multiphase leakage rates.
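For scale, the single-phase Darcy flux that such a reduced-complexity fault model must reproduce can be estimated on the back of an envelope; every property value below is an illustrative assumption.

```python
# Back-of-envelope single-phase Darcy leakage up a permeable fault zone.
k = 1e-13            # effective fault permeability (m^2), damage-zone dominated
A = 1000.0 * 10.0    # fault cross-section: 1 km along-strike x 10 m wide (m^2)
mu = 5e-5            # CO2 viscosity at reservoir conditions (Pa s)
dp_dz = 2000.0       # overpressure gradient driving upward flow (Pa/m)

q = k * A * dp_dz / mu   # volumetric leakage rate (m^3/s)
print(f"leakage ~ {q:.3e} m^3/s = {q * 3.15e7:.0f} m^3/yr")
```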
A Cryogenic Fluid System Simulation in Support of Integrated Systems Health Management
NASA Technical Reports Server (NTRS)
Barber, John P.; Johnston, Kyle B.; Daigle, Matthew
2013-01-01
Simulations serve as important tools throughout the design and operation of engineering systems. In the context of systems health management, simulations serve many uses. For one, the underlying physical models can be used by model-based health management tools to develop diagnostic and prognostic models. These simulations should incorporate both nominal and faulty behavior with the ability to inject various faults into the system. Such simulations can therefore be used for operator training, for both nominal and faulty situations, as well as for developing and prototyping health management algorithms. In this paper, we describe a methodology for building such simulations. We discuss the design decisions and tools used to build a simulation of a cryogenic fluid test bed, and how it serves as a core technology for systems health management development and maturation.
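The fault-injection capability the paper describes can be sketched in miniature as below; the signal, fault types, and magnitudes are invented for illustration.

```python
# Minimal sketch of injecting faults into a simulated sensor stream.
import random

def simulate(n_steps, fault=None, onset=50):
    """Yield (t, reading) for a nominal signal with an optional injected fault."""
    for t in range(n_steps):
        truth = 20.0 + 0.01 * t                  # slowly rising tank pressure
        reading = truth + random.gauss(0, 0.05)  # nominal sensor noise
        if fault == "bias" and t >= onset:
            reading += 2.0                       # injected step bias fault
        elif fault == "stuck" and t >= onset:
            reading = 22.0                       # injected stuck-at fault
        yield t, reading

nominal = list(simulate(100))
faulty = list(simulate(100, fault="bias"))
print("nominal t=60:", round(nominal[60][1], 2),
      " faulty t=60:", round(faulty[60][1], 2))
```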
NASA Astrophysics Data System (ADS)
Zeng, Yajun; Skibniewski, Miroslaw J.
2013-08-01
Enterprise resource planning (ERP) system implementations are often characterised by large capital outlays, long implementation durations, and a high risk of failure. In order to avoid ERP implementation failure and realise the benefits of the system, sound risk management is the key. This paper proposes a probabilistic risk assessment approach for ERP system implementation projects based on fault tree analysis, which models the relationship between ERP system components and specific risk factors. Unlike traditional risk management approaches that have been mostly focused on meeting project budget and schedule objectives, the proposed approach intends to address the risks that may cause ERP system usage failure. The approach can be used to identify the root causes of ERP system implementation usage failure and quantify the impact of critical component failures or critical risk events in the implementation process.
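Quantifying such a tree reduces to combining basic-event probabilities through gates; the events, probabilities, and tree structure below are invented examples, not the paper's model.

```python
# Minimal fault tree quantification: basic risk events combine through OR/AND
# gates into a system-usage-failure top event (independence assumed).
def or_gate(*probs):
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    q = 1.0
    for p in probs:
        q *= p
    return q

p_data_migration_fail = 0.10
p_module_config_error = 0.05
p_training_inadequate = 0.20
p_vendor_support_lapse = 0.02

# Usage failure if a component fails OR (training fails AND support lapses).
p_component_layer = or_gate(p_data_migration_fail, p_module_config_error)
p_people_layer = and_gate(p_training_inadequate, p_vendor_support_lapse)
p_top = or_gate(p_component_layer, p_people_layer)
print(f"P(ERP usage failure) = {p_top:.3f}")
```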
Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vivek Agarwal; Nancy J. Lybeck; Randall Bickford
2014-06-01
Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on the development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features, based on technical examinations, that can be used to detect a specific fault type. At the most basic level, fault signatures are comprised of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
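At the level described (asset type, fault type, symptom features), a fault signature is a small structured record. A minimal Python sketch with a hypothetical signature and a naive matching score:

```python
from dataclasses import dataclass, field

@dataclass
class FaultSignature:
    """Schematic AFS-style record: asset type, fault type, symptom features."""
    asset_type: str
    fault_type: str
    features: set = field(default_factory=set)   # observable symptoms

    def match(self, observed: set) -> float:
        """Fraction of signature features present in the observed symptoms."""
        return len(self.features & observed) / len(self.features)

# Hypothetical signature for an emergency diesel generator fault.
sig = FaultSignature("emergency diesel generator",
                     "injector fouling",
                     {"exhaust temp high", "specific fuel consumption up"})
print(sig.match({"exhaust temp high", "vibration normal"}))  # 0.5
```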
Automatic translation of digraph to fault-tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.
1992-01-01
The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.
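The core of such a translation can be illustrated for the easy (acyclic) case: a node's fault tree is an OR of its own basic failure event and the trees of its predecessors. The Python sketch below uses made-up node names and shows only this skeleton; the minimal-cut-set computation the program uses to break digraph cycles is the hard part and is omitted here.

```python
# Toy translation of an *acyclic* failure-propagation digraph to a fault tree.
# An edge u -> v means failure of u can propagate to v, so node v fails via
# its own basic event OR via any failed predecessor.
def to_fault_tree(node, predecessors, visited=frozenset()):
    if node in visited:                 # naive cycle guard only
        return None
    inputs = [f"basic:{node}"]
    for p in predecessors.get(node, []):
        sub = to_fault_tree(p, predecessors, visited | {node})
        if sub is not None:
            inputs.append(sub)
    return ("OR", inputs)

digraph = {"pump_out_low": ["valve_stuck", "power_loss"],
           "valve_stuck": [], "power_loss": []}
print(to_fault_tree("pump_out_low", digraph))
```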
Integration of On-Line and Off-Line Diagnostic Algorithms for Aircraft Engine Health Management
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2007-01-01
This paper investigates the integration of on-line and off-line diagnostic algorithms for aircraft gas turbine engines. The on-line diagnostic algorithm is designed for in-flight fault detection. It continuously monitors engine outputs for anomalous signatures induced by faults. The off-line diagnostic algorithm is designed to track engine health degradation over the lifetime of an engine. It estimates engine health degradation periodically over the course of the engine's life. The estimate generated by the off-line algorithm is used to update the on-line algorithm. Through this integration, the on-line algorithm becomes aware of engine health degradation, and its effectiveness in detecting faults can be maintained while the engine continues to degrade. The benefit of this integration is investigated in a simulation environment using a nonlinear engine model.
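The integration described above amounts to letting the periodic off-line degradation estimate re-baseline the on-line residual test, so slow wear is not flagged as a fault. A minimal sketch, with illustrative names and thresholds rather than the paper's actual algorithm:

```python
class OnlineDetector:
    def __init__(self, threshold):
        self.baseline = 0.0            # expected output shift from degradation
        self.threshold = threshold

    def update_baseline(self, offline_estimate):
        """Called periodically with the off-line health-degradation estimate."""
        self.baseline = offline_estimate

    def check(self, measured, model_nominal):
        residual = measured - model_nominal - self.baseline
        return abs(residual) > self.threshold   # True => fault alarm

det = OnlineDetector(threshold=3.0)
det.update_baseline(1.8)               # off-line estimate of degradation shift
print(det.check(measured=104.5, model_nominal=100.0))  # residual 2.7 -> False
```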
Fitzenz, D.D.; Miller, S.A.
2004-01-01
Understanding the stress field surrounding and driving active fault systems is an important component of mechanistic seismic hazard assessment. We develop and present results from a time-forward three-dimensional (3-D) model of the San Andreas fault system near its Big Bend in southern California. The model boundary conditions are assessed by comparing model and observed tectonic regimes. The model of earthquake generation along two fault segments is used to target measurable properties (e.g., stress orientations, heat flow) that may allow inferences on the stress state on the faults. It is a quasi-static model, where GPS-constrained tectonic loading drives faults modeled as mostly sealed viscoelastic bodies embedded in an elastic half-space subjected to compaction and shear creep. A transpressive tectonic regime develops southwest of the model bend as a result of the tectonic loading and migrates toward the bend because of fault slip. The strength of the model faults is assessed on the basis of stress orientations, stress drop, and overpressures, showing a departure in the behavior of 3-D finite faults compared to models of 1-D or homogeneous infinite faults. At a smaller scale, stress transfers from fault slip transiently induce significant perturbations in the local stress tensors (where the slip profile is very heterogeneous). These stress rotations disappear when subsequent model earthquakes smooth the slip profile. Maps of maximum absolute shear stress emphasize that (1) future models should include a more continuous representation of the faults and (2) hydrostatically pressured intact rock is very difficult to break when no material weakness is considered. Copyright 2004 by the American Geophysical Union.
Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation
NASA Astrophysics Data System (ADS)
Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.
2016-12-01
Determining the distribution of slip and behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the necessary coverage to fully resolve fault behavior. However, realistic physical models may be used to more accurately characterize the complex behavior of faults constrained with observed data, such as GPS, InSAR, and SfM. These results will improve the utility of using combined models and data to estimate earthquake potential and characterize plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga, et al., 2000; Glasscoe, et al., 2004; Parker, et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation. Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.
Three-dimensional models of deformation near strike-slip faults
ten Brink, Uri S.; Katzman, Rafael; Lin, J.
1996-01-01
We use three-dimensional elastic models to help guide the kinematic interpretation of crustal deformation associated with strike-slip faults. Deformation of the brittle upper crust in the vicinity of strike-slip fault systems is modeled with the assumption that upper crustal deformation is driven by the relative plate motion in the upper mantle. The driving motion is represented by displacement that is specified on the bottom of a 15-km-thick elastic upper crust everywhere except in a zone of finite width in the vicinity of the faults, which we term the "shear zone." Stress-free basal boundary conditions are specified within the shear zone. The basal driving displacement is either pure strike slip or strike slip with a small oblique component, and the geometry of the fault system includes a single fault, several parallel faults, and overlapping en echelon faults. We examine the variations in deformation due to changes in the width of the shear zone and due to changes in the shear strength of the faults. In models with weak faults the width of the shear zone has a considerable effect on the surficial extent and amplitude of the vertical and horizontal deformation and on the amount of rotation around horizontal and vertical axes. Strong fault models have more localized deformation at the tip of the faults, and the deformation is partly distributed outside the fault zone. The dimensions of large basins along strike-slip faults, such as the Rukwa and Dead Sea basins, and the absence of uplift around pull-apart basins fit models with weak faults better than models with strong faults. Our models also suggest that the length-to-width ratio of pull-apart basins depends on the width of the shear zone and the shear strength of the faults and is not constant as previously suggested. We show that pure strike-slip motion can produce tectonic features, such as elongate half grabens along a single fault, rotated blocks at the ends of parallel faults, or extension perpendicular to overlapping en echelon faults, which can be misinterpreted to indicate a regional component of extension. Zones of subsidence or uplift can become wider than expected for transform plate boundaries when a minor component of oblique motion is added to a system of parallel strike-slip faults.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
NASA Technical Reports Server (NTRS)
Fitz, Rhonda; Whitman, Gerek
2016-01-01
Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IVV) Program, with Software Assurance Research Program support, extracted FM architectures across the IVV portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IVV projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappa, F.; Rutqvist, J.
2010-06-01
The interaction between mechanical deformation and fluid flow in fault zones gives rise to a host of coupled hydromechanical processes fundamental to fault instability, induced seismicity, and associated fluid migration. In this paper, we discuss these coupled processes in general and describe three modeling approaches that have been considered to analyze fluid flow and stress coupling in fault-instability processes. First, fault hydromechanical models were tested to investigate fault behavior using different mechanical modeling approaches, including slip interface and finite-thickness elements with isotropic or anisotropic elasto-plastic constitutive models. The results of this investigation showed that fault hydromechanical behavior can be appropriately represented with the least complex alternative, using a finite-thickness element and isotropic plasticity. We utilized this pragmatic approach coupled with a strain-permeability model to study hydromechanical effects on fault instability during deep underground injection of CO2. We demonstrated how such a modeling approach can be applied to determine the likelihood of fault reactivation and to estimate the associated loss of CO2 from the injection zone. It is shown that shear-enhanced permeability initiated where the fault intersects the injection zone plays an important role in propagating fault instability and permeability enhancement through the overlying caprock.
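As context for the strain-permeability coupling mentioned above, one commonly used empirical form ties permeability to volumetric strain, with an additional multiplier triggered once plastic shear strain exceeds a threshold; this is an illustrative form, not necessarily the exact law used in the paper:

\[
k \;=\; k_0\,\exp\!\bigl[\beta\,(\varepsilon_v-\varepsilon_{v,0})\bigr]
\times
\begin{cases}
1, & \gamma^{p} \le \gamma^{p}_{c} \\[2pt]
1 + \Delta k_s, & \gamma^{p} > \gamma^{p}_{c}
\end{cases}
\]

Here k_0 is the initial permeability, beta a fitting coefficient, and the second factor represents shear-enhanced permeability along the reactivated fault.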
Tools for Evaluating Fault Detection and Diagnostic Methods for HVAC Secondary Systems
NASA Astrophysics Data System (ADS)
Pourarian, Shokouh
Although modern buildings are using increasingly sophisticated energy management and control systems that have tremendous control and monitoring capabilities, building systems routinely fail to perform as designed. More advanced building control, operation, and automated fault detection and diagnosis (AFDD) technologies are needed to achieve the goal of net-zero energy commercial buildings. Much effort has been devoted to developing such technologies for primary heating, ventilating, and air conditioning (HVAC) systems and some secondary systems. However, secondary systems such as fan coil units and dual duct systems, although widely used in commercial, industrial, and multifamily residential buildings, have received very little attention. This research aims at developing tools that provide simulation capabilities to develop and evaluate advanced control, operation, and AFDD technologies for these less studied secondary systems. In this study, HVACSIM+ is selected as the simulation environment. Besides developing dynamic models for the above-mentioned secondary systems, two other issues related to the HVACSIM+ environment are also investigated. One issue is the nonlinear equation solver used in HVACSIM+ (Powell's Hybrid method in subroutine SNSQ). Several previous research projects (ASHRAE RP 825 and 1312) found that SNSQ is especially unstable at the beginning of a simulation and sometimes unable to converge to a solution. Another issue is related to the zone model in the HVACSIM+ library of components. Dynamic simulation of secondary HVAC systems unavoidably requires a zone model that interacts systematically and dynamically with the building's surroundings; the accuracy and reliability of the zone model therefore affects the operational data generated by the developed dynamic tool for predicting secondary-system behavior. The available model does not simulate the impact of direct solar radiation that enters a zone through glazing, and the zone model study modifies the existing model in this direction. In this research project, the following tasks are completed and summarized in this report: 1. Develop dynamic simulation models in the HVACSIM+ environment for common fan coil unit and dual duct system configurations; the developed models can produce both fault-free and faulty operational data under a wide variety of faults and severity levels for advanced control, operation, and AFDD technology development and evaluation. 2. Develop a model structure (grouping of blocks and superblocks, treatment of state variables, initial and boundary conditions, and selection of equation solver) that can simulate a dual duct system efficiently with satisfactory stability. 3. Design and conduct a comprehensive and systematic validation procedure using collected experimental data to validate the developed simulation models under both fault-free and faulty operational conditions. 4. Conduct a numerical study comparing two solution techniques, Powell's Hybrid (PH) and Levenberg-Marquardt (LM), in terms of their robustness and accuracy. 5. Modify the thermal state of the existing building zone model in the HVACSIM+ library of components.
This component is revised to consider the heat transmitted through glazing as a heat source for transient building zone load prediction. In this report, the literature on existing HVAC dynamic modeling environments and models, HVAC model validation methodologies, and fault modeling and validation methodologies is reviewed. The overall methodologies used for fault-free and fault model development and validation are introduced. Detailed model development and validation results for the two secondary systems, i.e., the fan coil unit and the dual duct system, are summarized. Experimental data, mostly from the Iowa Energy Center Energy Resource Station, are used to validate the models developed in this project. Satisfactory model performance in both fault-free and fault simulation studies is observed for all studied systems.
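For the solver comparison in task 4, both techniques are available in SciPy, which makes a small illustration possible; the residual system below is a toy stand-in for the component network equations, not the HVACSIM+ model itself.

```python
import numpy as np
from scipy import optimize

# Illustrative two-equation residual system standing in for a component
# network's mass/energy balances.
def residuals(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + np.exp(-x[1]) - 1.5])

x0 = np.array([0.1, 0.1])

# Powell's Hybrid method (the algorithm HVACSIM+'s SNSQ implements):
sol_ph = optimize.root(residuals, x0, method="hybr")

# Levenberg-Marquardt, the alternative compared in the study:
sol_lm = optimize.least_squares(residuals, x0, method="lm")

print(sol_ph.x, sol_lm.x)    # both should converge near (1.26, 1.41)
```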
Health management and controls for earth to orbit propulsion systems
NASA Technical Reports Server (NTRS)
Bickford, R. L.
1992-01-01
Fault detection and isolation for advanced rocket engine controllers are discussed, focusing on advanced sensing systems and software which significantly improve component failure detection for engine safety and health management. Aerojet's Space Transportation Main Engine controller for the National Launch System is the state of the art in fault tolerant engine avionics. Health management systems provide high levels of automated fault coverage and significantly improve vehicle delivered reliability and lower preflight operations costs. Key technologies, including the sensor data validation algorithms and flight capable spectrometers, have been demonstrated in ground applications and are found to be suitable for bridging programs into flight applications.
Block rotations, fault domains and crustal deformation in the western US
NASA Technical Reports Server (NTRS)
Nur, Amos
1990-01-01
The aim of the project was to develop a 3D model of crustal deformation by distributed fault sets and to test the model results in the field. In the first part of the project, Nur's 2D model (1986) was generalized to 3D. In Nur's model the frictional strength of rocks and faults of a domain provides a tight constraint on the amount of rotation that a fault set can undergo during block rotation. Domains of fault sets are commonly found in regions where the deformation is distributed across a region. The interaction of each fault set causes the fault bounded blocks to rotate. The work that has been done towards quantifying the rotation of fault sets in a 3D stress field is briefly summarized. In the second part of the project, field studies were carried out in Israel, Nevada and China. These studies combined both paleomagnetic and structural information necessary to test the block rotation model results. In accordance with the model, field studies demonstrate that faults and attending fault bounded blocks slip and rotate away from the direction of maximum compression when deformation is distributed across fault sets. Slip and rotation of fault sets may continue as long as the earth's crustal strength is not exceeded. More optimally oriented faults must form, for subsequent deformation to occur. Eventually the block rotation mechanism may create a complex pattern of intersecting generations of faults.
Current Fault Management Trends in NASA's Planetary Spacecraft
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.
2009-01-01
The key product of this three-day workshop is a NASA White Paper that documents lessons learned from previous missions, recommended best practices, and future opportunities for investments in the fault management domain. This paper summarizes the findings and recommendations that are captured in the White Paper.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Leifker, Daniel B.
1991-01-01
Current qualitative device and process models represent only the structure and behavior of physical systems. However, systems in the real world include goal-oriented activities that generally cannot be easily represented using current modeling techniques. An extension of a qualitative modeling system, known as functional modeling, which captures goal-oriented activities explicitly, is proposed, and it is shown how functional models may be used to support intelligent automation and fault management.
Chip level modeling of LSI devices
NASA Technical Reports Server (NTRS)
Armstrong, J. R.
1984-01-01
The advent of Very Large Scale Integration (VLSI) technology has rendered the gate level model impractical for many simulation activities critical to the design automation process. As an alternative, an approach to the modeling of VLSI devices at the chip level is described, including the specification of modeling language constructs important to the modeling process. A model structure is presented in which models of the LSI devices are constructed as single entities. The modeling structure is two layered. The functional layer in this structure is used to model the input/output response of the LSI chip. A second layer, the fault mapping layer, is added, if fault simulations are required, in order to map the effects of hardware faults onto the functional layer. Modeling examples for each layer are presented. Fault modeling at the chip level is described. Approaches to realistic functional fault selection and defining fault coverage for functional faults are given. Application of the modeling techniques to single chip and bit slice microprocessors is discussed.
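A toy Python illustration of the two-layer structure described above, with hypothetical device behavior: the functional layer computes the chip's input/output response, and the fault-mapping layer perturbs that response when a hardware fault is active.

```python
# Two-layer chip-level model in the spirit described above; opcode set and
# fault names are illustrative, not from the paper.
def functional_layer(opcode, a, b):
    """Input/output response of the (fault-free) chip."""
    return {"ADD": a + b, "AND": a & b}[opcode]

def fault_mapping_layer(result, fault):
    """Map an active hardware fault onto the functional response."""
    if fault == "stuck_bit0":
        return result | 0x1          # output bit 0 stuck at 1
    return result

def chip_model(opcode, a, b, fault=None):
    return fault_mapping_layer(functional_layer(opcode, a, b), fault)

print(chip_model("ADD", 2, 2))                      # 4
print(chip_model("ADD", 2, 2, fault="stuck_bit0"))  # 5
```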
Data-Centric Situational Awareness and Management in Intelligent Power Systems
NASA Astrophysics Data System (ADS)
Dai, Xiaoxiao
The rapid development of technology and society has made the current power system a much more complicated system than ever. The request for big data based situation awareness and management becomes urgent today. In this dissertation, to respond to this grand challenge, two data-centric power system situation awareness and management approaches are proposed to address the security problems in the transmission/distribution grids and the social benefits augmentation problem at the distribution-customer level, respectively. To address the security problem in the transmission/distribution grids utilizing big data, the first approach provides a fault analysis solution based on characterization and analytics of the synchrophasor measurements. Specifically, the optimal synchrophasor measurement devices selection algorithm (OSMDSA) and matching pursuit decomposition (MPD) based spatial-temporal synchrophasor data characterization method was developed to reduce data volume while preserving comprehensive information for the big data analyses. And the weighted Granger causality (WGC) method was investigated to conduct fault impact causal analysis during system disturbance for fault localization. Numerical results and comparison with other methods demonstrate the effectiveness and robustness of this analytic approach. As more social effects are becoming important considerations in power system management, the goal of situation awareness should be expanded to also include achievements in social benefits. The second approach investigates the concept and application of social energy on the University of Denver campus grid to provide management improvement solutions for optimizing social cost. Both the social element (human working productivity cost) and the economic element (electricity consumption cost) are considered in the evaluation of overall social cost. Moreover, power system simulation, numerical experiments for smart building modeling, distribution level real-time pricing, and social response to the pricing signals are studied for implementing the interactive artificial-physical management scheme.
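As a simplified stand-in for the weighted Granger causality analysis mentioned above, the plain (unweighted) Granger test is available in statsmodels; the synthetic two-channel data below merely illustrate the mechanics of testing whether one measurement channel helps predict another.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Column 0 is the effect series, column 1 the candidate cause; the synthetic
# data embed a 2-step lagged dependence as a stand-in for fault propagation
# between two synchrophasor channels.
rng = np.random.default_rng(0)
cause = rng.standard_normal(500)
effect = np.roll(cause, 2) + 0.1 * rng.standard_normal(500)
data = np.column_stack([effect, cause])

res = grangercausalitytests(data, maxlag=4, verbose=False)
print(res[2][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num)
```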
Post-seismic and interseismic fault creep I: model description
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Simons, M.; Dunham, E. M.
2010-04-01
We present a model of localized, aseismic fault creep during the full interseismic period, including both transient and steady fault creep, in response to a sequence of imposed coseismic slip events and tectonic loading. We consider the behaviour of models with linear viscous, non-linear viscous, rate-dependent friction, and rate- and state-dependent friction fault rheologies. Both the transient post-seismic creep and the pattern of steady interseismic creep rates surrounding asperities depend on recent coseismic slip and fault rheologies. In these models, post-seismic fault creep is manifest as pulses of elevated creep rates that propagate from the coseismic slip; these pulses feature sharper fronts and are longer lived in models with rate-state friction than in other models. With small characteristic slip distances in rate-state friction models, interseismic creep is similar to that in models with rate-dependent friction faults, except for the earliest periods of post-seismic creep. Our model can be used to constrain fault rheologies from geodetic observations in cases where the coseismic slip history is relatively well known. When only considering surface deformation over a short period of time, there are strong trade-offs between fault rheology and the details of the imposed coseismic slip. Geodetic observations over longer times following an earthquake will reduce these trade-offs, while simultaneous modelling of interseismic and post-seismic observations provides the strongest constraints on fault rheologies.
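For reference, the rate- and state-dependent friction formulation mentioned above is commonly written as follows (with the aging form of the state evolution law); this is the standard textbook form, not necessarily the exact variant implemented in the paper. Here V is slip rate, theta the state variable, D_c the characteristic slip distance, and a and b the rate and state sensitivity parameters:

\[
\tau \;=\; \sigma\left[\mu_0 + a\,\ln\!\frac{V}{V_0} + b\,\ln\!\frac{V_0\,\theta}{D_c}\right],
\qquad
\frac{d\theta}{dt} \;=\; 1 - \frac{V\,\theta}{D_c}
\]

Steady state (d\theta/dt = 0) gives \theta = D_c/V, so the sign of (a - b) determines whether the fault is velocity strengthening (stable creep) or velocity weakening (potentially seismic).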
Fleet-Wide Prognostic and Health Management Suite: Asset Fault Signature Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vivek Agarwal; Nancy J. Lybeck; Randall Bickford
Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: (1) Diagnostic Advisor, (2) Asset Fault Signature (AFS) Database, (3) Remaining Useful Life Advisor, and (4) Remaining Useful Life Database. The paper focuses on the AFS Database of the FW-PHM Suite, which is used to catalog asset fault signatures. A fault signature is a structured representation of the information that an expert would use to first detect and then verify the occurrence of a specific type of fault. The fault signatures developed to assess the health status of generator step-up transformers are described in the paper. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
Coordinated Fault Tolerance for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack; Bosilca, George; et al.
2013-04-08
Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as jobs schedulers, resource managers, and monitoring tools.
NASA Astrophysics Data System (ADS)
Cooke, M. L.; Fattaruso, L.; Dorsey, R. J.; Housen, B. A.
2015-12-01
Between ~1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that growth of the San Jacinto fault led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of off-fault deformation and potential incipient faulting. These patterns support the notion of north-to-south propagation of the San Jacinto fault during its initiation. The results of the present-day model are compared with microseismicity focal mechanisms to provide additional insight into the patterns of off-fault deformation within the southern San Andreas fault system.
The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan
NASA Astrophysics Data System (ADS)
Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.
2011-12-01
Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, it is important to understand the activity and hazard of active faults. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows 31 active faults on the island of Taiwan, some of which are related to earthquakes. Many researchers have investigated these active faults and continuously update data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and fieldwork results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or relocated earthquakes for each fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect research on fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the earthquake recurrence interval of each active fault. For the other parameters, we collect previous studies and historical references to complete our parameter table of active faults in Taiwan. WG08 performed time-dependent earthquake hazard assessment of active faults in California, establishing fault models, deformation models, earthquake rate models, and probability models to compute the probabilities of the California faults. Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards for certain faults in Taiwan. By completing the active fault parameter table for Taiwan, we can apply it to time-dependent earthquake hazard assessment. The result can also give engineers a reference for design. Furthermore, it can be applied in seismic hazard maps to mitigate disasters.
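As one concrete piece of such an assessment, the conditional probability of a characteristic earthquake under a renewal model can be computed in a few lines. The sketch below assumes a lognormal recurrence distribution and illustrative parameter values; WG08-style calculations also use other distributions, such as the Brownian passage time.

```python
import numpy as np
from scipy import stats

def conditional_prob(t, dt, mean_ri=250.0, cov=0.5):
    """P(event in (t, t+dt] | no event by t) for a lognormal renewal model.

    mean_ri: mean recurrence interval (years); cov: coefficient of variation.
    """
    sigma = np.sqrt(np.log(1.0 + cov**2))         # underlying normal sigma
    mu = np.log(mean_ri) - 0.5 * sigma**2         # matches the target mean
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    return (dist.cdf(t + dt) - dist.cdf(t)) / dist.sf(t)

# Probability of rupture in the next 30 years, 200 years after the last event.
print(conditional_prob(t=200.0, dt=30.0))
```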
NASA Astrophysics Data System (ADS)
Zuza, A. V.; Yin, A.; Lin, J. C.
2015-12-01
Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961)—developed to explain extensional joints by using the stress-free condition on the crack surface—to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike-slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).
NASA Astrophysics Data System (ADS)
Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.
2011-12-01
The Osaka basin, surrounded by the Rokko and Ikoma Ranges, is a typical Quaternary sedimentary basin in Japan. It has been filled by the Pleistocene Osaka Group and later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe, are located in the Osaka basin. The basin is bounded by E-W trending strike-slip faults and N-S trending reverse faults. The N-S trending, 42-km-long Uemachi faults traverse the central part of Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal detailed fault parameters, such as length, dip, and recurrence interval, for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consists of two parts, the north and south parts, because there is no basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started a project to survey the Uemachi faults, and the Disaster Prevention Research Institute of Kyoto University carried out various surveys from 2009 to 2012. The result of the last year revealed higher fault activity on the branch fault than on the main faults in the central part (see poster "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., this meeting). Kusumoto et al. (2001) reported that, based on a dislocation model, the surrounding faults can form similar basement relief without the Uemachi faults. We performed various parameter studies of dislocation and gravity change based on a simplified fault model, which was designed from the distribution of the real faults and consisted of 7 faults including the Uemachi faults. The dislocation and gravity change were calculated following Okada et al. (1985) and Okubo et al. (1993), respectively. The results show a basement displacement pattern similar to that of Kusumoto et al. (2001) and no characteristic gravity change pattern. Quantitative estimation remains a subject for further work.
A Classification of Management Teachers
ERIC Educational Resources Information Center
Walker, Bob
1974-01-01
There are many classifications of management teachers today. Each has his style, successes, and faults. Some of the more prominent are: the company man, the management technician, the man of principle, the evangelist, and the entrepreneur. A mixture of these classifications would be ideal, since each by itself has its faults. (DS)
NASA Astrophysics Data System (ADS)
Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.
2015-12-01
Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, thus challenging the well-established, widely cited, and perhaps overused isolated fault model.
3D Model of the Tuscarora Geothermal Area
Faulds, James E.
2013-12-31
The Tuscarora geothermal system sits within a ~15 km wide left step in a major west-dipping range-bounding normal fault system. The step-over is defined by the Independence Mountains fault zone and the Bull Run Mountains fault zone, which overlap along strike. Strain is transferred between these major fault segments via an array of northerly striking normal faults with offsets of 10s to 100s of meters and strike lengths of less than 5 km. These faults within the step-over are one to two orders of magnitude smaller than the range-bounding fault zones between which they reside. Faults within the broad step define an anticlinal accommodation zone wherein east-dipping faults mainly occupy the western half of the accommodation zone and west-dipping faults lie in the eastern half. The 3D model of Tuscarora encompasses 70 small-offset normal faults that define the accommodation zone and a portion of the Independence Mountains fault zone, which dips beneath the geothermal field. The geothermal system resides in the axial part of the accommodation zone, straddling the two fault dip domains. The Tuscarora 3D geologic model consists of 10 stratigraphic units. Unconsolidated Quaternary alluvium has eroded down into bedrock units; the youngest and stratigraphically highest bedrock units are middle Miocene rhyolite and dacite flows regionally correlated with the Jarbidge Rhyolite and modeled with a uniform cumulative thickness of ~350 m. Underlying these lava flows are Eocene volcanic rocks of the Big Cottonwood Canyon caldera. These units are modeled as intracaldera deposits, including domes, flows, and thick ash deposits that change in thickness and locally pinch out. The Paleozoic basement consists of metasedimentary and metavolcanic rocks, dominated by argillite, siltstone, limestone, quartzite, and metabasalt of the Schoonover and Snow Canyon Formations; the Paleozoic formations are lumped into a single basement unit in the model. Fault blocks in the eastern portion of the model are tilted 5-30 degrees toward the Independence Mountains fault zone. Fault blocks in the western portion of the model are tilted toward steeply east-dipping normal faults. These opposing fault block dips define a shallow extensional anticline. Geothermal production is from 4 closely spaced wells that exploit a west-dipping, NNE-striking fault zone near the axial part of the accommodation zone.
NASA Astrophysics Data System (ADS)
Fattaruso, Laura A.; Cooke, Michele L.; Dorsey, Rebecca J.; Housen, Bernard A.
2016-12-01
Between 1.5 and 1.1 Ma, the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault zone and termination of slip on the extensional West Salton detachment fault. The southern San Andreas fault itself has also evolved since this time, with several shifts in activity among fault strands within San Gorgonio Pass. We use three-dimensional mechanical Boundary Element Method models to investigate the impact of these changes to the fault network on deformation patterns. A series of snapshot models of the succession of active fault geometries explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault, initiation of the San Jacinto fault zone, and shifts in activity of the San Andreas fault. Interpreted changes to uplift patterns are well matched by model results. These results support the idea that initiation and growth of the San Jacinto fault zone led to increased uplift rates in the San Gabriel Mountains and decreased uplift rates in the San Bernardino Mountains. Comparison of model results for vertical-axis rotation to data from paleomagnetic studies reveals a good match to local rotation patterns in the Mecca Hills and Borrego Badlands. We explore the mechanical efficiency at each step in the modeled fault evolution, and find an overall trend toward increased efficiency through time. Strain energy density patterns are used to identify regions of incipient faulting, and support the notion of north-to-south propagation of the San Jacinto fault during its initiation.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either by mathematical formulation or by experimental modeling. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. Those models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on data fed back from the system, and decisions are made based on threshold values using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
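A minimal sketch of the decision-tree idea, using scikit-learn on synthetic stand-ins for telemetry (the feature channels, thresholds, and labels below are illustrative, not ISS data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Two hypothetical telemetry channels, e.g. temperature and flow rate;
# fault cases drift high on channel 0 and low on channel 1.
rng = np.random.default_rng(1)
X_nominal = rng.normal([50.0, 1.0], 0.5, size=(200, 2))
X_fault = rng.normal([58.0, 0.4], 0.5, size=(40, 2))
X = np.vstack([X_nominal, X_fault])
y = np.array([0] * 200 + [1] * 40)           # 0 = nominal, 1 = fault

# A shallow tree keeps the learned thresholds interpretable, mirroring the
# threshold logic of a fault tree.
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[57.5, 0.45]]))           # -> [1]: fault class
```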
Nearly frictionless faulting by unclamping in long-term interaction models
Parsons, T.
2002-01-01
In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10^-9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pins of, and internal to, a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level stuck-at fault model is found to provide a good modeling capability.
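The statistical comparison at the heart of the technique can be posed, for example, as a two-sample test on the time-between-errors distributions. A sketch with synthetic placeholder data (the paper's actual statistics may differ):

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for measured time-between-errors samples under
# internal faults vs. pin-level stuck-at faults.
rng = np.random.default_rng(7)
tbe_internal = rng.exponential(scale=12.0, size=300)
tbe_pin = rng.exponential(scale=15.0, size=300)

# Two-sample Kolmogorov-Smirnov test: do the two fault-application methods
# yield the same error-behavior distribution?
stat, p = stats.ks_2samp(tbe_internal, tbe_pin)
print(f"KS statistic={stat:.3f}, p={p:.3g}")  # small p => distributions differ
```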
Fault Modeling of Extreme Scale Applications Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Dam, Hubertus van; Tallent, Nathan R.
2016-05-01
Faults are commonplace in large scale systems. These systems experience a variety of faults such as transient, permanent and intermittent. Multi-bit faults are typically not corrected by the hardware, resulting in an error. This paper attempts to answer an important question: Given a multi-bit fault in main memory, will it result in an application error — and hence a recovery algorithm should be invoked — or can it be safely ignored? We propose an application fault modeling methodology to answer this question. Given a fault signature (a set of attributes comprising system and application state), we use machine learning to create a model which predicts whether a multi-bit permanent/transient main memory fault will likely result in an error. We present the design elements such as the fault injection methodology for covering important data structures, the application and system attributes which should be used for learning the model, the supervised learning algorithms (and potentially ensembles), and important metrics. Lastly, we use three applications — NWChem, LULESH and SVM — as examples for demonstrating the effectiveness of the proposed fault modeling methodology.
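A minimal sketch of the fault-modeling step, assuming a made-up three-attribute fault signature and toy ground truth; the paper's actual feature set, injection campaign, and learners differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical fault signatures: (bit in live data structure?, fraction of
# run remaining, flipped bit position). Labels say whether the injected
# fault led to an application error.
rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),        # live data structure flag
    rng.uniform(0, 1, n),         # fraction of run remaining
    rng.integers(0, 64, n),       # flipped bit position
])
y = (X[:, 0] == 1) & (X[:, 1] > 0.2)   # toy ground truth: live and used later

# An ensemble learner, as the paper's mention of ensembles suggests.
model = RandomForestClassifier(n_estimators=100).fit(X, y)
print(model.predict([[1, 0.9, 12]]))   # -> [ True]: invoke recovery
```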
Dynamic modeling of gearbox faults: A review
NASA Astrophysics Data System (ADS)
Liang, Xihui; Zuo, Ming J.; Feng, Zhipeng
2018-01-01
Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions, or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure, resulting in safer operation and greater cost reduction. Recently, many studies have developed gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection, and diagnosis. The state of the art and challenges are reviewed and discussed. This detailed literature review limits its scope to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling, and method validation. In the end, a summary and some research prospects are presented.
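As a flavor of the mesh-stiffness modeling the review covers, a toy time-varying stiffness with a crack-induced drop can be written as below; the waveform shape and all parameter values are illustrative only, not taken from the review.

```python
import numpy as np

def mesh_stiffness(t, f_mesh=500.0, k1=0.8e8, k2=1.4e8,
                   teeth=32, crack_loss=0.3):
    """Toy rectangular-wave mesh stiffness (N/m).

    Alternates between double- and single-tooth contact each mesh cycle,
    with a local stiffness drop whenever the (single) cracked tooth engages.
    """
    phase = (t * f_mesh) % 1.0
    k = k2 if phase < 0.6 else k1       # double vs. single tooth contact
    tooth = int(t * f_mesh) % teeth
    if tooth == 0:                       # the cracked tooth
        k *= (1.0 - crack_loss)
    return k

t = np.linspace(0.0, 0.1, 5000)
k = np.array([mesh_stiffness(ti) for ti in t])  # feed into a dynamic model
```

In a dynamics-based diagnosis study, a profile like k(t) drives a lumped-parameter gear model, and the fault shows up as periodic impulses in the simulated vibration response.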
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2012-01-01
This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
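A scalar toy version of the filter-bank idea: each filter assumes a different sensor-bias hypothesis, and the hypothesis whose filter yields the smallest accumulated residuals is selected. The gains, noise levels, and system model below are illustrative assumptions, not the paper's Kalman-Bucy formulation.

```python
import numpy as np

class BiasHypothesisFilter:
    """Steady-state scalar filter tuned to one assumed sensor bias."""
    def __init__(self, bias, gain=0.3):
        self.bias, self.gain, self.xhat = bias, gain, 0.0

    def step(self, u, y):
        pred = self.xhat + u                  # model: x[k+1] = x[k] + u[k]
        resid = y - (pred + self.bias)        # sensor assumed to read x + bias
        self.xhat = pred + self.gain * resid
        return resid

bank = {b: BiasHypothesisFilter(b) for b in (0.0, 2.0)}  # two hypotheses
scores = {b: 0.0 for b in bank}
rng = np.random.default_rng(5)
x = 0.0
for k in range(200):
    u = 0.1
    x += u
    y = x + 2.0 + 0.05 * rng.standard_normal()   # true sensor bias = 2.0
    for b, f in bank.items():
        scores[b] += f.step(u, y) ** 2

print(min(scores, key=scores.get))   # -> 2.0: bias fault identified
```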
Minimizing student’s faults in determining the design of experiment through inquiry-based learning
NASA Astrophysics Data System (ADS)
Nilakusmawati, D. P. E.; Susilawati, M.
2017-10-01
The purpose of this study was to describe the use of the inquiry method in an effort to minimize students' faults in designing an experiment, and to determine the effectiveness of implementing the inquiry method in minimizing students' faults in designing experiments in an experimental design course. This research is participatory action research, following an action research design. The data source was fifth-semester students who took the experimental design course at the Mathematics Department, Faculty of Mathematics and Natural Sciences, Udayana University. Data were collected through tests, interviews, and observations. The hypothesis was tested by t-test. The results showed that implementing inquiry methods to minimize students' faults in designing experiments, analyzing experimental data, and interpreting them reduced faults by an average of 10.5% in cycle 1. In cycle 2, students reduced faults by an average of 8.78%. Based on the t-test results, it can be concluded that the inquiry method is effective in minimizing students' faults in designing experiments, analyzing experimental data, and interpreting them. The nature of the teaching materials in experimental design, which demand that students think systematically, logically, and critically in analyzing data and interpreting test cases, makes inquiry a suitable method. In addition, the utilization of learning tools, in this case the teaching materials and the student worksheet, is one of the factors that makes the inquiry method effective in minimizing students' faults when designing experiments.
Improving Multiple Fault Diagnosability using Possible Conflicts
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2012-01-01
Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
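A toy sketch of the underlying bookkeeping, under the assumption that faults are described by qualitative residual signatures: signatures for fault pairs are composed (with sign conflicts treated as masking), and candidate fault sets with identical composite signatures are flagged as indistinguishable. The signatures are invented for illustration; the paper's event-based framework is considerably richer.

```python
from itertools import combinations

# Qualitative signatures: fault -> {residual: expected deviation (+1/-1)}.
# Decomposed residuals (Possible Conflicts) decouple faults from residuals,
# so fewer faults share residuals and masking becomes less likely.
signatures = {
    "f1": {"r1": +1},
    "f2": {"r2": -1},
    "f3": {"r1": +1, "r2": +1},
}

def combined(faults):
    # Naive composition: a residual driven by several faults with
    # conflicting signs is treated as masked (unknown, 0).
    sig = {}
    for f in faults:
        for r, s in signatures[f].items():
            sig[r] = 0 if sig.get(r, s) != s else s
    return sig

candidates = [frozenset(c) for n in (1, 2) for c in combinations(signatures, n)]
for a, b in combinations(candidates, 2):
    if combined(a) == combined(b):
        print("indistinguishable:", set(a), "vs", set(b))
```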
Modeling the data management system of Space Station Freedom with DEPEND
NASA Technical Reports Server (NTRS)
Olson, Daniel P.; Iyer, Ravishankar K.; Boyd, Mark A.
1993-01-01
Some of the features and capabilities of the DEPEND simulation-based modeling tool are described. A study of a 1553B local bus subsystem of the Space Station Freedom Data Management System (SSF DMS) is used to illustrate some types of system behavior that can be important to reliability and performance evaluations of this type of spacecraft. A DEPEND model of the subsystem is used to illustrate how these types of system behavior can be modeled, and to show what kinds of engineering and design questions can be answered through the use of these modeling techniques. DEPEND's process-based simulation environment is shown to provide a flexible method for modeling complex interactions between hardware and software elements of a fault-tolerant computing system.
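DEPEND is a process-based simulation environment; a loosely analogous sketch can be written with Python's simpy, where module processes intermittently fail and compete for a shared bus resource during fault handling. The module names, rates, and behavior here are illustrative assumptions, not DEPEND semantics or SSF DMS parameters.

```python
import random
import simpy

def module(env, name, bus, mtbf=100.0, repair=5.0):
    # Each module alternates between normal operation and fault handling;
    # reconfiguration occupies the shared bus, delaying other modules.
    while True:
        yield env.timeout(random.expovariate(1.0 / mtbf))  # time to next fault
        with bus.request() as req:
            yield req
            print(f"{env.now:7.1f}  {name} fault -> reconfiguration on bus")
            yield env.timeout(repair)

random.seed(1)
env = simpy.Environment()
bus = simpy.Resource(env, capacity=1)   # a shared, 1553B-like local bus
for i in range(3):
    env.process(module(env, f"module-{i}", bus))
env.run(until=500)
```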
Real-Time Model-Based Leak-Through Detection within Cryogenic Flow Systems
NASA Technical Reports Server (NTRS)
Walker, M.; Figueroa, F.
2015-01-01
The timely detection of leaks within cryogenic fuel replenishment systems is of significant importance to operators on account of the safety and economic impacts associated with material loss and operational inefficiencies. The associated loss of pressure control also affects the stability and the ability to control the phase of cryogenic fluids during replenishment operations. Current research dedicated to providing Prognostics and Health Management (PHM) coverage of such cryogenic replenishment systems has focused on the detection of leaks to atmosphere involving relatively simple model-based diagnostic approaches that, while effective, are unable to isolate the fault to specific piping system components. The authors have extended this research to focus on the detection of leaks through closed valves that are intended to isolate sections of the piping system from the flow and pressurization of cryogenic fluids. The described approach employs model-based detection of leak-through conditions based on correlations of pressure changes across isolation valves and attempts to isolate the faults to specific valves. Implementation of this capability is enabled by knowledge and information embedded in the domain model of the system. The approach has been used effectively to detect such leak-through faults during cryogenic operational testing at the Cryogenic Testbed at NASA's Kennedy Space Center.
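A minimal sketch of the correlation logic as described, assuming time series of upstream and downstream pressures around a valve commanded closed: correlated pressure changes across the closed valve raise a leak-through score. The window length and decision threshold are illustrative choices, not the authors' values.

```python
import numpy as np

def leak_through_score(p_up, p_down, closed, window=60):
    """Correlation of pressure changes across a valve commanded closed.

    High correlation while the valve is closed suggests leak-through
    (hypothetical detection logic for illustration).
    """
    dp_up = np.diff(np.asarray(p_up)[-window:])
    dp_down = np.diff(np.asarray(p_down)[-window:])
    if not closed or dp_up.std() < 1e-9 or dp_down.std() < 1e-9:
        return 0.0
    return float(np.corrcoef(dp_up, dp_down)[0, 1])

# A valve would be flagged when its score exceeds a tuned threshold, e.g. 0.8,
# isolating the fault to that specific valve rather than the piping section.
```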
Fault-tolerant processing system
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L. (Inventor)
1996-01-01
A fault-tolerant fiber optic interconnect, or backplane, serves as a conduit for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing it into a read bus and a write bus and placing a redundancy management unit (RMU) between them, so that all data transmitted on the write bus is subjected to the fault tolerance algorithms before being passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.
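A minimal sketch of one plausible RMU behavior: bit-wise majority voting across redundant copies of a data word, with identification of dissenting channels. The abstract does not specify the actual fault tolerance algorithms, so this is a generic stand-in.

```python
def rmu_vote(words):
    """Majority-vote redundant copies of a data word, bit by bit.

    Returns the voted word and the set of channels that disagreed with
    the majority on at least one bit (candidate faulty channels).
    """
    n_bits = max(w.bit_length() for w in words)
    voted, faulty = 0, set()
    for b in range(n_bits):
        ones = sum((w >> b) & 1 for w in words)
        bit = 1 if 2 * ones > len(words) else 0
        voted |= bit << b
        faulty |= {i for i, w in enumerate(words) if ((w >> b) & 1) != bit}
    return voted, faulty

print(rmu_vote([0b1011, 0b1011, 0b1111]))  # -> (11, {2}): channel 2 suspect
```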
Stability of faults with heterogeneous friction properties and effective normal stress
NASA Astrophysics Data System (ADS)
Luo, Yingdi; Ampuero, Jean-Paul
2018-05-01
Abundant geological, seismological and experimental evidence of the heterogeneous structure of natural faults motivates the theoretical and computational study of the mechanical behavior of heterogeneous frictional fault interfaces. Fault zones are composed of a mixture of materials with contrasting strength, which may affect the spatial variability of seismic coupling, the location of high-frequency radiation and the diversity of slip behavior observed in natural faults. To develop a quantitative understanding of the effect of strength heterogeneity on the mechanical behavior of faults, here we investigate a fault model with spatially variable frictional properties and pore pressure. Conceptually, this model may correspond to two rough surfaces in contact along discrete asperities, the space in between being filled by compressed gouge. The asperities have different permeability than the gouge matrix and may be hydraulically sealed, resulting in different pore pressure. We consider faults governed by rate-and-state friction, with mixtures of velocity-weakening and velocity-strengthening materials and contrasts of effective normal stress. We systematically study the diversity of slip behaviors generated by this model through multi-cycle simulations and linear stability analysis. The fault can be either stable without spontaneous slip transients, or unstable with spontaneous rupture. When the fault is unstable, slip can rupture either part or the entire fault. In some cases the fault alternates between these behaviors throughout multiple cycles. We determine how the fault behavior is controlled by the proportion of velocity-weakening and velocity-strengthening materials, their relative strength and other frictional properties. We also develop, through heuristic approximations, closed-form equations to predict the stability of slip on heterogeneous faults. Our study shows that a fault model with heterogeneous materials and pore pressure contrasts is a viable framework to reproduce the full spectrum of fault behaviors observed in natural faults: from fast earthquakes, to slow transients, to stable sliding. In particular, this model constitutes a building block for models of episodic tremor and slow slip events.
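The linear stability analysis mentioned here can be illustrated with the classic spring-block criterion for rate-and-state friction, in which a velocity-weakening patch is unstable when the loading stiffness falls below a critical value k_c = (b - a) * sigma_eff / d_c, while a velocity-strengthening patch (a > b) is unconditionally stable. All parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def critical_stiffness(a, b, sigma_eff, d_c):
    # Ruina-style spring-block criterion: only velocity-weakening (b > a)
    # patches have a positive critical stiffness.
    return (b - a) * sigma_eff / d_c if b > a else 0.0

patches = [
    # (a, b, effective normal stress [MPa], d_c [m])
    (0.010, 0.015, 50.0, 1e-4),  # velocity-weakening, sealed asperity
    (0.015, 0.010, 20.0, 1e-4),  # velocity-strengthening gouge, high pore pressure
]
k = 1.0e3  # loading stiffness [MPa/m], assumed

for a, b, s, dc in patches:
    kc = critical_stiffness(a, b, s, dc)
    print("unstable" if k < kc else "stable", f"(k_c = {kc:.3g} MPa/m)")
```

Note how a pore pressure contrast enters directly through sigma_eff: sealed, high-stress asperities can be unstable while the surrounding gouge creeps stably, which is the heterogeneity effect the paper explores.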
NASA Astrophysics Data System (ADS)
Smith, D. E.; Aagaard, B. T.; Heaton, T. H.
2001-12-01
It has been hypothesized (Brune, 1996) that teleseismic inversions may underestimate the moment of shallow thrust fault earthquakes if energy becomes trapped in the hanging wall of the fault, i.e. if the fault boundary becomes opaque. We address this by creating and analyzing synthetic P and SH seismograms for a variety of friction models. There are a total of five models: (1) crack model (slip weakening) with instantaneous healing (2) crack model without healing (3) crack model with zero sliding friction (4) pulse model (slip and rate weakening) (5) prescribed model (Haskell-like rupture with the same final slip and peak slip-rate as model 4). Models 1-4 are all dynamic models where fault friction laws determine the rupture history. This allows feedback between the ongoing rupture and waves from the beginning of the rupture that hit the surface and reflect downwards. Hence, models 1-4 can exhibit opaque fault characteristics. Model 5, a prescribed rupture, allows for no interaction between the rupture and reflected waves, therefore, it is a transparent fault. We first produce source time functions for the different friction models by rupturing shallow thrust faults in 3-D dynamic finite-element simulations. The source time functions are used as point dislocations in a teleseismic body-wave code. We examine the P and SH waves for different azimuths and epicentral distances. The peak P and S first arrival displacement amplitudes for the crack, crack with healing and pulse models are all very similar. These dynamic models with opaque faults produce smaller peak P and S first arrivals than the prescribed, transparent fault. For example, a fault with strike = 90 degrees, azimuth = 45 degrees has P arrivals smaller by about 30% and S arrivals smaller by about 15%. The only dynamic model that doesn't fit this pattern is the crack model with zero sliding friction. It oscillates around its equilibrium position; therefore, it overshoots and yields an excessively large peak first arrival. In general, it appears that the dynamic, opaque faults have smaller peak teleseismic displacements that would lead to lower moment estimates by a modest amount.
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
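Because the new gates rely on a Markov solution, a small example helps: a cold-spare gate expressed as a three-state continuous-time Markov chain and solved with a matrix exponential. The failure rates are illustrative, not from the fault-tolerant parallel processor models.

```python
import numpy as np
from scipy.linalg import expm

# Cold-spare (sequence-dependent) gate as a small Markov chain:
# state 0: primary up, spare cold; state 1: primary failed, spare running;
# state 2: system failed. The cold spare cannot fail while dormant, which
# is exactly the sequence dependency a static fault tree cannot express.
lam_a, lam_b = 1e-3, 2e-3          # failure rates [1/h], illustrative
Q = np.array([[-lam_a, lam_a,  0.0],
              [0.0,   -lam_b,  lam_b],
              [0.0,    0.0,    0.0]])
p0 = np.array([1.0, 0.0, 0.0])

for t in (100.0, 1000.0):
    p = p0 @ expm(Q * t)           # transient state probabilities at time t
    print(f"t = {t:6.0f} h   P(system failed) = {p[2]:.4f}")
```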
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Woods, David D.; Potter, Scott S.; Johannesen, Leila; Holloway, Matthew; Forbus, Kenneth D.
1991-01-01
Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.
NASA Astrophysics Data System (ADS)
Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette
2016-04-01
Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; it may therefore be argued that it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement, and should therefore allow us to test the competing fault models. In this talk, however, we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because applying one over the other requires an a priori assumption of which model is most applicable to any given fault; we argue this is illogical, given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain fault length and thus tip behaviour through time. We conclude that rapid length establishment prior to displacement accumulation may be more common than is typically assumed, challenging the well-established, widely cited and perhaps overused isolated fault model.
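As a concrete illustration of why the 'original' backstripping method embeds a growth assumption, here is a minimal numpy sketch with synthetic displacement profiles (not the talk's data): subtracting each younger horizon's displacement from older horizons implicitly assumes every point on the fault kept slipping while that horizon was deposited.

```python
import numpy as np

# Synthetic along-strike displacement profiles for three growth horizons,
# oldest first; amplitudes in metres, positions in kilometres.
x = np.linspace(0.0, 10.0, 11)
profile = lambda d_max: d_max * np.sin(np.pi * x / 10.0)
horizons = {"H1 (oldest)": profile(100.0),
            "H2": profile(60.0),
            "H3 (youngest)": profile(25.0)}

# 'Original' backstripping: retrodeform by subtracting the displacement of
# each successively younger horizon. Note the restored profiles still span
# the full present-day fault length, which is itself a model assumption.
names = list(horizons)
for older, younger in zip(names, names[1:]):
    restored = horizons[older] - horizons[younger]
    print(f"{older} restored at time of {younger}: "
          f"max displacement {restored.max():.0f} m over {x.max():.0f} km")
```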
NASA Astrophysics Data System (ADS)
Zhang, Yanhua; Clennell, Michael B.; Delle Piane, Claudio; Ahmed, Shakil; Sarout, Joel
2016-12-01
This generic 2D elastic-plastic modelling study investigated the reactivation of a small, isolated and critically stressed fault in carbonate rocks at a reservoir depth level under fluid depletion and normal-faulting stress conditions. The model properties and boundary conditions are based on field and laboratory experimental data from a carbonate reservoir. The results show that a pore pressure perturbation of -25 MPa by depletion can lead to the reactivation of the fault and parts of the surrounding damage zones, producing normal-faulting downthrows and strain localization. The mechanism triggering fault reactivation in a carbonate field is the increase of shear stress with pore-pressure reduction, due to the decrease of the absolute horizontal stress, which leads to an expanded Mohr's circle and mechanical failure, consistent with the predictions of previous poroelastic models. Two scenarios for fault and damage-zone permeability development are explored: (1) large permeability enhancement of a sealing fault upon reactivation, and (2) fault and damage-zone permeability development governed by effective mean stress. In the first scenario, the fault becomes highly permeable to across- and along-fault fluid transport, removing local pore pressure highs/lows arising from the presence of the initially sealing fault. In the second scenario, reactivation induces small permeability enhancement in the fault and parts of the damage zones, followed by small post-reactivation permeability reduction. Such permeability changes do not appear to change the original flow capacity of the fault or modify the fluid flow velocity fields dramatically.
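The poroelastic argument can be sketched numerically: depletion lowers the pore pressure, the total horizontal stress follows it through a stress-path coefficient, and the effective Mohr circle widens so the resolved shear stress approaches frictional strength on the fault. All numbers below are illustrative assumptions, not the model's calibrated values.

```python
import numpy as np

mu, gamma = 0.6, 0.7                 # friction; stress path dSh/dPp (assumed)
sv, sh0, pp0 = 60.0, 45.0, 25.0      # total stresses and pore pressure [MPa]
dip = np.radians(60.0)               # fault dip; plane normal tilted by dip from sv

for dpp in (0.0, -12.5, -25.0):      # pore pressure change by depletion [MPa]
    pp = pp0 + dpp
    sh = sh0 + gamma * dpp           # total horizontal stress drops with depletion
    s1e, s3e = sv - pp, sh - pp      # effective principal stresses (circle widens)
    sn = 0.5 * (s1e + s3e) + 0.5 * (s1e - s3e) * np.cos(2 * dip)
    tau = 0.5 * (s1e - s3e) * np.sin(2 * dip)
    # Ratio -> 1 means the fault reaches Mohr-Coulomb frictional failure.
    print(f"dPp = {dpp:6.1f} MPa   tau / (mu * sn') = {tau / (mu * sn):.2f}")
```

With these numbers the ratio climbs from 0.46 to 0.66 over -25 MPa of depletion, showing the destabilizing trend even though the illustrative fault is not yet critically stressed.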
Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models
NASA Astrophysics Data System (ADS)
Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio
2017-04-01
The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models have been developed and are widely used to explain a range of relationships between faulting and folding. However, these models may not be entirely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, covering the range of the most frequent dip angles for active reverse faults in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (DIC) method in order to extract the displacement and shear-rate fields. These two quantities effectively show both on-fault and off-fault deformation, indicating the activity along the newly formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles at every successive step of deformation. We then compared the precut models with the isotropic models to evaluate the trends of variability. Our results indicate that the discontinuities are reactivated especially when the tip of the newly formed fault is either below or connected to them. During the stage of maximum activity along the precut, the faults slow down or even stop their propagation. Fault propagation systematically resumes when the angle between the fault and the precut is about 90° (the critical angle); only during this stage does the fault cross the precut. The reactivation of the discontinuities increases the apical angle of the fault-related fold and produces wider limbs compared to the isotropic reference experiments.
NASA Astrophysics Data System (ADS)
Madden, E. H.; McBeck, J.; Cooke, M. L.
2013-12-01
Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in efficiency provided by both hard-linkage and soft-linkage to be quantified and compared. Specialized models of interactions over the past 1 Ma between the Clark and Coyote Creek faults within the San Jacinto system reveal increasing mechanical efficiency as these fault structures change over time. Alongside this increasing efficiency is an increasing likelihood for single, larger earthquakes that rupture multiple fault segments. These models reinforce the sensitivity of mechanical efficiency to both fault structure and the regional tectonic stress orientation controlled by plate motions and provide insight into how slip may have been partitioned between the San Andreas and San Jacinto systems over the past 1 Ma.
Goal-Function Tree Modeling for Systems Engineering and Fault Management
NASA Technical Reports Server (NTRS)
Johnson, Stephen B.; Breckenridge, Jonathan T.
2013-01-01
This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides a means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enable the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and definition of system failure scenarios.
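A minimal data-structure sketch of a GFT, assuming a representation in which each goal names the state variable it constrains, the function that achieves it, and an off-nominal child goal activated when the nominal goal is not met; all goal statements and names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    statement: str                        # the goal (requirement) text
    state_variable: str                   # state variable the goal constrains;
                                          # the GFT's consistency mechanism
    function: str                         # function that achieves the goal
    children: List["Goal"] = field(default_factory=list)
    off_nominal: Optional["Goal"] = None  # activated when the goal is not met

root = Goal(
    "Maintain tank pressure in [2.0, 2.2] MPa", "tank_pressure", "regulate_pressure",
    children=[Goal("Provide pressurant flow", "pressurant_flow", "open_press_valve")],
    off_nominal=Goal("Isolate tank on overpressure", "tank_pressure", "close_iso_valve"),
)

def trace(goal, depth=0):
    # Walking the tree supports the analyses mentioned above, e.g. checking
    # that every goal is tied to a state variable (completeness/coverage).
    print("  " * depth + f"{goal.statement}  [{goal.state_variable}]")
    for child in goal.children:
        trace(child, depth + 1)
    if goal.off_nominal:
        trace(goal.off_nominal, depth + 1)

trace(root)
```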
Wali, Behram; Khattak, Asad J; Xu, Jingjing
2018-01-01
The main objective of this study is to simultaneously investigate the degree of injury severity sustained by drivers involved in head-on collisions with respect to fault status designation. This question is complicated to answer due to many issues, one of which is the potential correlation between the injury outcomes of drivers involved in the same head-on collision. To address this concern, we present seemingly unrelated bivariate ordered response models that analyze the joint injury severity probability distribution of at-fault and not-at-fault drivers. Moreover, the assumption of bivariate normality of residuals and the linear form of stochastic dependence implied by such models may be unduly restrictive. To test this, Archimedean copula structures and normal mixture marginals are integrated into the joint estimation framework, which can characterize complex forms of stochastic dependence and non-normality in the residual terms. The models are estimated using 2013 Virginia police-reported two-vehicle head-on collision data, where exactly one driver is at fault. The results suggest that both at-fault and not-at-fault drivers sustained serious/fatal injuries in 8% of crashes, whereas in 4% of the cases the not-at-fault driver sustained a serious/fatal injury with no injury to the at-fault driver at all. Furthermore, if the at-fault driver is fatigued, apparently asleep, or has been drinking, the not-at-fault driver is more likely to sustain a severe/fatal injury, controlling for other factors and potential correlations between the injury outcomes. While not-at-fault vehicle speed affects the injury severity of the at-fault driver, the effect is smaller than the effect of at-fault vehicle speed on the at-fault injury outcome. Conversely, and importantly, the effect of at-fault vehicle speed on the injury severity of the not-at-fault driver is almost equal to the effect of not-at-fault vehicle speed on the injury outcome of the not-at-fault driver. Compared to traditional ordered probability models, the study provides evidence that copula-based bivariate models can provide more reliable estimates and richer insights. Practical implications of the results are discussed.
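The copula construction can be sketched compactly: joint cell probabilities for two ordered injury outcomes are obtained as rectangle probabilities of a copula applied to the marginal CDFs. The sketch below uses a Clayton copula (one Archimedean family) with invented marginals and dependence parameter, purely to show the mechanics.

```python
import numpy as np

def clayton(u, v, theta=1.5):
    # Clayton copula CDF; theta > 0 induces lower-tail dependence.
    if u <= 0.0 or v <= 0.0:
        return 0.0
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# Hypothetical marginal CDFs over ordered severities (none, minor, serious/fatal).
F_at    = np.array([0.55, 0.88, 1.00])   # at-fault driver
F_notat = np.array([0.50, 0.85, 1.00])   # not-at-fault driver

def joint_cell(i, j):
    # P(Y1 = i, Y2 = j) as a rectangle probability of the copula.
    u1, u0 = F_at[i], (F_at[i - 1] if i else 0.0)
    v1, v0 = F_notat[j], (F_notat[j - 1] if j else 0.0)
    return (clayton(u1, v1) - clayton(u0, v1)
            - clayton(u1, v0) + clayton(u0, v0))

P = np.array([[joint_cell(i, j) for j in range(3)] for i in range(3)])
print(P.round(3), "sum =", P.sum().round(6))  # cells sum to 1
```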
NASA Astrophysics Data System (ADS)
Pei, Yangwen; Paton, Douglas A.; Wu, Kongyou; Xie, Liujuan
2017-08-01
The trishear algorithm, in which deformation occurs in a triangular zone in front of a propagating fault tip, is often used to understand fault-related folding. In comparison to kink-band methods, a key characteristic of the trishear algorithm is that non-uniform deformation within the triangular zone allows layer thickness and horizon length to change during deformation, which is commonly observed in natural structures. An example from the Lenghu5 fold-and-thrust belt (Qaidam Basin, Northern Tibetan Plateau) is interpreted to show how trishear forward modelling can improve the accuracy of seismic interpretation. High-resolution fieldwork data, including high-angle dips, 'dragging structures', and a thinning hanging wall and thickening footwall, are used to determine the best-fit trishear model for the deformation of the Lenghu5 fold-and-thrust belt. We also consider factors that increase the complexity of trishear models, including: (a) fault-dip changes and (b) pre-existing faults. We integrate fault-dip changes and pre-existing faults to predict subsurface structures that are below seismic resolution. The analogue analysis with trishear models indicates that the Lenghu5 fold-and-thrust belt is controlled by an upward-steepening reverse fault above a pre-existing, oppositely thrusting fault in the deeper subsurface. The validity of the trishear model is confirmed by the close agreement between the model and the high-resolution fieldwork. The validated trishear forward model provides geometric constraints on the faults and horizons in the seismic section, e.g., fault cutoffs and fault tip positions, fault intersection relationships and horizon/fault cross-cutting relationships. Subsurface prediction using the trishear algorithm can significantly increase the accuracy of seismic interpretation, particularly in seismic sections with low signal/noise ratio.
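A minimal sketch of the trishear kinematics, using the symmetric linear velocity field of Zehnder & Allmendinger (2000) in fault-tip coordinates; parameters are illustrative, and the paper's best-fit model (with dip changes and pre-existing faults) is considerably more elaborate.

```python
import numpy as np

def trishear_velocity(x, y, v0=1.0, phi=np.radians(30.0), s=1.0):
    """Velocity at (x, y) in fault-tip coordinates: x along the fault,
    tip at the origin, hanging wall (y > m*x) slipping at rate v0.
    Inside the triangular zone of half-apical angle phi the velocity
    grades smoothly to zero in the footwall; s = 1 is the linear field."""
    m = np.tan(phi)
    if x <= 0.0:                       # behind the tip: rigid blocks
        return (v0, 0.0) if y >= 0.0 else (0.0, 0.0)
    if y >= m * x:                     # hanging wall
        return v0, 0.0
    if y <= -m * x:                    # footwall
        return 0.0, 0.0
    q = abs(y / (m * x))
    vx = 0.5 * v0 * (np.sign(y) * q ** (1.0 / s) + 1.0)
    vy = 0.5 * v0 * (m / (s + 1.0)) * (q ** ((s + 1.0) / s) - 1.0)
    return vx, vy

# Forward-model a marker point through small slip increments to fold it.
p = np.array([2.0, 0.1])
for _ in range(100):
    vx, vy = trishear_velocity(*p)
    p += 0.01 * np.array([vx, vy])
print("deformed marker position:", p.round(3))
```

Because the field is non-uniform inside the triangle, material lines passing through it thin or thicken, which is exactly the behavior used above to match the thinning hanging wall and thickening footwall observed in the field.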
Embedded Multiprocessor Technology for VHSIC Insertion
NASA Technical Reports Server (NTRS)
Hayes, Paul J.
1990-01-01
Viewgraphs on embedded multiprocessor technology for VHSIC insertion are presented. The objective was to develop multiprocessor system technology providing user-selectable fault tolerance, increased throughput, and ease of application representation for concurrent operation. The approach was to develop graph management mapping theory for proper performance, model multiprocessor performance, and demonstrate performance in selected hardware systems.
Towards "realistic" fault zones in a 3D structure model of the Thuringian Basin, Germany
NASA Astrophysics Data System (ADS)
Kley, J.; Malz, A.; Donndorf, S.; Fischer, T.; Zehner, B.
2012-04-01
3D computer models of geological architecture are evolving into a standard tool for visualization and analysis. Such models typically comprise the bounding surfaces of stratigraphic layers and faults. Faults affect the continuity of aquifers and can themselves act as fluid conduits or barriers. This is one reason why a "realistic" representation of faults in 3D models is desirable. Even so, many existing models treat faults in a simplistic fashion, e.g. as vertical downward projections of fault traces observed at the surface. Besides being geologically and mechanically unreasonable, this also causes technical difficulties in the modelling workflow. Most natural faults are inclined and may change dip according to rock type or flatten into mechanically weak layers. Boreholes located close to a fault can therefore cross it at depth, resulting in stratigraphic control points allocated to the wrong block. Also, faults tend to split up into several branches, forming fault zones. Obtaining a more accurate representation of faults and fault zones is therefore challenging. We present work in progress from the Thuringian Basin in central Germany. The fault zone geometries are never fully constrained by data and must be extrapolated to depth. We use balancing of serial, parallel cross-sections to constrain subsurface extrapolations. The structure sections are checked for consistency by restoring them to an undeformed state. If this is possible without producing gaps or overlaps, the interpretation is considered valid (but not unique) for a single cross-section. Additional constraints are provided by comparison of adjacent cross-sections: structures should change continuously from one section to another. Also, from the deformed and restored cross-sections we can measure the strain incurred during deformation. Strain should be compatible among the cross-sections: if it varies at all, it should vary smoothly and systematically along a given fault zone. The stratigraphic contacts and faults in the resulting grid of parallel balanced sections are then interpolated into a gOcad model containing stratigraphic boundaries and faults as triangulated surfaces. The interpolation is also controlled by borehole data located off the sections and by the surface traces of stratigraphic boundaries. We have written customized scripts to largely automate this step, with particular attention to a seamless fit between stratigraphic surfaces and fault planes, which share the same nodes and segments along their contacts. Additional attention was paid to the creation of a uniform triangulated grid with maximized angles. This ensures that uniform triangulated volumes can be created for further use in numerical flow modelling. An as yet unsolved problem is the implementation of the fault zones and their hydraulic properties in a large-scale model of the entire basin. Short-wavelength folds and subsidiary faults control which aquifers and seals are juxtaposed across the fault zones. It is impossible to include these structures in the regional model, but neglecting them would result in incorrect assessments of hydraulic links or barriers. We presently plan to test and calibrate the hydraulic properties of the fault zones in smaller, high-resolution models and then to implement geometrically simple "equivalent" fault zones with appropriate, variable transmissivities between specific aquifers.
Structural Analysis of the Pärvie Fault in Northern Scandinavia
NASA Astrophysics Data System (ADS)
Baeckstroem, A.; Rantakokko, N.; Ask, M. V.
2011-12-01
The Pärvie fault is the largest known postglacial fault in the world, with a length of about 160 km. The structure has a dominant fault scarp as its western perimeter, but in several locations it is rather a system of several faults. The current fault scarps, mainly caused by reverse faulting, are on average 10-15 m in height and are thought to have been formed during one momentous event near the end of the latest glaciation (the Weichselian, 9,500-115,000 BP) (Lagerbäck & Sundh, 2008). This information has been learned from studying deformation features in sediments from the latest glaciation. However, the fault is believed to have formed as early as the Precambrian, and it has been reactivated repeatedly throughout its history. The earlier history of this fault zone is still largely unknown. Here we present a pre-study for the scientific drilling project "Drilling Active Faults in Northern Europe", which was submitted to the International Continental Scientific Drilling Program (ICDP) in 2009 (Kukkonen et al., 2010), with an ICDP-sponsored workshop in 2010 (Kukkonen et al., 2011). During this workshop, a major issue to be addressed before the start of drilling was to reveal whether the fault scarps were formed by one big earthquake or by several small ones (Kukkonen et al., 2011). Initial results from a structural analysis by Riad (1990) suggest that the latest kinematic event coincides with the recent stress field, causing a transpressional effect. The geometrical model suggested for an extensive area of several fault scarps along the structure is the compressive tulip structure. In the southern part, where the fault dips steeply E, the structure is parallel to the foliation of the country rock and earlier breccias, indicating a dependence on earlier structures. Modelling of the stress field during the latest glaciation shows that a reverse background stress field together with excess pore pressure governs the destabilization of a structure such as the Pärvie fault, rather than the induced stresses from the weight of the ice sheet (Lund et al., 2009). This is a presentation of the first part of the structural analysis of the brittle structures around the Pärvie fault, undertaken in order to evaluate its brittle deformation history and to attempt to constrain the paleostress fields causing these deformations. References: Kukkonen, I.T., Olesen, O., Ask, M.V.S., and the PFDP Working Group, 2010. Postglacial faults in Fennoscandia: targets for scientific drilling. GFF, 132:71-81. Kukkonen, I.T., Ask, M.V.S., Olesen, O., 2011. Postglacial Fault Drilling in Northern Europe: Workshop in Skokloster, Sweden. Scientific Drilling, 11, doi:10.2204/iodp.sd.11.08.2011. Lagerbäck, R. & Sundh, M., 2008. Early Holocene faulting and paleoseismicity in northern Sweden. Geological Survey of Sweden, Research Paper C 836, 80 p. Lund, B., Schmidt, P., Hieronymus, C., 2009. Stress evolution and fault stability during the Weichselian glacial cycle. Swedish Nuclear Fuel and Waste Management Co., Stockholm, TR-09-15, 106 p. Riad, L., 1990. The Pärvie fault, Northern Sweden. Uppsala University, Research Report 63, 48 p.
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
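The estimation side of the search can be sketched as follows, assuming a linearized map H from health parameters to sensed outputs: each candidate suite is scored by the trace of its least-squares estimation error covariance, and an exhaustive search over optional sensors augments a fixed baseline, mirroring the procedure described. H and the noise levels are randomly generated stand-ins, not engine data.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_health = 8, 3
H = rng.normal(size=(n_sensors, n_health))       # sensor sensitivity matrix
sigma = 0.1 + 0.2 * rng.random(n_sensors)        # per-sensor noise std

def estimation_cost(suite):
    # Sum of squared estimation errors for the health parameters, taken as
    # the trace of the weighted least-squares error covariance (CRLB-like).
    rows = list(suite)
    Hs = H[rows]
    R_inv = np.diag(1.0 / sigma[rows] ** 2)
    return float(np.trace(np.linalg.inv(Hs.T @ R_inv @ Hs)))

baseline = (0, 1)                                # always-included sensors
best = min((baseline + extra
            for extra in combinations(range(2, n_sensors), 2)),
           key=estimation_cost)
print("optimal suite:", best, " cost:", round(estimation_cost(best), 3))
```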
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected. 
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
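A stand-in sketch of the 1-class anomaly-detection idea using scikit-learn's OneClassSVM, a related kernel machine rather than the thesis's Bayesian LS-SVM: the detector is trained on baseline data only, and its decision scores are squashed into a crude 'probability of health'. Data, parameters, and the squashing function are invented for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, size=(500, 2))          # nominal fault indicators
online = np.vstack([rng.normal(0.0, 1.0, size=(5, 2)),  # nominal operation
                    rng.normal(4.0, 0.5, size=(5, 2))]) # incipient fault

# Train on baseline only; no fault examples are needed, as in the thesis.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(baseline)
scores = detector.decision_function(online)             # >0 nominal, <0 anomalous
health = 1.0 / (1.0 + np.exp(-5.0 * scores))            # crude probability of health
print(np.round(health, 2))   # near 1 for nominal rows, near 0 for faulty rows
```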
A systematic risk management approach employed on the CloudSat project
NASA Technical Reports Server (NTRS)
Basilio, R. R.; Plourde, K. S.; Lam, T.
2000-01-01
The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.
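A simplified static fault-tree evaluation of the kind such a system-level tree reduces to, assuming independent basic events; the gate structure and probabilities below are hypothetical, not CloudSat numbers.

```python
# AND/OR gate evaluation over independent basic-event probabilities.
def AND(*ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

def OR(*ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

p_thruster = OR(1e-3, 2e-4)               # either valve fault loses the thruster
p_attitude = AND(p_thruster, 5e-2)        # and the backup fails to take over
p_loss_of_mission = OR(p_attitude, 3e-4)  # or a single-point avionics fault
print(f"P(fail nominal mission success criteria) ~ {p_loss_of_mission:.2e}")
```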
Modeling Coupled Processes for Multiphase Fluid Flow in Mechanically Deforming Faults
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Pike, D. Q.
2011-12-01
Modeling of coupled hydrological-mechanical processes in fault zones is critical for understanding the long-term behavior of fluids within the shallow crust. Here we utilize a previously developed cellular-automata (CA) model to define the evolution of permeability within a 2-D fault zone under compressive stress. At each time step, the CA model calculates the increase in fluid pressure within the fault at every grid cell. Pressure surpassing a critical threshold (e.g., lithostatic stress) causes a rupture in that cell, and pressure is then redistributed across the neighboring cells. The rupture can cascade through the spatial domain and continue across multiple time steps. Stress continues to increase and the size and location of rupture events are recorded until a percolating backbone of ruptured cells exists across the fault. Previous applications of this model consider uncorrelated random fields for the compressibility of the fault material. The prior focus on uncorrelated property fields is consistent with development of a number of statistical physics models including percolation processes and fracture propagation. However, geologic materials typically express spatial correlation and this can have a significant impact on the results of the pressure and permeability distributions. We model correlation of the fault material compressibility as a multiGaussian random field with a correlation length defined as the full-width at half maximum (FWHM) of the kernel used to create the field. The FWHM is varied from < 0.001 to approximately 0.47 of the domain size. The addition of spatial correlation to the compressibility significantly alters the model results including: 1) Accumulation of larger amounts of strain prior to the first rupture event; 2) Initiation of the percolating backbone at lower amounts of cumulative strain; 3) Changes in the event size distribution to a combined power-law and exponential distribution with a smaller power; and 4) Evolution of the spatial-temporal distribution of rupture event locations from a purely Poisson process to a complex pattern of clustered events with periodic patterns indicative of emergent phenomena. Switching the stress field from compressive to quiescent, or extensional, during the CA simulation results in a fault zone with a complex permeability pattern and disconnected zones of over-pressured fluid that serves as the initial conditions for simulation of capillary invasion of a separate fluid phase. We use Modified Invasion Percolation to simulate the invasion of a less dense fluid into the fault zone. Results show that the variability in fluid displacement measures caused by the heterogeneous permeability field and initial pressure conditions are significant. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000
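A one-dimensional toy version of the CA loading-rupture cascade, with uncorrelated thresholds (spatial correlation would be introduced by smoothing the threshold field with a kernel, per the paper): pressure loads slowly, over-threshold cells rupture and redistribute most of their pressure to neighbours, and cascade sizes are recorded. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
pressure = np.zeros(n)
threshold = 1.0 + 0.2 * rng.random(n)       # uncorrelated cell strength

events = []
for step in range(5000):
    pressure += 0.001                        # slow loading each time step
    size = 0
    # Cascade: ruptures can trigger neighbours within the same time step.
    while (over := np.flatnonzero(pressure >= threshold)).size:
        size += over.size
        for i in over:
            excess = pressure[i]
            pressure[i] = 0.0                # ruptured cell relaxes
            for j in (i - 1, i + 1):         # push pressure to neighbours;
                if 0 <= j < n:               # 20% is dissipated, so every
                    pressure[j] += 0.4 * excess  # cascade terminates
    if size:
        events.append(size)

print("events:", len(events), " largest cascade:", max(events))
```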
Reliability issues in active control of large flexible space structures
NASA Technical Reports Server (NTRS)
Vandervelde, W. E.
1986-01-01
Efforts in this reporting period were centered on four research tasks: design of failure detection filters for robust performance in the presence of modeling errors, design of generalized parity relations for robust performance in the presence of modeling errors, design of failure-sensitive observers using the geometric system theory of Wonham, and computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management.
Prognostic and health management of active assets in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Vivek; Lybeck, Nancy; Pham, Binh T.
This study presents the development of diagnostic and prognostic capabilities for active assets in nuclear power plants (NPPs). The research was performed under the Advanced Instrumentation, Information, and Control Technologies Pathway of the Light Water Reactor Sustainability Program. Idaho National Laboratory researched, developed, implemented, and demonstrated diagnostic and prognostic models for generator step-up transformers (GSUs). The Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software developed by the Electric Power Research Institute was used to perform diagnosis and prognosis. As part of the research activity, Idaho National Laboratory implemented 22 GSU diagnostic models in the Asset Fault Signature Database and two well-established GSU prognostic models for the paper winding insulation in the Remaining Useful Life Database of the FW-PHM Suite. The implemented models, along with a simulated fault data stream, were used to evaluate the diagnostic and prognostic capabilities of the FW-PHM Suite. Knowledge of the operating condition of plant assets gained from diagnosis and prognosis is critical for the safe, productive, and economical long-term operation of the current fleet of NPPs. This research addresses some of the gaps in the current state of technology development and enables effective application of diagnostics and prognostics to nuclear plant assets.
Bayesian-network-based safety risk assessment for steel construction projects.
Leu, Sou-Sen; Chang, Ching-Miao
2013-05-01
There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods ineffectively address dependencies among safety factors at various levels and thus fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study develops a safety risk-assessment model for SC projects by establishing Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model supports site safety management by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in the BN. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites.
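The FT-to-BN idea can be shown in miniature: an OR gate becomes a deterministic node over independent causes, and conditioning on the top event yields posterior probabilities that rank the causes, which is the early-warning mechanism described. The causes and priors below are hypothetical, not the study's SC-project factors.

```python
from itertools import product

# Hypothetical basic events (safety factors) with prior probabilities.
priors = {"no_guardrail": 0.10, "unsecured_tools": 0.20, "no_harness": 0.05}

def joint(assign, accident):
    # Product of independent priors, times the deterministic OR gate:
    # an accident occurs if and only if at least one cause is present.
    p = 1.0
    for var, prior in priors.items():
        p *= prior if assign[var] else 1.0 - prior
    occurred = any(assign.values())
    return p if occurred == accident else 0.0

assignments = [dict(zip(priors, bits)) for bits in product([0, 1], repeat=3)]
p_accident = sum(joint(a, True) for a in assignments)
for var in priors:
    post = sum(joint(a, True) for a in assignments if a[var]) / p_accident
    print(f"P({var} | accident) = {post:.2f}")   # ranks candidate causes
```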
NASA Astrophysics Data System (ADS)
Kissling, W. M.; Villamor, P.; Ellis, S. M.; Rae, A.
2018-05-01
Present-day geothermal activity on the margins of the Ngakuru graben and evidence of fossil hydrothermal activity in the central graben suggest that a graben-wide system of permeable intersecting faults acts as the principal conduit for fluid flow to the surface. We have developed numerical models of fluid and heat flow in a regional-scale 2-D cross-section of the Ngakuru Graben. The models incorporate simplified representations of two 'end-member' fault architectures (one symmetric at depth, the other highly asymmetric) which are consistent with the surface locations and dips of the Ngakuru graben faults. The models are used to explore controls on buoyancy-driven convective fluid flow which could explain the differences between the past and present hydrothermal systems associated with these faults. The models show that the surface flows from the faults are strongly controlled by the fault permeability, the fault system architecture and the location of the heat source with respect to the faults in the graben. In particular, fault intersections at depth allow exchange of fluid between faults, and the location of the heat source on the footwall of normal faults can facilitate upflow along those faults. These controls give rise to two distinct fluid flow regimes in the fault network. The first, a regular flow regime, is characterised by a nearly unchanging pattern of fluid flow vectors within the fault network as the fault permeability evolves. In the second, complex flow regime, the surface flows depend strongly on fault permeability, and can fluctuate in an erratic manner. The direction of flow within faults can reverse in both regimes as fault permeability changes. Both flow regimes provide insights into the differences between the present-day and fossil geothermal systems in the Ngakuru graben. Hydrothermal upflow along the Paeroa fault seems to have occurred, possibly continuously, for tens of thousands of years, while upflow in other faults in the graben has switched on and off during the same period. An asymmetric graben architecture with the Paeroa being the major boundary fault will facilitate the predominant upflow along this fault. Upflow on the axial faults is more difficult to explain with this modelling. It occurs most easily with an asymmetric graben architecture and heat sources close to the graben axis (which could be associated with remnant heat from recent eruptions from Okataina Volcanic Centre). Temporal changes in upflow can also be associated with acceleration and deceleration of fault activity if this is considered a proxy for fault permeability. Other explanations for temporal variations in hydrothermal activity not explored here are different permeability on different faults, and different permeability along fault strike.
Seismic Hazard Analysis on a Complex, Interconnected Fault Network
NASA Astrophysics Data System (ADS)
Page, M. T.; Field, E. H.; Milner, K. R.
2017-12-01
In California, seismic hazard models have evolved from simple, segmented prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example the Kaikōura earthquake, as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al., 2017), have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity even in modern models such as UCERF3 may be underestimated, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well mapped and where it is not: an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults but not both, this method considers sensor faults, actuator faults, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
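A minimal sketch of residual evaluation with an adaptive threshold, one ingredient of the evaluation module described: the threshold tracks a floor plus a margin scaled by the recent residual variability, so transients raise the bar while a persistent fault still trips it. The recursion and all parameters are illustrative assumptions, not the published scheme.

```python
import numpy as np

def evaluate(residuals, alpha=0.02, k=4.0, floor=0.05):
    # Exponentially weighted estimates of residual level and variance drive
    # a threshold that adapts to operating-condition-induced variability.
    level, var, flags = 0.0, 0.0, []
    for r in residuals:
        level = (1 - alpha) * level + alpha * r
        var = (1 - alpha) * var + alpha * (r - level) ** 2
        threshold = floor + k * np.sqrt(var)
        flags.append(abs(r) > threshold)
    return flags

rng = np.random.default_rng(0)
r = np.concatenate([0.01 * rng.standard_normal(200),          # nominal residual
                    0.4 + 0.01 * rng.standard_normal(50)])    # bias fault onset
flags = evaluate(r)
print("first detection at sample", flags.index(True))          # ~200
```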
A footwall system of faults associated with a foreland thrust in Montana
NASA Astrophysics Data System (ADS)
Watkinson, A. J.
1993-05-01
Some recent structural geology models of faulting have promoted the idea of a rigid footwall behaviour or response under the main thrust fault, especially for fault ramps or fault-bend folds. However, a very well-exposed thrust fault in the Montana fold and thrust belt shows an intricate but well-ordered system of subsidiary minor faults in the footwall position with respect to the main thrust fault plane. Considerable shortening has occurred off the main fault in this footwall collapse zone and the distribution and style of the minor faults accord well with published patterns of aftershock foci associated with thrust faults. In detail, there appear to be geometrically self-similar fault systems from metre length down to a few centimetres. The smallest sets show both slip and dilation. The slickensides show essentially two-dimensional displacements, and three slip systems were operative—one parallel to the bedding, and two conjugate and symmetric about the bedding (acute angle of 45-50°). A reconstruction using physical analogue models suggests one possible model for the evolution and sequencing of slip of the thrust fault system.
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
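The selection loop described above pairs a genetic algorithm with a merit function. The sketch below shows that loop in miniature, with an invented fault-signature matrix standing in for the engine-model simulations and a worst-case fault-separability score standing in for the paper's merit algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
N_SENSORS, N_FAULTS, SUITE = 20, 6, 5
# Hypothetical fault-signature matrix: expected deviation of each candidate
# sensor (columns) under each targeted fault (rows).
S = rng.normal(size=(N_FAULTS, N_SENSORS))

def merit(mask):
    """Quality measure: worst-case separability between fault pairs
    seen through the selected sensor subset."""
    sub = S[:, mask]
    return min(np.linalg.norm(sub[i] - sub[j])
               for i in range(N_FAULTS) for j in range(i + 1, N_FAULTS))

def random_suite():
    m = np.zeros(N_SENSORS, bool)
    m[rng.choice(N_SENSORS, SUITE, replace=False)] = True
    return m

pop = [random_suite() for _ in range(40)]
for _ in range(60):                        # iterative improvement
    pop.sort(key=merit, reverse=True)
    parents, children = pop[:20], []
    for _ in range(20):                    # crossover + repair to fixed size
        a, b = rng.choice(20, 2, replace=False)
        union = np.flatnonzero(parents[a] | parents[b])
        child = np.zeros(N_SENSORS, bool)
        child[rng.choice(union, SUITE, replace=False)] = True
        if rng.random() < 0.3:             # mutation: swap one sensor
            on, off = np.flatnonzero(child), np.flatnonzero(~child)
            child[rng.choice(on)] = False
            child[rng.choice(off)] = True
        children.append(child)
    pop = parents + children

best = max(pop, key=merit)
print("selected sensors:", np.flatnonzero(best), "merit:", round(merit(best), 3))
```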
Deformation associated with continental normal faults
NASA Astrophysics Data System (ADS)
Resor, Phillip G.
Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by interferometric synthetic aperture radar (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master normal fault illustrate how these secondary structures influence the deformation in ways that are similar to fault/fold geometry mapped in the western Grand Canyon. Specifically, synthetic faults amplify hanging wall bedding dips, antithetic faults reduce dips, and joints act to localize deformation. The distribution of aftershocks in the hanging wall of the Kozani-Grevena earthquake suggests that secondary structures may accommodate strains associated with slip on a master fault during postseismic deformation.
A hierarchical distributed control model for coordinating intelligent systems
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1991-01-01
A hierarchical distributed control (HDC) model for coordinating cooperative problem-solving among intelligent systems is described. The model was implemented using SOCIAL, an innovative object-oriented tool for integrating heterogeneous, distributed software systems. SOCIAL embeds applications in 'wrapper' objects called Agents, which supply predefined capabilities for distributed communication, control, data specification, and translation. The HDC model is realized in SOCIAL as a 'Manager' Agent that coordinates interactions among application Agents. The HDC Manager: indexes the capabilities of application Agents; routes request messages to suitable server Agents; and stores results in a commonly accessible 'Bulletin-Board'. This centralized control model is illustrated in a fault diagnosis application for launch operations support of the Space Shuttle fleet at NASA Kennedy Space Center.
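A schematic of the Manager Agent pattern described above. SOCIAL itself was a distributed integration tool; this toy Python stand-in only mirrors the three Manager responsibilities (capability index, request routing, bulletin board), and the agent names and payloads are invented:

```python
class Agent:
    """Wrapper object exposing named capabilities (a stand-in for SOCIAL's
    Agent wrappers; the real tool also handled distributed messaging and
    data translation)."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities    # {capability: handler}

    def handle(self, capability, payload):
        return self.capabilities[capability](payload)

class HDCManager:
    """Centralized coordinator: indexes capabilities, routes requests,
    and posts results to a commonly accessible bulletin board."""
    def __init__(self):
        self.index = {}                     # capability -> serving Agent
        self.bulletin_board = []            # shared results store

    def register(self, agent):
        for cap in agent.capabilities:
            self.index[cap] = agent

    def request(self, capability, payload):
        server = self.index[capability]     # route to a suitable server Agent
        result = server.handle(capability, payload)
        self.bulletin_board.append((capability, server.name, result))
        return result

# Toy usage loosely echoing the launch-operations example.
diag = Agent("diagnoser", {"diagnose": lambda sym: f"fault isolated from: {sym}"})
mgr = HDCManager()
mgr.register(diag)
print(mgr.request("diagnose", "low hydraulic pressure"))
print(mgr.bulletin_board)
```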
NASA Astrophysics Data System (ADS)
Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel
2018-07-01
Accurate reservoir characterization is needed throughout the development of an oil and gas field study. It helps build 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behavior. At a later stage of field development, reservoir characterization can also help decide which recovery techniques should be used for fluid extraction. In complex media, such as faulted reservoirs, predicting flow behavior within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults and which potential barriers to fluid flow exist. Solving these issues rests on accurate fault characterization. In most cases, faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic data is not kept during seismic inversion and further interpretation of the result. The goal of our study is first to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated by physical attributes extrapolated from well log data following the deposition mode; however, usual a priori model building methods respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method to each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been used directly to model synthetic seismic data in our case study. Synthetic seismic data computed from our 3D fault network model give much lower residuals than a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. Comparing the rock impedance results obtained in the two cases shows a better delineation of the Brent reservoir compartments when using the 3D faulted a priori model built with our method.
NASA Astrophysics Data System (ADS)
Lee, En-Jui; Chen, Po
2017-04-01
More precise spatial descriptions of fault systems play an essential role in tectonic interpretations, deformation modeling, and seismic hazard assessments. Recently developed full-3D waveform tomography techniques provide high-resolution images and are able to image material property differences across faults, assisting the understanding of fault systems. In the updated seismic velocity model for Southern California, CVM-S4.26, many velocity gradients show consistency with surface geology and with major faults defined in the Community Fault Model (CFM) (Plesch et al. 2007), which was constructed using various geological and geophysical observations. In addition to faults in the CFM, CVM-S4.26 reveals a velocity reversal mainly beneath the San Gabriel Mountain and Western Mojave Desert regions, which is correlated with the detachment structure that has also been found in other independent studies. The high-resolution tomographic images of CVM-S4.26 could assist the understanding of fault systems in Southern California and therefore benefit the development of fault models as well as other applications, such as seismic hazard analysis, tectonic reconstruction, and crustal deformation modeling.
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
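As a concrete illustration of why the special gates matter, the sketch below solves the simplest sequence-dependent construct, a cold-spare gate, as a small continuous-time Markov chain. The failure rates are illustrative; this is only the Markov half of the Markov/BDD combination that Galileo automates, not Galileo itself:

```python
import numpy as np
from scipy.linalg import expm

# Cold-spare (CSP) gate: the spare cannot fail while dormant, which is
# exactly the sequence dependency that static fault trees cannot express.
lam_primary, lam_spare = 1e-3, 2e-3   # illustrative failure rates (per hour)

# CTMC states: 0 = primary running, 1 = spare running, 2 = system failed.
Q = np.array([[-lam_primary, lam_primary,       0.0],
              [0.0,          -lam_spare,  lam_spare],
              [0.0,           0.0,              0.0]])

for t in (100.0, 1000.0, 5000.0):
    p = expm(Q * t)[0]                # state distribution starting in state 0
    print(f"t={t:6.0f} h  unreliability={p[2]:.4f}")
```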
NASA Astrophysics Data System (ADS)
Piatyszek, E.; Voignier, P.; Graillot, D.
2000-05-01
One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged to the receiving water during rainy events. To meet these goals, managers have to equip the sewer networks with sensors and set up real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or a sensor fault (degrading the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events, it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists of comparing the sensor response with a forecast of this response. The forecast is provided by a model, more precisely by a state estimator: a Kalman filter. The Kalman filter provides not only a flow estimate but also a quantity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with Wald's binary sequential probability ratio test. Moreover, by cross-checking available information from several nodes of the network, a diagnosis of the detected anomalies is carried out. The method provided encouraging results during the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.
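A toy version of the detection chain described above: a scalar random-walk Kalman filter whose innovation feeds a repeated Wald sequential probability ratio test. The noise levels, SPRT design values, and injected bias are invented stand-ins for the sewer-flow model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar Kalman filter for a random-walk flow state (assumed variances).
q, r = 0.01, 0.25
x_hat, P = 0.0, 1.0

# Wald SPRT on the innovation: H0 ~ N(0, S) vs. H1 ~ N(mu1, S).
alpha, beta, mu1 = 0.01, 0.05, 1.0
upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
llr = 0.0

true_flow = np.cumsum(0.1 * rng.standard_normal(300))
z = true_flow + 0.5 * rng.standard_normal(300)
z[150:] += 1.5                               # sensor fault: bias from sample 150

for k, zk in enumerate(z):
    P += q                                   # predict (random-walk state)
    S = P + r                                # innovation variance
    innov = zk - x_hat                       # the Kalman 'innovation'
    K = P / S
    x_hat += K * innov                       # measurement update
    P *= 1 - K
    llr += (mu1 / S) * (innov - mu1 / 2.0)   # Wald log-likelihood increment
    if llr < lower:
        llr = 0.0                            # accept H0 and restart the test
    elif llr > upper:
        print(f"fault declared at sample {k}")
        break
```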
Shell Tectonics: A Mechanical Model for Strike-slip Displacement on Europa
NASA Technical Reports Server (NTRS)
Rhoden, Alyssa Rose; Wurman, Gilead; Huff, Eric M.; Manga, Michael; Hurford, Terry A.
2012-01-01
We introduce a new mechanical model for producing tidally-driven strike-slip displacement along preexisting faults on Europa, which we call shell tectonics. This model differs from previous models of strike-slip on icy satellites by incorporating a Coulomb failure criterion, approximating a viscoelastic rheology, determining the slip direction based on the gradient of the tidal shear stress rather than its sign, and quantitatively determining the net offset over many orbits. This model allows us to predict the direction of net displacement along faults and determine relative accumulation rate of displacement. To test the shell tectonics model, we generate global predictions of slip direction and compare them with the observed global pattern of strike-slip displacement on Europa in which left-lateral faults dominate far north of the equator, right-lateral faults dominate in the far south, and near-equatorial regions display a mixture of both types of faults. The shell tectonics model reproduces this global pattern. Incorporating a small obliquity into calculations of tidal stresses, which are used as inputs to the shell tectonics model, can also explain regional differences in strike-slip fault populations. We also discuss implications for fault azimuths, fault depth, and Europa's tectonic history.
NASA Astrophysics Data System (ADS)
Ye, Jiyang; Liu, Mian
2017-08-01
In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.
Curry, Magdalena A. E.; Barnes, Jason B.; Colgan, Joseph P.
2016-01-01
Common fault growth models diverge in predicting how faults accumulate displacement and lengthen through time. A paucity of field-based data documenting the lateral component of fault growth hinders our ability to test these models and fully understand how natural fault systems evolve. Here we outline a framework for using apatite (U-Th)/He thermochronology (AHe) to quantify the along-strike growth of faults. To test our framework, we first use a transect in the normal fault-bounded Jackson Mountains in the Nevada Basin and Range Province, then apply the new framework to the adjacent Pine Forest Range. We combine new and existing cross sections with 18 new and 16 existing AHe cooling ages to determine the spatiotemporal variability in footwall exhumation and evaluate models for fault growth. Three age-elevation transects in the Pine Forest Range show that rapid exhumation began along the range-front fault between approximately 15 and 11 Ma at rates of 0.2–0.4 km/Myr, ultimately exhuming approximately 1.5–5 km. The ages of rapid exhumation identified at each transect lie within data uncertainty, indicating concomitant onset of faulting along strike. We show that even in the case of growth by fault-segment linkage, the fault would achieve its modern length within 3–4 Myr of onset. Comparison with the Jackson Mountains highlights the inadequacies of spatially limited sampling. A constant fault-length growth model is the best explanation for our thermochronology results. We advocate that low-temperature thermochronology can be further utilized to better understand and quantify fault growth with broader implications for seismic hazard assessments and the coevolution of faulting and topography.
NASA Astrophysics Data System (ADS)
Bing, Xue; Yicai, Ji
2018-06-01
In order to understand directly and analyze accurately magnetotelluric (MT) data detected over anisotropic infinite faults, two-dimensional partial differential equations of the MT fields are used to establish a model of anisotropic infinite faults using the Fourier transform method. A multi-fault model is developed to extend the one-fault model. The transverse electric mode and transverse magnetic mode analytic solutions are derived using two-infinite-fault models. The infinite integral terms of the quasi-analytic solutions are discussed. The dual-fault model is computed using the finite element method to verify the correctness of the solutions. The MT responses of isotropic and anisotropic media are calculated to analyze the response functions produced by different anisotropic conductivity structures. The influence of the thickness and conductivity of the media on the MT responses is discussed, and the analytic principles are also given. The results are significant for understanding MT responses and for interpreting data from complex anisotropic infinite faults.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2008-01-01
The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
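The closing claim above concerns derivative estimation from noisy measurements. The sketch below uses a standard Savitzky-Golay differentiator, which is not the patented filter, purely to make the task concrete; the signal, window length, and noise level are invented:

```python
import numpy as np
from scipy.signal import savgol_filter

# Estimate the derivative of a noisy signal by fitting a local quadratic
# over a sliding 31-sample window and taking its slope.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
rng = np.random.default_rng(3)
signal = np.sin(2.0 * t) + 0.05 * rng.standard_normal(t.size)

d_est = savgol_filter(signal, window_length=31, polyorder=2,
                      deriv=1, delta=dt)
d_true = 2.0 * np.cos(2.0 * t)
print("RMS derivative error:", np.sqrt(np.mean((d_est - d_true) ** 2)))
```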
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1979-01-01
A viscoelastic model for deformation and stress associated with earthquakes is reported. The model consists of a rectangular dislocation (strike-slip fault) in a viscoelastic layer (lithosphere) lying over a viscoelastic half-space (asthenosphere). The time-dependent surface stresses are analyzed. The model predicts that near the fault a significant fraction of the stress that was reduced during the earthquake is recovered by viscoelastic softening of the lithosphere. By contrast, the strain shows very little change near the fault. The model also predicts that the stress changes associated with asthenospheric flow extend over a broader region than those associated with lithospheric relaxation, even though the peak value is less. The dependence of the displacements, strains, and stresses on fault parameters is studied. Peak values of strain and stress drop increase with increasing fault height and decrease with fault depth. Under many circumstances postseismic strains and stresses increase with decreasing depth to the lithosphere-asthenosphere boundary. Values of the strain and stress at points distant from the fault increase with fault area but are relatively insensitive to fault depth.
Managing Risk to Ensure a Successful Cassini/Huygens Saturn Orbit Insertion (SOI)
NASA Technical Reports Server (NTRS)
Witkowski, Mona M.; Huh, Shin M.; Burt, John B.; Webster, Julie L.
2004-01-01
I. Design: a) S/C designed to be largely single fault tolerant; b) Operate in flight-demonstrated envelope, with margin; and c) Strict compliance with requirements & flight rules. II. Test: a) Baseline, fault & stress testing using flight system testbeds (H/W & S/W); b) In-flight checkout & demos to remove first-time events. III. Failure Analysis: a) Critical-event-driven fault tree analysis; b) Risk mitigation & development of contingencies. IV. Residual Risks: a) Accepted pre-launch waivers to Single Point Failures; b) Unavoidable risks (e.g. natural disaster). V. Mission Assurance: a) Strict process for characterization of variances (ISAs, PFRs & Waivers); b) Full-time Mission Assurance Manager reports to Program Manager: 1) Independent assessment of compliance with institutional standards; 2) Oversight & risk assessment of ISAs, PFRs & Waivers etc.; and 3) Risk Management Process facilitator.
NASA Astrophysics Data System (ADS)
Glesener, G. B.; Peltzer, G.; Stubailo, I.; Cochran, E. S.; Lawrence, J. F.
2009-12-01
The Modeling and Educational Demonstrations Laboratory (MEDL) at the University of California, Los Angeles has developed a fourth version of the Elastic Rebound Strike-slip (ERS) Fault Model to be used to educate students and the general public about the process and mechanics of earthquakes from strike-slip faults. The ERS Fault Model is an interactive hands-on teaching tool which produces failure on a predefined fault embedded in an elastic medium, with adjustable normal stress. With the addition of an accelerometer sensor, called the Joy Warrior, the user can experience what it is like for a field geophysicist to collect and observe ground shaking data from an earthquake without having to experience a real earthquake. Two knobs on the ERS Fault Model control the normal and shear stress on the fault. Adjusting the normal stress knob will increase or decrease the friction on the fault. The shear stress knob displaces one side of the elastic medium parallel to the strike of the fault, resulting in changing shear stress on the fault surface. When the shear stress exceeds the threshold defined by the static friction of the fault, an earthquake on the model occurs. The accelerometer sensor then sends the data to a computer where the shaking of the model due to the sudden slip on the fault can be displayed and analyzed by the student. The experiment clearly illustrates the relationship between earthquakes and seismic waves. One of the major benefits of using the ERS Fault Model in undergraduate courses is that it helps to connect non-science students with the work of scientists. When students who are not accustomed to scientific thought are able to experience the scientific process first hand, a connection is made between the scientists and students. Connections like this might inspire a student to become a scientist, or promote the advancement of scientific research through public policy.
Model-Based Diagnostics for Propellant Loading Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.
2011-01-01
The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
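A minimal bootstrap particle filter in the spirit of the fault-identification step above. The one-state draining model, noise levels, and leak parameterization are invented stand-ins for the cryogenic loading physics; the filter should recover a severity near the simulated truth:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy drain model: level falls as c*(1+theta)*sqrt(h); theta > 0 is a
# leak fault whose severity we identify from noisy level measurements.
dt, c, meas_sigma = 0.1, 0.05, 0.02
def step(h, theta):
    return np.maximum(h - dt * c * (1 + theta) * np.sqrt(h), 0.0)

# Simulate the "true" system with a leak of severity 0.5.
h_true, theta_true = 1.0, 0.5
zs = []
for _ in range(200):
    h_true = step(h_true, theta_true)
    zs.append(h_true + meas_sigma * rng.standard_normal())

# Bootstrap particle filter over the joint state (h, theta).
N = 2000
h = np.full(N, 1.0)
theta = rng.uniform(0.0, 1.0, N)          # prior over fault severity
for z in zs:
    h = step(h, theta) + 1e-3 * rng.standard_normal(N)   # propagate
    w = np.exp(-0.5 * ((z - h) / meas_sigma) ** 2)       # likelihood weights
    w += 1e-300                                          # guard vs. underflow
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                     # multinomial resampling
    h = h[idx]
    theta = np.clip(theta[idx] + 0.01 * rng.standard_normal(N), 0.0, 1.0)

print(f"identified leak severity: {theta.mean():.2f} (truth {theta_true})")
```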
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It's difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it's a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, the dynamic observation vector in measurement space is processed by DTW to obtain an error vector containing the fault features of the system being tested. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
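The DTW stage is the part of the pipeline most easily shown in a few lines. A textbook dynamic-programming DTW distance, with invented test signals (the SOFM and HMM stages are omitted):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW: aligns two sequences that may be
    locally stretched or compressed in time, which is why the paper uses
    it to compare dynamic process vectors before classification."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-warped copy of a template still scores close; a different signal
# does not.
t = np.linspace(0, 1, 50)
template = np.sin(2 * np.pi * t)
warped = np.sin(2 * np.pi * t ** 1.3)   # same shape, nonuniform timing
other = np.cos(2 * np.pi * t)
print(dtw_distance(template, warped), dtw_distance(template, other))
```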
NASA Astrophysics Data System (ADS)
Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.
2014-12-01
Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from projecting near-surface data down-dip, or modeled from surface strain and potential field data alone.
Nguyen, Ba Nghiep; Hou, Zhangshuan; Last, George V.; ...
2016-09-29
This work develops a three-dimensional multiscale model to analyze a complex CO2 faulted reservoir that includes some key geological features of the San Andreas and nearby faults southwest of the Kimberlina site. The model uses the STOMP-CO2 code for flow modeling, coupled to the ABAQUS® finite element package for geomechanical analysis. A 3D ABAQUS® finite element model is developed that contains a large number of 3D solid elements with two nearly parallel faults whose damage zones and cores are discretized using the same continuum elements. Five zones with different mineral compositions are considered: shale, sandstone, fault-damaged sandstone, fault-damaged shale, and fault core. The rocks' elastic properties that govern their poroelastic behavior are modeled by an Eshelby-Mori-Tanaka approach (EMTA), which can account for up to 15 mineral phases. The permeability of fault damage zones, affected by crack density and orientations, is also predicted by an EMTA formulation. A STOMP-CO2 grid that exactly maps the ABAQUS® finite element model is built for coupled hydro-mechanical analyses. Simulations of the reservoir assuming three different crack-pattern situations (including crack volume fraction and orientation) for the fault damage zones are performed to predict the potential leakage of CO2 due to cracks that enhance the permeability of the fault damage zones. The results illustrate the important effect of crack orientation on fault permeability, which can lead to substantial leakage along the fault attained by the expansion of the CO2 plume. Potential hydraulic fracturing and the tendency for the faults to slip are also examined and discussed in terms of stress distributions and geomechanical properties.
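For reference, a minimal two-phase, spherical-inclusion special case of Mori-Tanaka homogenization, the family of methods behind the EMTA approach named above. EMTA itself is far more general (many mineral phases, crack-like inclusion shapes); the mineral moduli below are illustrative only:

```python
def mori_tanaka_spherical(Km, Gm, Ki, Gi, f):
    """Mori-Tanaka effective bulk (K) and shear (G) moduli for a two-phase
    composite with spherical inclusions of volume fraction f in a matrix
    (subscript m: matrix, i: inclusion). Standard textbook formulas that
    coincide with the Hashin-Shtrikman bound for this geometry."""
    K = Km + f / (1.0 / (Ki - Km) + 3.0 * (1 - f) / (3 * Km + 4 * Gm))
    G = Gm + f / (1.0 / (Gi - Gm)
                  + 6.0 * (1 - f) * (Km + 2 * Gm) / (5 * Gm * (3 * Km + 4 * Gm)))
    return K, G

# Illustrative numbers only: quartz-like grains in a clay-like matrix (GPa).
K, G = mori_tanaka_spherical(Km=12.0, Gm=6.0, Ki=37.0, Gi=44.0, f=0.4)
print(f"effective K = {K:.1f} GPa, G = {G:.1f} GPa")
```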
Advanced cloud fault tolerance system
NASA Astrophysics Data System (ADS)
Sumangali, K.; Benny, Niketa
2017-11-01
Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.
Hybrid automated reliability predictor integrated work station (HiREL)
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1991-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.
Analytic Study of Three-Dimensional Rupture Propagation in Strike-Slip Faulting with Analogue Models
NASA Astrophysics Data System (ADS)
Chan, Pei-Chen; Chu, Sheng-Shin; Lin, Ming-Lang
2014-05-01
Strike-slip faults are high-angle (or nearly vertical) fractures along which the blocks have moved along strike (nearly horizontally). Overburden soil profiles across the main faults of strike-slip systems reveal characteristic palm-tree and tulip structures. McCalpin (2005) traced rupture propagation on the overburden soil surface. In this study, we used sandbox model profiles at different slip offsets to study the evolution of three-dimensional rupture propagation in strike-slip faulting. In the strike-slip fault model, the type of rupture propagation and the width of the shear zone (W) are primarily affected by the depth of the overburden layer (H) and the fault slip distance (Sy). Little research has traced three-dimensional rupture behavior and propagation. Therefore, in this simplified sandbox model, we investigate rupture propagation and the shear zone in profiles across the main fault as the formation responds to overburden depth and fault slip. The parameters examined in the model included the width of the shear zone, the length of rupture (L), the angle of rupture (θ), and the spacing of ruptures. The surface results follow the literature: the evolution sequence of the failure envelope was R-faults, P-faults, and Y-faults, the last of which are parallel to the basement fault. Comparing surface and profile structures, which form curved faces that cross each other, defines the 3-D rupture and the width of the shear zone. We found that an increase in fault slip could produce a greater shear-zone width, and we propose a W/H versus Sy/H relationship. Deformation of the shear zone showed a trend similar to the literature, in that W increased with fault slip; however, the trend reversed after W peaked (when Sy/H was 1) at a value smaller than 1.5. The results show that the width W is limited to a constant value in 3-D models of strike-slip faulting. In conclusion, this study helps evaluate the extent of the regions influenced by the shear zone for strike-slip faults.
Undesirable leakage to overlying formations with horizontal and vertical injection wells
NASA Astrophysics Data System (ADS)
Mosaheb, M.; Zeidouni, M.
2017-12-01
Deep saline aquifers are considered for underground storage of carbon dioxide. Undesirable leakage of injected CO2 into adjacent layers would disturb the storage process and can pollute shallower fresh water resources as well as the atmosphere. Leaky caprocks, faults, and abandoned wells are examples of leakage pathways. In addition, overpressure can reactivate a sealing fault or damage the caprock layer. Pressure management can be applied during the storage operation to avoid these consequences and to reduce undesirable leakage. Fluids can be injected through horizontal wells over a wider interval than through vertical wells. Horizontal-well injection produces less overpressure by delocalizing the induced pressure, especially in thin formations. In this work, numerical and analytical approaches are applied to model different leakage pathways with horizontal and vertical injection wells. We compare leakage rate and overpressure for horizontal and vertical injection wells in different leakage-pathway systems. Results show that horizontal-well technology allows high injection rates with lower leakage rates for the leaky-well, leaky-fault, and leaky-caprock cases. Overpressure is reduced considerably by horizontal-well injection compared to vertical-well injection, especially in a leaky fault system. Horizontal-well injection is an effective method to avoid reaching the threshold pressure of fault reactivation and to prevent the consequent induced seismicity.
NASA Astrophysics Data System (ADS)
Rusu-Anghel, S.; Ene, A.
2017-05-01
The quality of electric energy capture and the operational safety of the equipment depend essentially on the technical state of the contact line (CL). The present method for determining the technical state of the CL, based on advance programming, is no longer efficient, because faults can occur in areas not covered by the programme and therefore cannot be remediated. A different management method is needed for the repair and maintenance of the CL, based on its real state, which must be known accurately. In this paper a new method for detecting faults in the CL is described. It is based on analyzing the variation of the pantograph-CL contact force in the dynamic regime. Using mathematical modelling and experimental tests, it was established that each type of fault generates a 'signature' in the contact force diagram. The identification of these signatures can be accomplished by an informatics system which provides the fault location, its type, and, in the future, the probable evolution of the technical state of the CL. The contact force is measured optically using a railway inspection trolley with appropriate equipment. The desired parameters can be analyzed in real time by a data acquisition system based on dedicated software.
An Improved Evidential-IOWA Sensor Data Fusion Approach in Fault Diagnosis
Zhou, Deyun; Zhuang, Miaoyan; Fang, Xueyi; Xie, Chunhe
2017-01-01
As an important tool of information fusion, Dempster–Shafer evidence theory is widely applied in handling uncertain information in fault diagnosis. However, an incorrect result may be obtained if the combined evidence is highly conflicting, which may lead to failure in locating the fault. To deal with this problem, an improved evidential-Induced Ordered Weighted Averaging (IOWA) sensor data fusion approach is proposed within the frame of Dempster–Shafer evidence theory. In the new method, the IOWA operator is used to determine the weight of each sensor data source; in determining the parameter of the IOWA, both the distance of evidence and the belief entropy are taken into consideration. First, based on the global distance of evidence and the global belief entropy, the α value of the IOWA is obtained. Simultaneously, a weight vector is given based on the maximum entropy method model. Then, according to the IOWA operator, the evidence is modified before applying Dempster's combination rule. The proposed method has a better performance in conflict management and fault diagnosis because the information volume of each piece of evidence is taken into consideration. A numerical example and a case study in fault diagnosis are presented to show the rationality and efficiency of the proposed method. PMID:28927017
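For context, Dempster's combination rule itself, the step the weighted evidence ultimately feeds into. This is a generic textbook implementation, not the paper's modified method; the fault hypotheses and mass values are invented:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same frame; focal
    elements are frozensets of hypotheses. Fails when evidence is totally
    conflicting (K = 1), the regime conflict-management methods target."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two sensors reporting on fault hypotheses F1, F2, F3.
F1, F2, F3 = frozenset("1"), frozenset("2"), frozenset("3")
m_sensor1 = {F1: 0.6, F2: 0.3, F1 | F2 | F3: 0.1}
m_sensor2 = {F1: 0.5, F3: 0.4, F1 | F2 | F3: 0.1}
print(dempster_combine(m_sensor1, m_sensor2))
```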
NASA Astrophysics Data System (ADS)
Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.
2006-12-01
More realistic models of crustal deformation are possible due to advances in measurements and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC) Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical, and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three-dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on the faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code PyLith. As a first step, the methodology of incorporating all these data is described. The time history of the stress and strain transfer between 1992 and 1999 is analyzed, as well as the time history of deformation from 1999 to the present.
Automated forward mechanical modeling of wrinkle ridges on Mars
NASA Astrophysics Data System (ADS)
Nahm, Amanda; Peterson, Samuel
2016-04-01
One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing for faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time-consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files (Suite 2) was created after the best-fit model from Suite 1 was determined, in which fault parameters were varied over smaller ranges with finer increments, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter reduces the RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best-fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
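A schematic of the batch workflow described above: sweep a pre-defined parameter grid through a forward model and keep the lowest-RMS fit. The forward model below is a dummy placeholder; in the study each parameter set becomes a generated Coulomb input file rather than a Python function call:

```python
import numpy as np
from itertools import product

def forward_model(x, D, dip, t, B):
    """Dummy stand-in for a Coulomb run: ridge-like topography whose shape
    depends on displacement D, dip, and upper/lower tip depths t and B."""
    return D * np.exp(-((x - 15.0) / (B - t + 5.0)) ** 2) * np.sin(np.radians(dip))

x = np.linspace(0.0, 30.0, 200)                     # km along profile
observed = forward_model(x, 80.0, 30.0, 1.0, 4.0)   # pretend topography (m)
observed += 2.0 * np.random.default_rng(5).standard_normal(x.size)

grid = product(np.arange(40, 121, 10.0),   # displacement D (m)
               np.arange(15, 46, 5.0),     # dip angle (deg)
               np.arange(0.5, 2.1, 0.5),   # upper tip depth t (km)
               np.arange(2.0, 6.1, 1.0))   # lower tip depth B (km)

best = min(grid, key=lambda p: np.sqrt(np.mean(
    (forward_model(x, *p) - observed) ** 2)))       # RMS misfit
print("best-fit (D, dip, t, B):", best)
```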
Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks
Huang, Rimao; Qiu, Xuesong; Rui, Lanlan
2011-01-01
Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance; this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the fault nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a simple random sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
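The core selection idea above is easy to make concrete. A minimal sketch, with an invented cluster layout: instead of a fixed manager node probing everything, draw a simple random sample of clusters each round and promote one node per sampled cluster to probe station:

```python
import random

# Invented topology: 20 clusters of 8 nodes each.
clusters = {f"cluster{i}": [f"n{i}_{j}" for j in range(8)] for i in range(20)}

def choose_probe_stations(clusters, k, seed=None):
    """Simple random sampling of k clusters; one probe station per cluster.
    Re-drawing each round spreads the probing energy across the network."""
    rng = random.Random(seed)
    sampled = rng.sample(sorted(clusters), k)
    return {c: rng.choice(clusters[c]) for c in sampled}

# Per the Pareto observation above, a small sample still tends to cover
# most fault-prone clusters over repeated rounds.
print(choose_probe_stations(clusters, k=5, seed=42))
```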
Product Support Manager Guidebook
2011-04-01
The package is being developed using supportability analysis concepts such as Failure Mode, Effects, and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Condition Based Maintenance Plus (CBM+), Failure Reporting and Corrective Action System (FRACAS), and Maintenance Task Analysis (MTA).
NASA Astrophysics Data System (ADS)
Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.
2013-12-01
Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multidisciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.
Intelligent Engine Systems Work Element 1.3: Sub System Health Management
NASA Technical Reports Server (NTRS)
Ashby, Malcolm; Simpson, Jeffrey; Singh, Anant; Ferguson, Emily; Frontera, mark
2005-01-01
The objectives of this program were to develop health monitoring systems and physics-based fault detection models for engine sub-systems including the start, lubrication, and fuel systems. These models will ultimately be used to provide more effective sub-system fault identification and isolation to reduce engine maintenance costs and engine down-time. Additionally, bearing sub-system health is addressed in this program through identification of sensing requirements, a review of available technologies, and a demonstration of a conceptual monitoring system for a differential roller bearing. This report is divided into four sections, one for each of the subtasks. The start system subtask is documented in section 2.0, the oil system is covered in section 3.0, the bearing sub-system in section 4.0, and the fuel system is presented in section 5.0.
V&V of Fault Management: Challenges and Successes
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Costello, Ken; Ohi, Don; Lu, Tiffany; Newhouse, Marilyn
2013-01-01
This paper describes the results of a special breakout session of the NASA Independent Verification and Validation (IV&V) Workshop held in the fall of 2012 entitled "V&V of Fault Management: Challenges and Successes." The NASA IV&V Program is in a unique position to interact with projects across all of the NASA development domains. Using this unique opportunity, the IV&V program convened a breakout session to enable IV&V teams to share their challenges and successes with respect to the V&V of Fault Management (FM) architectures and software. The presentations and discussions provided practical examples of pitfalls encountered while performing V&V of FM, including the lack of consistent designs for implementing fault monitors and the fact that FM information is not centralized but scattered among many diverse project artifacts. The discussions also solidified the need for an early commitment to developing FM in parallel with the spacecraft systems, as well as clearly defining FM terminology within a project.
NASA Astrophysics Data System (ADS)
Pinzuti, Paul; Mignan, Arnaud; King, Geoffrey C. P.
2010-10-01
Tectonic-stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localised magma intrusion, with normal faults accommodating extension and subsidence only above the maximum reach of the magma column. In these magmatic rifting models, or so-called magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Vertical profiles of normal fault scarps from a levelling campaign in the Asal Rift, where normal faults seem sub-vertical at surface level, have been analysed to discuss the creation and evolution of normal faults in massive fractured rocks (basalt lava flows), using mechanical and kinematic concepts. We show that the studied normal fault planes actually have an average dip ranging between 45° and 65° and are characterised by an irregular stepped form. We suggest that these normal fault scarps correspond to sub-vertical en echelon structures, and that, at greater depth, these scarps combine and give birth to dipping normal faults. The results of our analysis are compatible with the magmatic intrusion models rather than the tectonic-stretching models. The geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental breaking.
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors, and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
Growth trishear model and its application to the Gilbertown graben system, southwest Alabama
Jin, G.; Groshong, R.H.; Pashin, J.C.
2009-01-01
Fault-propagation folding associated with an upward propagating fault in the Gilbertown graben system is revealed by well-based 3-D subsurface mapping and dipmeter analysis. The fold is developed in the Selma chalk, which is an oil reservoir along the southern margin of the graben. Area-depth-strain analysis suggests that the Cretaceous strata were growth units, the Jurassic strata were pregrowth units, and the graben system is detached in the Louann Salt. The growth trishear model has been applied in this paper to study the evolution and kinematics of extensional fault-propagation folding. Models indicate that the propagation to slip (p/s) ratio of the underlying fault plays an important role in governing the geometry of the resulting extensional fault-propagation fold. With a greater p/s ratio, the fold is more localized in the vicinity of the propagating fault. The extensional fault-propagation fold in the Gilbertown graben is modeled by both a compactional and a non-compactional growth trishear model. Both models predict a similar geometry of the extensional fault-propagation fold. The trishear model with compaction best predicts the fold geometry. © 2008 Elsevier Ltd. All rights reserved.
A pilot GIS database of active faults of Mt. Etna (Sicily): A tool for integrated hazard evaluation
NASA Astrophysics Data System (ADS)
Barreca, Giovanni; Bonforte, Alessandro; Neri, Marco
2013-02-01
A pilot GIS-based system has been implemented for the assessment and analysis of hazard related to active faults affecting the eastern and southern flanks of Mt. Etna. The system structure was developed in the ArcGis® environment and consists of different thematic datasets that include spatially-referenced arc-features and an associated database. Arc-type features, georeferenced to the WGS84 Ellipsoid UTM zone 33 Projection, represent the five main fault systems that develop in the analysed region. The backbone of the GIS-based system is the large amount of information which was collected from the literature and then stored and properly geocoded in a digital database. This consists of thirty-five alpha-numeric fields which include all fault parameters available from the literature, such as location, kinematics, landform, slip rate, etc. Although the system has been implemented according to the most common procedures used by GIS developers, the architecture and content of the database represent a pilot backbone for digital storage of fault parameters, providing a powerful tool for modelling hazard related to the active tectonics of Mt. Etna. The database collects, organises and shares all currently available scientific information about the active faults of the volcano. Furthermore, thanks to the strong effort spent on defining the fields of the database, the structure proposed in this paper is open to the collection of further data coming from future improvements in the knowledge of the fault systems. By layering additional user-specific geographic information and managing the proposed database (topological querying), a great diversity of hazard and vulnerability maps can be produced by the user. This is a proposal of a backbone for a comprehensive geographical database of fault systems, universally applicable to other sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nancy J. Lybeck; Vivek Agarwal; Binh T. Pham
The Light Water Reactor Sustainability program at Idaho National Laboratory (INL) is actively conducting research to develop and demonstrate online monitoring (OLM) capabilities for active components in existing nuclear power plants. A pilot project is currently underway to apply OLM to Generator Step-Up Transformers (GSUs) and Emergency Diesel Generators (EDGs). INL and the Electric Power Research Institute (EPRI) are working jointly to implement the pilot project. The EPRI Fleet-Wide Prognostic and Health Management (FW-PHM) Software Suite will be used to implement monitoring in conjunction with utility partners: the Shearon Harris Nuclear Generating Station (owned by Duke Energy) for GSUs, and Braidwood Generating Station (owned by Exelon Corporation) for EDGs. This report presents monitoring techniques, fault signatures, and diagnostic and prognostic models for GSUs. GSUs are main transformers that are directly connected to generators, stepping up the voltage from the generator output voltage to the highest transmission voltages for supplying electricity to the transmission grid. Technical experts from Shearon Harris are assisting INL and EPRI in identifying critical faults and defining fault signatures associated with each fault. The resulting diagnostic models will be implemented in the FW-PHM Software Suite and tested using data from Shearon Harris. Parallel research on EDGs is being conducted and will be reported in an interim report during the first quarter of fiscal year 2013.
Software Testbed for Developing and Evaluating Integrated Autonomous Subsystems
NASA Technical Reports Server (NTRS)
Ong, James; Remolina, Emilio; Prompt, Axel; Robinson, Peter; Sweet, Adam; Nishikawa, David
2015-01-01
To implement fault tolerant autonomy in future space systems, it will be necessary to integrate planning, adaptive control, and state estimation subsystems. However, integrating these subsystems is difficult, time-consuming, and error-prone. This paper describes Intelliface/ADAPT, a software testbed that helps researchers develop and test alternative strategies for integrating planning, execution, and diagnosis subsystems more quickly and easily. The testbed's architecture, graphical data displays, and implementations of the integrated subsystems support easy plug and play of alternate components to support research and development in fault-tolerant control of autonomous vehicles and operations support systems. Intelliface/ADAPT controls NASA's Advanced Diagnostics and Prognostics Testbed (ADAPT), which comprises batteries, electrical loads (fans, pumps, and lights), relays, circuit breakers, inverters, and sensors. During plan execution, an experimenter can inject faults into the ADAPT testbed by tripping circuit breakers, changing fan speed settings, and closing valves to restrict fluid flow. The diagnostic subsystem, based on NASA's Hybrid Diagnosis Engine (HyDE), detects and isolates these faults to determine the new state of the plant, ADAPT. Intelliface/ADAPT then updates its model of the ADAPT system's resources and determines whether the current plan can be executed using the reduced resources. If not, the planning subsystem generates a new plan that reschedules tasks, reconfigures ADAPT, and reassigns the use of ADAPT resources as needed to work around the fault. The resource model, planning domain model, and planning goals are expressed using NASA's Action Notation Modeling Language (ANML). Parts of the ANML model are generated automatically, and other parts are constructed by hand using the Planning Model Integrated Development Environment, a visual Eclipse-based IDE that accelerates ANML model development. Because native ANML planners are currently under development and not yet sufficiently capable, the ANML model is translated into the New Domain Definition Language (NDDL) and sent to NASA's EUROPA planning system for plan generation. The adaptive controller executes the new plan, using augmented, hierarchical finite state machines to select and sequence actions based on the state of the ADAPT system. Real-time sensor data, commands, and plans are displayed in information-dense arrays of timelines and graphs that zoom and scroll in unison. A dynamic schematic display uses color to show the real-time fault state and utilization of the system components and resources. An execution manager coordinates the activities of the other subsystems. The subsystems are integrated using the Internet Communications Engine (ICE), an object-oriented toolkit for building distributed applications.
Havens: Explicit Reliable Memory Regions for HPC Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
2016-01-01
Supporting error resilience in future exascale-class supercomputing systems is a critical challenge. Due to transistor scaling trends and increasing memory density, scientific simulations are expected to experience more interruptions caused by transient errors in the system memory. Existing hardware-based detection and recovery techniques will be inadequate to manage the presence of high memory fault rates. In this paper we propose a partial memory protection scheme based on region-based memory management. We define the concept of regions called havens that provide fault protection for program objects. We provide reliability for the regions through a software-based parity protection mechanism. Our approach enables critical program objects to be placed in these havens. The fault coverage provided by our approach is application agnostic, unlike algorithm-based fault tolerance techniques.
NASA Astrophysics Data System (ADS)
Budach, Ingmar; Moeck, Inga; Lüschen, Ewald; Wolfgramm, Markus
2018-03-01
The structural evolution of faults in foreland basins is linked to a complex basin history ranging from extension to contraction and inversion tectonics. Faults in the Upper Jurassic of the German Molasse Basin, a Cenozoic Alpine foreland basin, play a significant role in geothermal exploration and are therefore imaged, interpreted, and studied using 3D seismic reflection data. Beyond this applied aspect, the analysis of these seismic data helps to better understand the temporal evolution of faults and the respective stress fields. In 2009, a 27 km2 3D seismic reflection survey was conducted around the Unterhaching Gt 2 well, south of Munich. The main focus of this study is an in-depth analysis of a prominent v-shaped fault block structure located at the center of the 3D seismic survey. Two methods were used to study the periodic fault activity and the relative ages of the detected faults: (1) horizon flattening and (2) analysis of incremental fault throws. Slip and dilation tendency analyses were conducted afterwards to determine the stresses resolved on the faults in the current stress field. Two possible kinematic models explain the structural evolution: one model assumes a left-lateral strike-slip fault in a transpressional regime resulting in a positive flower structure; the other incorporates crossing conjugate normal faults within a transtensional regime. The interpreted successive fault formation favours the latter model. The episodic fault activity may enhance fault zone permeability, and hence reservoir productivity, implying that the analysis of periodically active faults represents an important part of successfully targeting geothermal wells.
NASA Astrophysics Data System (ADS)
Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin
2016-07-01
Induced seismicity and fault reactivation associated with fluid injection and depletion have been reported in hydrocarbon, geothermal, and waste fluid injection fields worldwide. Here, we establish an analytical model to assess fault reactivation surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, induced stress analysis in a full-space under the plane strain condition is implemented based on Eshelby's theory of inclusions in terms of a homogeneous, isotropic, and poroelastic medium. The stress intensity factor concept in linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in surrounding rocks. To characterize the fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability in response to a unit fluid pressure change within a reservoir resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model focuses on the mechanical behavior of the whole fault, unlike conventional methodologies. The proposed method can be applied to engineering cases related to injection and depletion within a reservoir owing to its efficient computational implementation.
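As a hedged illustration of the fracture-mechanics ingredient described above, the sketch below evaluates the standard LEFM mode II stress intensity factor, K_II = Δτ√(πa), for faults of different half-lengths. The paper's specific formula for the fault reactivation factor η is not reproduced here, and all numeric values are placeholders rather than values from the study.

```python
import numpy as np

def mode2_sif(delta_tau, half_length):
    """Mode II stress intensity factor for a 2-D crack of the given
    half-length under a uniform shear stress change delta_tau
    (standard LEFM result: K_II = delta_tau * sqrt(pi * a))."""
    return delta_tau * np.sqrt(np.pi * half_length)

# Illustrative use: how a pressure-induced shear stress change of 0.5 MPa
# loads faults of different lengths (placeholder values, not from the paper).
for a in (100.0, 500.0, 2000.0):  # fault half-lengths in metres
    k2 = mode2_sif(0.5e6, a)
    print(f"a = {a:6.0f} m  ->  K_II = {k2 / 1e6:.2f} MPa*sqrt(m)")
```

The length dependence is the point of the demonstration: for the same stress change, a longer pre-existing fault concentrates more stress at its tips and is closer to reactivation.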
Zeng, Yuehua; Shen, Zheng-Kang
2017-01-01
We develop a crustal deformation model to determine fault-slip rates for the western United States (WUS) using the Zeng and Shen (2014) method, which is based on a combined inversion of Global Positioning System (GPS) velocities and geological slip-rate constraints. The model consists of six blocks with boundaries aligned along major faults in California and the Cascadia subduction zone, which are represented as buried dislocations in the Earth. Faults distributed within blocks have their geometrical structure and locking depths specified by the Uniform California Earthquake Rupture Forecast, version 3 (UCERF3) and the 2008 U.S. Geological Survey National Seismic Hazard Map Project model. Faults slip beneath a predefined locking depth, except for a few segments where shallow creep is allowed. The slip rates are estimated using a least-squares inversion. The model resolution analysis shows that the resulting model is influenced heavily by geologic input, which fits the UCERF3 geologic bounds on California B faults and ± one-half of the geologic slip rates for most other WUS faults. The modeled slip rates for the WUS faults are consistent with the observed GPS velocity field. Our fit to these velocities is measured in terms of a normalized chi-square, which is 6.5. This updated model fits the data better than most other geodetic-based inversion models. Major discrepancies between well-resolved GPS inversion rates and geologic-consensus rates occur along some of the northern California A faults, the Mojave to San Bernardino segments of the San Andreas fault, the western Garlock fault, the southern segment of the Wasatch fault, and other faults. Off-fault strain-rate distributions are consistent with regional tectonics, with total off-fault moment rates of 7.2×10^18 N·m/yr and 8.5×10^18 N·m/yr for California and the WUS outside California, respectively.
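A minimal sketch of the kind of bounded least-squares inversion described above, assuming a toy linear forward operator in place of the buried-dislocation Green's functions and invented prior rates; the ± one-half geologic bounds enter as box constraints on the slip rates.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy setup: d holds GPS velocities, G maps fault slip rates to velocities
# (in the real model G would come from buried-dislocation Green's functions).
rng = np.random.default_rng(0)
n_obs, n_faults = 40, 5
G = rng.normal(size=(n_obs, n_faults))
true_rates = np.array([35.0, 5.0, 2.0, 10.0, 1.0])   # mm/yr, invented
d = G @ true_rates + rng.normal(scale=0.5, size=n_obs)

# Geologic constraints enter as bounds of +/- one-half around prior rates.
prior = np.array([30.0, 6.0, 2.5, 9.0, 1.2])          # invented geologic rates
res = lsq_linear(G, d, bounds=(0.5 * prior, 1.5 * prior))
print("estimated slip rates (mm/yr):", np.round(res.x, 2))
```

The bounded solver returns the least-squares slip rates that stay within the geologic envelope, which is the essential mechanism by which geologic input dominates poorly resolved faults.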
Spectral element modelling of fault-plane reflections arising from fluid pressure distributions
Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.
2007-01-01
The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM, followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model under the assumption of weak scattering. This allows us to use the empirical relationships between density, velocity and effective stress from the South Eugene Island field to relate a slip interface to an amount of excess pore pressure in a fault zone. © 2007 The Authors. Journal compilation © 2007 RAS.
Modeling and Fault Simulation of Propellant Filling System
NASA Astrophysics Data System (ADS)
Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo
2012-05-01
The propellant filling system is one of the key ground plants at the launch site of rockets that use liquid propellant. There is an urgent demand to ensure and improve its reliability and safety, and Failure Mode and Effects Analysis (FMEA) is a well-suited approach. Driven by the need for more fault information for FMEA, and because of the high expense of propellant filling tests, this paper studies the working process of the propellant filling system under fault conditions through simulation in AMESim. First, based on an analysis of its structure and function, the filling system was decomposed into modules and mathematical models of every module were given, from which the whole filling system was modeled in AMESim. Second, a general method of injecting faults into a dynamic system was proposed; as an example, two typical faults, leakage and blockage, were injected into the model of the filling system, yielding two fault models in AMESim. Fault simulations were then run, and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model simulates the two faults effectively and can be used to guide maintenance and improvement of the filling system.
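The fault-injection idea can be illustrated outside AMESim with a toy lumped model. The sketch below, with invented parameters and a single tank-level state, switches on a leakage or blockage term at a chosen time and integrates forward; it is a conceptual stand-in for the paper's AMESim models, not a reproduction of them.

```python
import numpy as np

def simulate_filling(t_end=100.0, dt=0.1, leak_at=None, block_at=None):
    """Toy lumped model of a filling line: a tank level driven by a pump flow.
    Leakage adds a level-dependent outflow; blockage scales down pump flow.
    (Illustrative only -- the paper builds its models in AMESim.)"""
    t = np.arange(0.0, t_end, dt)
    level = np.zeros_like(t)
    q_pump, leak_coeff, block_factor = 1.0, 0.0, 1.0
    for i in range(1, len(t)):
        if leak_at is not None and t[i] >= leak_at:
            leak_coeff = 0.3            # injected leakage fault
        if block_at is not None and t[i] >= block_at:
            block_factor = 0.4          # injected blockage fault
        dlevel = block_factor * q_pump - leak_coeff * np.sqrt(max(level[i - 1], 0.0))
        level[i] = level[i - 1] + dt * dlevel
    return t, level

t, nominal = simulate_filling()
_, faulty = simulate_filling(leak_at=40.0)
print("final level, nominal vs leak:", nominal[-1], faulty[-1])
```

Comparing the nominal and faulty trajectories of such key parameters is exactly the kind of information an FMEA entry needs: the observable signature of each injected fault.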
Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil
2010-01-01
We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned in implementing the approach.
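A minimal sketch of qualitative fault-signature matching in the spirit described above (not the competition code): each fault hypothesis predicts a sign of deviation per measurement, and candidates are retained only if they are consistent with every observed deviation. The fault names and signatures below are invented for illustration.

```python
# Each fault maps to the qualitative deviations it predicts per measurement:
# "+" above model prediction, "-" below, "0" nominal. Topology invented.
FAULT_SIGNATURES = {
    "battery_degraded": {"voltage": "-", "current": "0", "temperature": "+"},
    "breaker_tripped":  {"voltage": "-", "current": "-", "temperature": "0"},
    "sensor_bias":      {"voltage": "+", "current": "0", "temperature": "0"},
}

def isolate(observed):
    """Return faults whose predicted deviations agree with every observed
    deviation so far (measurements not yet observed do not constrain)."""
    return [f for f, sig in FAULT_SIGNATURES.items()
            if all(sig.get(m) == dev for m, dev in observed.items())]

print(isolate({"voltage": "-"}))                   # two candidates remain
print(isolate({"voltage": "-", "current": "0"}))   # further isolated
```

As more deviation events arrive, the candidate set shrinks; residual ambiguity is what the paper's model-based fault identification step then resolves by estimating fault parameters.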
Technical know-how relevant to planning of borehole investigation for fault characterization
NASA Astrophysics Data System (ADS)
Mizuno, T.; Takeuchi, R.; Tsuruta, T.; Matsuoka, T.; Kunimaru, T.; Saegusa, H.
2011-12-01
As part of the national R&D program for geological disposal of high-level radioactive waste (HLW) and the broad scientific study of the deep geological environment, JAEA has established the Mizunami Underground Research Laboratory (MIU) in central Japan as a generic underground research laboratory (URL) facility. The MIU Project focuses on crystalline rocks. In fractured rock, a fault is one of the major discontinuity structures controlling groundwater flow conditions. It is important to estimate the geological, hydrogeological, hydrochemical, and rock mechanical characteristics of faults, and then to evaluate their role in the engineering design of the repository and the assessment of the long-term safety of HLW disposal. Therefore, investigations for fault characterization have been performed in the MIU Project to estimate these characteristics and to evaluate existing conceptual and/or numerical models of the geological environment. Investigations related to faults have been conducted based on the conventional concept that a fault consists of a "fault core (FC)", characterized by the distribution of faulted rocks, and a "fractured zone (FZ)" along the FC. With the progress of investigations, furthermore, it has become clear that an "altered zone (AZ)", characterized by alteration of host rocks to clay minerals, can also develop around the FC. The intensity of alteration in the AZ generally decreases with distance from the FC, and the AZ transitions to the FZ. Therefore, an investigation program focusing on the properties of the AZ is required for revising the existing conceptual and/or numerical models of the geological environment. In this study, procedures for planning fault characterizations have been summarized based on the technical know-how learnt through the MIU Project for the development of the Knowledge Management System performed by JAEA under a contract with the Ministry of Economy, Trade and Industry as part of its R&D supporting program for developing geological disposal technology in 2010. Taking into account the experience from fault characterization in the MIU Project, an optimized procedure for an investigation program is summarized as follows: 1) definition of the investigation aim, 2) confirmation of the current understanding of the geological environment, 3) specification and prioritization of the data to be obtained, 4) selection of the methodology for obtaining the data, 5) specification of the sequence of the investigations, and 6) establishment of a drilling and casing program including optional cases and taking into account potential problems. Several conceptual geological models reflecting the uncertainty in geological structures were drawn up to define the investigation aim and to confirm the current uncertainties. These models were also used to establish optional cases by predicting the type and location of potential problems. The procedures and case study related to the establishment of the investigation program are summarized in this study and can be used for site characterization work conducted by the implementing body (NUMO) in future candidate areas.
The mechanics of fault-bend folding and tear-fault systems in the Niger Delta
NASA Astrophysics Data System (ADS)
Benesh, Nathan Philip
This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new map-based structural restoration techniques, we find that the tear faults have distinct displacement patterns that distinguish them from conventional strike-slip faults and reflect their roles in accommodating displacement gradients within the fold-and-thrust belt.
AGSM Functional Fault Models for Fault Isolation Project
NASA Technical Reports Server (NTRS)
Harp, Janicce Leshay
2014-01-01
This project implements functional fault models (FFMs) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.
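One way to picture fault isolation over a functional fault model is as reachability on a directed failure-propagation graph: the ambiguity group for a symptom is the set of components upstream of it. The sketch below uses an invented topology and invented names, and is not the project's implementation.

```python
# Failure-propagation edges: a component's failure can produce symptoms
# at its downstream nodes. Topology and names are hypothetical.
FFM_EDGES = {
    "pump":      ["pressure_sensor", "flow_sensor"],
    "valve":     ["flow_sensor"],
    "power_bus": ["pump", "valve"],
}

def upstream(symptom_node):
    """All components whose failure could explain a symptom at symptom_node."""
    parents = {n for n, kids in FFM_EDGES.items() if symptom_node in kids}
    result = set(parents)
    for p in parents:
        result |= upstream(p)
    return result

print(sorted(upstream("flow_sensor")))   # ambiguity group for one symptom
```

Sensor-placement recommendations fall out of the same structure: a candidate sensor is valuable if observing it splits large ambiguity groups into smaller ones.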
Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.
2009-01-01
We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized using a Brownian passage time recurrence model. Using aperiodicity parameters α of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model) to model fictitious faults, to account for earthquakes that cannot be correlated with known geologic structural segmentation.
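A hedged sketch of the Brownian passage time recurrence calculation: the BPT distribution is the inverse Gaussian, so the conditional rupture probability can be computed with scipy.stats.invgauss, parameterized so that the mean equals the mean recurrence time and the coefficient of variation equals the aperiodicity α. The fault parameters below are invented, not values from the study.

```python
from scipy.stats import invgauss

def bpt_conditional_prob(mu, alpha, t_elapsed, dt):
    """P(event in [t, t+dt] | no event by t) for a Brownian passage time
    (inverse Gaussian) renewal model with mean recurrence mu and
    aperiodicity alpha. In scipy's parameterization,
    invgauss(alpha**2, scale=mu/alpha**2) has mean mu and CV alpha."""
    dist = invgauss(alpha**2, scale=mu / alpha**2)
    return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

# Illustrative: 30-yr probabilities for a fault with a 500-yr mean recurrence,
# 350 yr after its last rupture, for the three aperiodicities in the study.
for a in (0.3, 0.5, 0.7):
    print(a, round(bpt_conditional_prob(500.0, a, 350.0, 30.0), 4))
```

Running this shows the sensitivity the abstract refers to: smaller α makes the hazard more strongly time-dependent, so the conditional probability changes more sharply with elapsed time.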
An effective approach for road asset management through the FDTD simulation of the GPR signal
NASA Astrophysics Data System (ADS)
Benedetto, Andrea; Pajewski, Lara; Adabi, Saba; Kusayanagi, Wolfgang; Tosti, Fabio
2015-04-01
Ground-penetrating radar is a non-destructive tool widely used in many fields of application, including pavement engineering surveys. Over the last decade, the need has grown for further breakthroughs capable of assisting end-users and practitioners, as decision-support systems, in more effective road asset management. Despite the high potential of this non-destructive tool and the consolidated results obtained over the years, pavement distress manuals are still based on visual inspections, so that generally only the effects, and not the causes, of faults are taken into account. In this framework, the use of simulation can represent an effective solution for supporting engineers and decision-makers in understanding the deep responses of both revealed and unrevealed damage. In this study, the potential of finite-difference time-domain simulation of the ground-penetrating radar signal is analyzed by simulating several types of flexible pavement at different center frequencies of investigation typically used for road surveys. For these purposes, the numerical simulator GprMax2D, implementing the finite-difference time-domain method, was used, proving to be a highly effective tool for detecting road faults. Comparisons with simplified undisturbed modelled pavement sections show promising agreement with theoretical expectations, and good prospects for detecting the shape of damage are demonstrated. Therefore, electromagnetic modelling has proved to represent a valuable support system in diagnosing the causes of damage, even for early or unrevealed faults. Further perspectives of this research will focus on the modelling of more complex scenarios capable of representing more accurately the real boundary conditions of road cross-sections. Acknowledgements - This work has benefited from networking activities carried out within the EU-funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar".
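The finite-difference time-domain scheme underlying GprMax can be conveyed with a minimal 1-D sketch (GprMax itself is 2-D/3-D and far more complete): interleaved E and H updates on a staggered grid, with a dielectric "pavement layer" producing a reflection. All grid parameters below are invented for illustration.

```python
import numpy as np

# Minimal 1-D FDTD update (vacuum above a dielectric layer) to illustrate
# the method class GprMax2D implements; grid and source are invented.
c0, nz, nt = 3e8, 400, 800
dz = 0.01                                 # 1 cm cells
dt = 0.5 * dz / c0                        # Courant-stable time step
eps_r = np.ones(nz)
eps_r[200:] = 6.0                         # "pavement layer" below cell 200
Ez = np.zeros(nz)
Hy = np.zeros(nz - 1)

mu0, eps0 = 4e-7 * np.pi, 8.854e-12
for n in range(nt):
    Hy += dt / (mu0 * dz) * (Ez[1:] - Ez[:-1])            # update H from curl E
    Ez[1:-1] += dt / (eps0 * eps_r[1:-1] * dz) * (Hy[1:] - Hy[:-1])  # update E
    Ez[50] += np.exp(-((n - 60) / 20.0) ** 2)             # soft Gaussian source

print("peak field near source after reflection:", np.abs(Ez[40:60]).max())
```

The permittivity contrast at cell 200 plays the role of a pavement interface: the reflected pulse returning to the antenna location is the simulated GPR response that, in the study, is compared against damaged and undisturbed sections.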
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2001-12-01
We present preliminary results from a 3-dimensional fault interaction model, with the fault system specified by the geometry and tectonics of the San Andreas Fault (SAF) system. We use the forward model for earthquake generation on interacting faults of Fitzenz and Miller [2001], which incorporates the analytical solutions of Okada [85, 92], GPS-constrained tectonic loading, creep compaction and frictional dilatancy [Sleep and Blanpied, 1994; Sleep, 1995], and undrained poro-elasticity. The model fault system is centered at the Big Bend and includes three large strike-slip faults (each discretized into multiple subfaults): 1) a 300-km, right-lateral segment of the SAF to the north, 2) a 200-km-long, left-lateral segment of the Garlock fault to the east, and 3) a 100-km-long, right-lateral segment of the SAF to the south. In the initial configuration, three shallow-dipping faults are also included that correspond to the thrust belt sub-parallel to the SAF. Tectonic loading is decomposed into basal shear drag parallel to the plate boundary with a 35 mm/yr plate velocity, and east-west compression approximated by a vertical dislocation surface applied at the far-field boundary, resulting in fault-normal compression rates in the model space of about 4 mm/yr. Our aim is to study the long-term seismicity characteristics, tectonic evolution, and fault interaction of this system. We find that faults overpressured through creep compaction are a necessary consequence of the tectonic loading, specifically where high normal stress acts on long, straight fault segments. The optimal orientation of thrust faults is a function of the strike-slip behavior, and therefore results in a complex stress state in the elastic body. This stress state is then used to generate new fault surfaces, and preliminary results of dynamically generated faults will also be presented. Our long-term aim is to target measurable properties in or around fault zones (e.g. pore pressures, hydrofractures, seismicity catalogs, stress orientation, surface strain, triggering, etc.), which may allow inferences on the stress state of fault systems.
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper describes the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required by the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements and functions, and to their linkage to design. As SysML is a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed method of solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yu; Guo, Jianqiu; Goue, Ouloide
Recently, we reported on the formation of overlapping rhombus-shaped stacking faults from scratches left over from chemical mechanical polishing during high-temperature annealing of a PVT-grown 4H–SiC wafer. These stacking faults are restricted to highly N-doped regions of the wafer. The type of these stacking faults was determined to be Shockley stacking faults by analyzing the behavior of their area contrast using synchrotron white beam X-ray topography studies. A model was proposed to explain the formation mechanism of the rhombus-shaped stacking faults based on double Shockley fault nucleation and propagation. In this paper, we have experimentally verified this model by characterizing the configuration of the bounding partials of the stacking faults on both surfaces using synchrotron topography in back-reflection geometry. As predicted by the model, on both the Si and C faces, the leading partials bounding the rhombus-shaped stacking faults are 30° Si-core and the trailing partials are 30° C-core. Finally, using high-resolution transmission electron microscopy, we have verified that the enclosed stacking fault is a double Shockley type.
NASA Astrophysics Data System (ADS)
Koyi, Hemin; Nilfouroushan, Faramarz; Hessami, Khaled
2015-04-01
A series of scaled analogue models were run to study the degree of coupling between basement block kinematics and cover deformation. In these models, rigid basal blocks were rotated about a vertical axis in a "bookshelf" fashion, which caused strike-slip faulting along the blocks and, to some degree, in the overlying cover units of loose sand. Three different combinations of cover-basement deformation are modeled: cover shortening prior to basement fault movement; basement fault movement prior to shortening of cover units; and cover shortening simultaneous with basement fault movement. Model results show that the effect of basement strike-slip faults depends on the timing of their reactivation during the orogenic process. Pre- and syn-orogenic basement strike-slip faults have a significant impact on the structural pattern of the cover units, whereas post-orogenic basement strike-slip faults have less influence on the thickened hinterland of the overlying fold-and-thrust belt. The interaction of basement faulting and cover shortening results in the formation of rhomb features. In models with pre- and syn-orogenic basement strike-slip faults, rhomb-shaped cover blocks develop as a result of shortening of the overlying cover during basement strike-slip faulting. These rhombic blocks, which resemble flower structures, differ in kinematics, genesis, and structural extent. They are bounded by strike-slip faults on two opposite sides and thrusts on the other two sides. Such rhomb features are recognized in the Alborz and Zagros fold-and-thrust belts, where cover units are shortened simultaneously with strike-slip faulting in the basement. Model results are also compared with geodetic results obtained from a combination of all available GPS velocities in the Zagros and Alborz FTBs. Geodetic results indicate domains of clockwise and anticlockwise rotation in these two FTBs. The typical pattern of structures and their spatial distributions are used to suggest clockwise rotation of basement blocks about vertical axes and associated strike-slip faulting in both the west-central Alborz and the southeastern part of the Zagros fold-and-thrust belt.
Augmentation of the space station module power management and distribution breadboard
NASA Technical Reports Server (NTRS)
Walls, Bryan; Hall, David K.; Lollar, Louis F.
1991-01-01
The space station module power management and distribution (SSM/PMAD) breadboard models power distribution and management, including scheduling, load prioritization, and a fault detection, identification, and recovery (FDIR) system within a Space Station Freedom habitation or laboratory module. This 120 VDC system is capable of distributing up to 30 kW of power among more than 25 loads. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level consists of fast, simple (from a computing standpoint) switchgear that is capable of quickly safing the system. At the next level are local load center processors (LLPs), which execute load scheduling, perform redundant switching, and shed loads that use more than scheduled power. Above the LLPs are three cooperating artificial intelligence (AI) systems which manage load prioritization, load scheduling, load shedding, and fault recovery and management. Recent upgrades to hardware and modifications to software at both the LLP and AI system levels promise a drastic increase in speed, a significant increase in functionality and reliability, and potential for further examination of advanced automation techniques. The background, the SSM/PMAD interface to the Lewis Research Center test bed (the large autonomous spacecraft electrical power system), and future plans are discussed.
NASA Astrophysics Data System (ADS)
Adib, Ahmad; Afzal, Peyman; Mirzaei Ilani, Shapour; Aliyari, Farhang
2017-10-01
The aim of this study is to determine the relationship between zinc mineralization and a major fault in the Behabad area, central Iran, using the Concentration-Distance to Major Fault (C-DMF), Area of Mineralized Zone-Distance to Major Fault (AMZ-DMF), and Concentration-Area (C-A) fractal models to classify Zn deposits/mines according to their distance from the Behabad fault. Application of the C-DMF and AMZ-DMF models to Zn mineralization in the Behabad fault zone reveals that the main Zn deposits correlate well with the major fault in the area. Known zinc deposits/mines with Zn values higher than 29% and mineralized zone areas of more than 900 m2 lie within 1 km of the major fault, which indicates a positive correlation between Zn mineralization and the structural zone. As a result, the AMZ-DMF and C-DMF fractal models can be utilized for the delineation and recognition of different mineralized zones in different types of magmatic and hydrothermal deposits.
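A hedged sketch of the Concentration-Area (C-A) fractal computation mentioned above: for each grade threshold, accumulate the area of cells at or above it; straight-line segments with distinct slopes in log-log space then separate background from mineralized populations. The grid values below are synthetic, not Behabad data.

```python
import numpy as np

def concentration_area(values, cell_area=1.0, n_thresholds=200):
    """Concentration-Area (C-A) fractal data: for each grade threshold c,
    the total area occupied by cells with value >= c. Distinct straight-line
    slopes in log-log space separate geochemical populations."""
    thresholds = np.quantile(values, np.linspace(0.0, 0.999, n_thresholds))
    areas = np.array([(values >= c).sum() * cell_area for c in thresholds])
    return np.log10(thresholds), np.log10(areas)

rng = np.random.default_rng(1)
grid = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)  # background grades
grid[:20] *= 30.0                                       # synthetic anomalies
log_c, log_a = concentration_area(grid)
print(log_c[-3:], log_a[-3:])
```

The C-DMF and AMZ-DMF variants follow the same recipe with distance-to-fault replacing area as the second variable, which is how the study ties grade populations to the Behabad fault.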
NASA Technical Reports Server (NTRS)
Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven
2010-01-01
Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.
NASA Astrophysics Data System (ADS)
Yang, H.; Moresi, L. N.
2017-12-01
The San Andreas fault forms a dominant component of the transform boundary between the Pacific and the North American plates. The density and strength of the complex accretionary margin are very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we utilize a 3D finite element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of the big bend in the existing geometry; in particular, the big bend of the fault is an initial condition in our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model indicates that an extremely weak fault (friction coefficient < 0.1) is required. To first order, there is a significant density difference between the Great Valley and the adjacent Mojave block. The Great Valley block is much colder and denser (by >200 kg/m3) than the surrounding blocks. In contrast, other geophysical surveys indicate that the Mojave block has lost its mafic lower crust. Our model shows strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. High-density lower-crust material of the Great Valley tends to under-thrust beneath the Transverse Ranges near the big bend. This motion is likely to rotate the fault plane from the initial vertical direction to dip to the southwest. Along the straight section, north of the big bend, the fault is nearly vertical. The geometry of the fault plane is consistent with field observations.
NASA Technical Reports Server (NTRS)
Bird, P.; Baumgardner, J.
1984-01-01
To determine the correct fault rheology of the Transverse Ranges area of California, a new finite element to represent faults and a mantle drag element are introduced into a set of 63 simulation models of anelastic crustal strain. It is shown that a slip-rate-weakening rheology for faults is not valid in California. Assuming that mantle drag effects on the crust's base are minimal, the optimal coefficient of friction in the seismogenic portion of the fault zones is 0.4-0.6 (less than Byerlee's law, assumed to apply elsewhere). Depending on how the southern California upper mantle seismic velocity anomaly is interpreted, model results are improved or degraded. It is found that the location of the mantle plate boundary is the most important secondary parameter, and that the best model is either a low-stress model (fault friction = 0.3) or a high-stress model (fault friction = 0.85), each of which has strong mantle drag. It is concluded that at least the fastest-moving faults in southern California have a low friction coefficient (approximately 0.3) because they contain low-strength hydrated clay gouges throughout the low-temperature seismogenic zone.
Analysis of a hardware and software fault tolerant processor for critical applications
NASA Technical Reports Server (NTRS)
Dugan, Joanne B.
1993-01-01
Computer systems for critical applications must be designed to tolerate software faults as well as hardware faults. A unified approach to tolerating hardware and software faults is characterized by classifying faults in terms of duration (transient or permanent) rather than source (hardware or software). Errors arising from transient faults can be handled through masking or voting, but errors arising from permanent faults require system reconfiguration to bypass the failed component. Most errors which are caused by software faults can be considered transient, in that they are input-dependent. Software faults are triggered by a particular set of inputs. Quantitative dependability analysis of systems which exhibit a unified approach to fault tolerance can be performed by a hierarchical combination of fault tree and Markov models. A methodology for analyzing hardware and software fault tolerant systems is applied to the analysis of a hypothetical system, loosely based on the Fault Tolerant Parallel Processor. The models consider both transient and permanent faults, hardware and software faults, independent and related software faults, automatic recovery, and reconfiguration.
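The hierarchical combination described above pairs fault trees (for combinatorial failure logic) with Markov models (for sequence-dependent recovery and reconfiguration). As a hedged illustration of the Markov half only, the sketch below propagates state probabilities for a generic duplex processor with imperfect fault coverage; it is not the paper's FTPP-derived model, and all rates are invented.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = both units up, 1 = one up after reconfiguration, 2 = failed.
# lam = permanent fault rate per unit (per hour), c = recovery coverage.
lam, c = 1e-4, 0.95   # illustrative values only
Q = np.array([
    [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],   # uncovered fault -> failure
    [0.0,      -lam,          lam],                  # second fault -> failure
    [0.0,       0.0,          0.0],                  # failure is absorbing
])

p0 = np.array([1.0, 0.0, 0.0])                       # start with both units up
for t in (10.0, 100.0, 1000.0):
    p = p0 @ expm(Q * t)                             # transient CTMC solution
    print(f"t = {t:6.0f} h  unreliability = {p[2]:.3e}")
```

The coverage parameter c is where the unified transient/permanent view matters: an uncovered fault bypasses reconfiguration entirely, so system unreliability is dominated by (1 - c) rather than by the raw fault rate.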
Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis.
Castro, Alfonso; Sedano, Andrés A; García, Fco Javier; Villoslada, Eduardo; Villagrá, Víctor A
2017-12-28
Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking the fault diagnosis system and Business Processes for Telefónica's global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other different service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam.
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
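As a hedged sketch of the multiple-hypothesis idea (not the MBFTC implementation): given a residual vector, which in the paper would come from an Extended Kalman Filter, select the fault hypothesis whose predicted residual shift maximizes a Gaussian likelihood. The hypothesis set, shift vectors, and noise level below are all invented.

```python
import numpy as np

# Each hypothesis predicts a mean shift in the sensor residual vector
# under that fault; "no_fault" predicts zero-mean residuals.
HYPOTHESES = {
    "no_fault":       np.array([0.0, 0.0, 0.0]),
    "sensor_bias":    np.array([2.0, 0.0, 0.0]),
    "actuator_fault": np.array([0.5, -1.5, 0.0]),
}

def most_likely(residual, sigma=0.5):
    """Pick the hypothesis maximizing the Gaussian log-likelihood of the
    observed residual (equivalently, minimizing squared distance to the
    predicted shift, assuming equal-variance independent noise)."""
    scores = {h: -np.sum((residual - shift) ** 2) / (2 * sigma**2)
              for h, shift in HYPOTHESES.items()}
    return max(scores, key=scores.get)

print(most_likely(np.array([0.1, -0.2, 0.05])))   # -> no_fault
print(most_likely(np.array([1.8, 0.1, -0.1])))    # -> sensor_bias
```

In practice the per-hypothesis scores would be accumulated over time and fused with a second detector's output, mirroring the paper's fusion of hypothesis testing with a neural-network classifier.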
Modelling earthquake ruptures with dynamic off-fault damage
NASA Astrophysics Data System (ADS)
Okubo, Kurama; Bhat, Harsha S.; Klinger, Yann; Rougier, Esteban
2017-04-01
Earthquake rupture modelling has been developed for producing scenario earthquakes. This includes understanding the source mechanisms and estimating far-field ground motion given a priori constraints such as the fault geometry, the constitutive law of the medium, and the friction law operating on the fault. It is necessary to consider all of the above complexities of fault systems to conduct realistic earthquake rupture modelling. In addition to the complexity of fault geometry in nature, coseismic off-fault damage, which is observed by a variety of geological and seismological methods, plays a considerable role in the resultant ground motion and its spectrum compared to a model with a simple planar fault surrounded by purely elastic media. Ideally all of these complexities should be considered in earthquake modelling. State-of-the-art techniques developed so far, however, cannot treat all of them simultaneously due to a variety of computational restrictions. Therefore, we adopt the combined finite-discrete element method (FDEM), which can effectively deal with pre-existing complex fault geometry, such as fault branches and kinks, and can describe coseismic off-fault damage generated during the dynamic rupture. The advantage of FDEM is that it can handle a wide range of length scales, from metric to kilometric, corresponding to the off-fault damage and the complex fault geometry respectively. We used the FDEM-based software tool called HOSSedu (Hybrid Optimization Software Suite - Educational Version) for the earthquake rupture modelling, which was developed by Los Alamos National Laboratory. We first conducted a cross-validation of this new methodology against conventional numerical schemes, such as the finite difference method (FDM), the spectral element method (SEM), and the boundary integral equation method (BIEM), to evaluate its accuracy with various element sizes and artificial viscous damping values. We demonstrate the capability of the FDEM tool for modelling earthquake ruptures. We then modelled earthquake ruptures allowing for coseismic off-fault damage, with appropriate fracture nucleation and growth criteria. We studied the effect of different conditions such as rupture speed (sub-Rayleigh or supershear), the orientation of the initial maximum principal stress with respect to the fault, and the magnitude of the initial stress (to mimic depth). The comparison between the sub-Rayleigh and supershear cases shows that coseismic off-fault damage is enhanced in the supershear case. The orientation of the maximum principal stress also makes a significant difference: dynamic off-fault cracking is more likely to occur on the extensional side of the fault for high principal stress orientations. It is found that coseismic off-fault damage reduces the rupture speed due to the dissipation of energy by dynamic off-fault cracking generated in the vicinity of the rupture front. In terms of the ground-motion amplitude spectra, it is shown that high-frequency radiation is enhanced by the coseismic off-fault damage, though it is quickly attenuated. This is caused by the intricate superposition of the radiation generated by the off-fault damage and the perturbation of the rupture speed on the main fault.
NASA Astrophysics Data System (ADS)
Zuza, Andrew V.; Yin, An
2016-05-01
Collision-induced continental deformation commonly involves complex interactions between strike-slip faulting and off-fault deformation, yet this relationship has rarely been quantified. In northern Tibet, Cenozoic deformation is expressed by the development of the > 1000-km-long east-striking left-slip Kunlun, Qinling, and Haiyuan faults. Each has a maximum slip in its central segment exceeding tens to ~100 km but a much smaller slip magnitude (< ~10% of the maximum slip) at its terminations. The along-strike variation of fault offsets and pervasive off-fault deformation create a strain pattern that departs from the expectations of the classic plate-like rigid-body motion and flow-like distributed deformation end-member models for continental tectonics. Here we propose a non-rigid bookshelf-fault model for the Cenozoic tectonic development of northern Tibet. Our model, quantitatively relating discrete left-slip faulting to distributed off-fault deformation during regional clockwise rotation, explains several puzzling features, including: (1) the clockwise rotation of east-striking left-slip faults against the northeast-striking left-slip Altyn Tagh fault along the northwestern margin of the Tibetan Plateau, (2) alternating fault-parallel extension and shortening in the off-fault regions, and (3) eastward-tapering map-view geometries of the Qimen Tagh, Qaidam, and Qilian Shan thrust belts that link with the three major left-slip faults in northern Tibet. We refer to this specific non-rigid bookshelf-fault system as a passive bookshelf-fault system because the rotating bookshelf panels are detached from the rigid bounding domains. As a consequence, the wallrock of the strike-slip faults deforms to accommodate both the clockwise rotation of the left-slip faults and the off-fault strain that arises at the fault ends. An important implication of our model is that the style and magnitude of Cenozoic deformation in northern Tibet vary considerably in the east-west direction. Thus, any single north-south cross section and its kinematic reconstruction through the region do not properly quantify the complex deformational processes of plateau formation.
A multiple fault rupture model of the November 13 2016, M 7.8 Kaikoura earthquake, New Zealand
NASA Astrophysics Data System (ADS)
Benites, R. A.; Francois-Holden, C.; Langridge, R. M.; Kaneko, Y.; Fry, B.; Kaiser, A. E.; Caldwell, T. G.
2017-12-01
The rupture history of the November 13 2016 MW 7.8 Kaikoura earthquake, recorded by near- and intermediate-field strong-motion seismometers and two high-rate GPS stations, reveals a complex cascade of multiple crustal fault ruptures. In spite of such complexity, we show that the rupture history of each fault is well approximated by a simple kinematic model with uniform slip and rupture velocity. Using 9 faults embedded in a crustal layer 19 km thick, each with a prescribed slip vector and rupture velocity, this model accurately reproduces the displacement waveforms recorded at the near-field strong-motion and GPS stations. The model includes the 'Papatea Fault', with a mixed thrust and strike-slip mechanism based on in-situ geological observations, with up to 8 m of uplift observed. Although the kinematic model fits the ground motion at the nearest strong-motion station, it does not reproduce the one-sided nature of the static deformation field observed geodetically. This suggests that a dislocation-based approach does not completely capture the mechanical response of the Papatea Fault. The fault system as a whole extends for approximately 150 km along the eastern side of the Marlborough fault system in the South Island of New Zealand. The total duration of the rupture was 74 seconds. The timing and location of each fault's rupture suggest fault interaction and triggering, resulting in a northward cascade of crustal ruptures. Our model does not require rupture of the underlying subduction interface to explain the data.
Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics
NASA Astrophysics Data System (ADS)
Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei
2014-11-01
In this study, we present a new method for simulating the 3-D dynamic rupture process on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrarily irregular surface topography. While keeping the advantages of conventional FDM, namely computational efficiency and easy implementation, the CG-FDM is also flexible in modelling complex fault geometry by using general curvilinear grids, and is thus able to model the rupture dynamics of faults with complex geometry, such as obliquely dipping faults, non-planar faults, step-overs, and fault branches, even in the presence of irregular topography. The accuracy and robustness of the new method have been validated by comparison with the previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, namely a non-planar fault and a fault rupturing a free surface with topography, are presented. Interestingly, we observed that topography can weaken the tendency for a supershear transition to occur when the rupture breaks out at the free surface. This new method provides an effective, or at least an alternative, tool for simulating the rupture dynamics of complex non-planar faults, and can be applied to model the rupture dynamics of real earthquakes with complex geometry.
The numerical modelling and process simulation for the fault diagnosis of rotary kiln incinerator.
Roh, S D; Kim, S W; Cho, W S
2001-10-01
The numerical modelling and process simulation for the fault diagnosis of a rotary kiln incinerator were accomplished. In the numerical modelling, two models are applied within the kiln: a combustion chamber model, comprising the mass and energy balance equations for the two combustion chambers, and a 3D thermal model. The combustion chamber model predicts the temperature within the kiln, the flue gas composition and flux, and the heat of combustion. Using the combustion chamber model and the 3D thermal model, production rules for the process simulation can be obtained through an interrelation analysis between control and operation variables. The process simulation of the kiln is driven by these production rules for automatic operation. The process simulation aims to provide fundamental solutions to problems in the incineration process by introducing an online expert control system that provides integrity in process control and management. Knowledge-based expert control systems use symbolic logic and heuristic rules to find solutions for various types of problems. The system was implemented as a hybrid intelligent expert control system by mutual connection with the process control systems, and has the capability of process diagnosis, analysis and control.
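To make the idea of production rules concrete, here is a minimal sketch of a rule-based diagnosis step; the variable names, thresholds, and recommended actions are hypothetical and are not taken from the paper.

```python
# Hypothetical production rules mapping kiln observations to actions.
# Thresholds and variable names are illustrative only.
RULES = [
    (lambda s: s["kiln_temp_C"] > 1100, "Reduce fuel feed rate"),
    (lambda s: s["kiln_temp_C"] < 850,  "Increase fuel feed rate"),
    (lambda s: s["O2_pct"] < 6.0,       "Increase combustion air flow"),
    (lambda s: s["CO_ppm"] > 100.0,     "Check for incomplete combustion"),
]

def diagnose(state):
    """Fire every rule whose condition matches the current state."""
    return [action for cond, action in RULES if cond(state)] or ["Normal operation"]

print(diagnose({"kiln_temp_C": 1150, "O2_pct": 5.2, "CO_ppm": 40.0}))
```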
NASA Astrophysics Data System (ADS)
Ritz, E.; Pollard, D. D.
2011-12-01
Geological and geophysical investigations demonstrate that faults are geometrically complex structures, and that the nature and intensity of off-fault damage is spatially correlated with geometric irregularities of the slip surfaces. Geologic observations of exhumed meter-scale strike-slip faults in the Bear Creek drainage, central Sierra Nevada, CA, provide insight into the relationship between non-planar fault geometry and frictional slip at depth. We investigate natural fault geometries in an otherwise homogeneous and isotropic elastic material with a two-dimensional displacement discontinuity method (DDM). Although the DDM is a powerful tool, frictional contact problems are beyond the scope of the elementary implementation because it allows interpenetration of the crack surfaces. By incorporating a complementarity algorithm, we are able to enforce appropriate contact boundary conditions along the model faults and include variable friction and frictional strength. This tool allows us to model quasi-static slip on non-planar faults and the resulting deformation of the surrounding rock. Both field observations and numerical investigations indicate that sliding along geometrically discontinuous or irregular faults may lead to opening of the fault and the formation of new fractures, affecting permeability in the nearby rock mass and consequently impacting pore fluid pressure. Numerical simulations of natural fault geometries provide local stress fields that are correlated to the style and spatial distribution of off-fault damage. We also show how varying the friction and frictional strength along the model faults affects slip surface behavior and consequently influences the stress distributions in the adjacent material.
Advanced Ground Systems Maintenance Functional Fault Models For Fault Isolation Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M. (Compiler)
2014-01-01
This project implements functional fault models (FFM) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.
NASA Astrophysics Data System (ADS)
Pinzuti, P.; Mignan, A.; King, G. C.
2009-12-01
Mechanical stretching models have been previously proposed to explain the process of continental break-up through the example of the Asal Rift, Djibouti, one of the few places where the early stages of seafloor spreading can be observed. In these models, deformation is distributed starting at the base of a shallow seismogenic zone, in which sub-vertical normal faults are responsible for subsidence whereas cracks accommodate extension. Alternative models suggest that extension results from localized magma injection, with normal faults accommodating extension and subsidence above the maximum reach of the magma column. In these magmatic intrusion models, normal faults have dips of 45-55° and root into dikes. Using mechanical and kinematic concepts and vertical profiles of normal fault scarps from an Asal Rift campaign, where normal faults are sub-vertical at the surface, we discuss the creation and evolution of normal faults in massive fractured rocks (basalt). We suggest that the observed fault scarps correspond to sub-vertical en echelon structures and that, at greater depth, these scarps coalesce into dipping normal faults. Finally, the geometry of faulting between the Fieale volcano and Lake Asal in the Asal Rift can be simply related to the depth of diking, which in turn can be related to magma supply. This new view supports the magmatic intrusion model of the early stages of continental break-up.
Solar Photovoltaic (PV) Distributed Generation Systems - Control and Protection
NASA Astrophysics Data System (ADS)
Yi, Zhehan
This dissertation proposes a comprehensive control, power management, and fault detection strategy for solar photovoltaic (PV) distributed generation. Battery storage is typically employed in PV systems to mitigate the power fluctuation caused by unstable solar irradiance. With AC and DC loads, a PV-battery system can be treated as a hybrid microgrid which contains both DC and AC power resources and buses. In this thesis, a control and power management system (CAPMS) for PV-battery hybrid microgrids is proposed, which provides: 1) a DC and AC bus voltage and AC frequency regulating scheme, with controllers designed to track set points; 2) a power flow management strategy in the hybrid microgrid to balance system generation and demand in both grid-connected and islanded modes; and 3) smooth transition control during grid reconnection through frequency and phase synchronization between the main grid and the microgrid. Due to the increasing demand for PV power, PV systems are growing in scale and fault detection in PV arrays is becoming challenging. High-impedance faults, low-mismatch faults, and faults occurring in low-irradiance conditions tend to be hidden by their low fault currents, particularly when a PV maximum power point tracking (MPPT) algorithm is in service. If they remain undetected, these faults can considerably lower the output energy of solar systems, damage the panels, and potentially cause fire hazards. In this dissertation, the challenges of fault detection in PV arrays are analyzed in depth, considering the interactions among the characteristics of PV, the MPPT algorithm, and the nature of solar irradiance. Two fault detection schemes are then designed to address these technical issues, which detect faults inside PV arrays accurately even under challenging circumstances, e.g., faults in low-irradiance conditions or high-impedance faults. Taking advantage of multi-resolution signal decomposition (MSD), a powerful signal processing technique based on the discrete wavelet transform (DWT), the first scheme extracts the features of both line-to-line (L-L) and line-to-ground (L-G) faults and employs a fuzzy inference system (FIS) for the decision-making stage of fault detection. This scheme is then improved in the second scheme by further studying the system's behavior during L-L faults, extracting more efficient fault features, and devising a more advanced decision-making stage: a two-stage support vector machine (SVM). For the first time, the two-stage SVM method is proposed to detect L-L faults in PV systems with satisfactory accuracy. Numerous simulation and experimental case studies are carried out to verify the proposed control and protection strategies. The simulation environment is set up using the PSCAD/EMTDC and Matlab/Simulink software packages. Experimental case studies are conducted in a PV-battery hybrid microgrid using the dSPACE real-time controller to demonstrate the ease of hardware implementation and the controller performance. Another small-scale grid-connected PV system is set up to verify both fault detection algorithms, which demonstrate promising performance and fault-detection accuracy.
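A sketch of the general idea behind the first scheme, wavelet sub-band energies as fault features feeding a classifier, using PyWavelets and scikit-learn. The waveforms, labels, and single-stage SVC below are synthetic stand-ins, not the dissertation's data or its full two-stage SVM.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def msd_features(current, wavelet="db4", level=4):
    """Multi-resolution decomposition of a PV string current signal:
    the energy of each wavelet sub-band serves as a fault feature."""
    coeffs = pywt.wavedec(current, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Synthetic training data: 0 = normal, 1 = line-to-line fault
# (a depressed, noisier string current stands in for fault behaviour).
rng = np.random.default_rng(0)
normal = rng.normal(8.0, 0.05, size=(20, 256))
fault = rng.normal(5.0, 0.6, size=(20, 256))
X = np.array([msd_features(w) for w in np.vstack([normal, fault])])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)   # a single SVM stage, for brevity
print(clf.predict([msd_features(rng.normal(5.0, 0.6, 256))]))  # -> [1]
```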
NASA Astrophysics Data System (ADS)
Tsai, M. C.; Hu, J. C.; Yang, Y. H.; Hashimoto, M.; Aurelio, M.; Su, Z.; Escudero, J. A.
2017-12-01
Multi-sight, high-spatial-resolution interferometric SAR data enhance our ability to map detailed coseismic deformation, estimate fault rupture models, and infer the Coulomb stress change associated with a large earthquake. Here, we use multi-sight coseismic interferograms acquired by the ALOS-2 and Sentinel-1A satellites to estimate the fault geometry and the slip distribution on the fault plane of the 2017 Mw 6.5 Ormoc earthquake on Leyte Island in the Philippines. The best-fitting model predicts that the coseismic rupture occurred along a fault plane with a strike of 325.8° and a dip of 78.5°E. This model infers that the rupture of the 2017 Ormoc earthquake was dominated by left-lateral slip with minor dip-slip motion, consistent with the left-lateral strike-slip Philippine fault system. The fault tip propagated to the ground surface, and the predicted coseismic slip at the surface is about 1 m, located 6.5 km northeast of Kananga. Significant slip is concentrated on fault patches at depths of 0-8 km over an along-strike distance of 20 km, with slip magnitudes varying from 0.3 m to 2.3 m along the southwest segment of this seismogenic fault. Two minor coseismic fault patches are predicted beneath the Tongonan geothermal field and the creeping segment of the northwest portion of this seismogenic fault. This implies that the high geothermal gradient beneath the Tongonan geothermal field could have prevented the heated rock mass from coseismic failure. The seismic moment release of our preferred fault model is 7.78 × 10^18 N m, equivalent to an Mw 6.6 event. The Coulomb failure stress (CFS) calculated from the preferred fault model predicts a significant positive CFS change on the northwest segment of the Philippine fault in Leyte Island, which has a coseismic slip deficit and lacks aftershocks. Consequently, this segment should be considered at increased risk for future seismic hazard.
Distributed deformation and block rotation in 3D
NASA Technical Reports Server (NTRS)
Scotti, Oona; Nur, Amos; Estevez, Raul
1990-01-01
The authors address how block rotation and complex distributed deformation in the Earth's shallow crust may be explained within a stationary regional stress field. Distributed deformation is characterized by domains of sub-parallel fault-bounded blocks. In response to the contemporaneous activity of neighboring domains, some domains rotate, as suggested by both structural and paleomagnetic evidence. Rotations within domains are achieved through the contemporaneous slip and rotation of the faults and of the blocks they bound. Thus, in regions of distributed deformation, faults must remain active in spite of their poor orientation in the stress field. The authors developed a model that tracks the orientation of blocks and their bounding faults during rotation in a 3D stress field. In the model, the effective stress magnitudes of the principal stresses (σ1, σ2, and σ3) are controlled by the orientation of fault sets in each domain. Therefore, adjacent fault sets with differing orientations may be active and may display differing faulting styles, and a given set of faults may change its style of motion as it rotates within a stationary stress regime. The style of faulting predicted by the model depends on a dimensionless parameter φ = (σ2 − σ3)/(σ1 − σ3). Thus, the authors present a model for complex distributed deformation and complex offset history requiring neither geographical nor temporal changes in the stress regime. They apply the model to the Western Transverse Range domain of southern California. There, it is mechanically feasible for blocks and faults to have experienced up to 75 degrees of clockwise rotation in a φ = 0.1 strike-slip stress regime. The results of the model suggest that this domain may first have accommodated deformation along preexisting NNE-SSW faults, reactivated as normal faults. After rotation, these same faults became strike-slip in nature.
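A worked illustration of the stress shape ratio φ defined above; the stress values are invented to reproduce the φ = 0.1 regime mentioned for the Western Transverse Ranges.

```python
def stress_shape_ratio(s1, s2, s3):
    """phi = (sigma2 - sigma3) / (sigma1 - sigma3), with s1 >= s2 >= s3.
    Small phi means sigma2 is close to sigma3."""
    assert s1 >= s2 >= s3 and s1 > s3
    return (s2 - s3) / (s1 - s3)

# Effective principal stresses in MPa (illustrative values only)
print(stress_shape_ratio(100.0, 64.0, 60.0))  # -> 0.1
```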
NASA Astrophysics Data System (ADS)
Abe, Steffen; Krieger, Lars; Deckert, Hagen
2017-04-01
The changes of fluid pressure related to the injection of fluids into the deep underground, for example during geothermal energy production, can potentially reactivate faults and thus cause induced seismic events. An important aspect in the planning and operation of such projects, particularly in densely populated regions such as the Upper Rhine Graben in Germany, is therefore the estimation and mitigation of the induced seismic risk. The occurrence of induced seismicity depends on a combination of the hydraulic properties of the underground, the mechanical and geometric parameters of the fault, and the fluid injection regime. In this study we therefore employ a numerical model to investigate the impact of fluid pressure changes on the dynamics of faults and the resulting seismicity. The approach combines a model of the fluid flow around a geothermal well, based on a 3D finite difference discretisation of the Darcy equation, with a 2D block-slider model of a fault. The models are coupled so that the evolving pore pressure at the relevant locations of the hydraulic model is taken into account in the calculation of the stick-slip dynamics of the fault model. Our modelling approach uses two successive steps. Initially, the fault model is run at a fixed deformation rate for a given duration, without the influence of the hydraulic model, in order to generate the background event statistics. Initial tests have shown that the response of the fault to hydraulic loading depends on the timing of the fluid injection relative to the seismic cycle of the fault. Therefore, multiple snapshots of the fault's stress and displacement state are generated from the fault model. In a second step, these snapshots are used as initial conditions in a set of coupled hydro-mechanical model runs including the effects of the fluid injection. This set of models is then compared with the background event statistics to evaluate the change in the probability of seismic events. The event data, such as location, magnitude, and source characteristics, can be used as input for numerical wave propagation models. This allows the translation of the seismic event statistics generated by the model into ground-shaking probabilities.
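A minimal sketch of the one-way coupling described here, pore-pressure diffusion feeding a Coulomb failure check on a fault patch, reduced to one dimension for brevity; all parameter values are illustrative, not those of the study.

```python
import numpy as np

# 1-D explicit finite-difference pore-pressure diffusion away from an
# injection well, one-way coupled to a Coulomb failure check on a fault
# patch. All parameter values are illustrative.
L, nx = 2000.0, 201                  # domain length (m), grid points
dx = L / (nx - 1)
D = 0.5                              # hydraulic diffusivity (m^2/s)
dt = 0.4 * dx**2 / D                 # stable explicit time step (s)
p = np.zeros(nx)                     # pore-pressure perturbation (Pa)

tau, sigma_n, mu = 28e6, 50e6, 0.6   # patch shear/normal stress, friction
patch = nx // 4                      # fault patch ~500 m from the well

for step in range(100000):
    p[0] = 5e6                       # constant injection overpressure (Pa)
    p[1:-1] += D * dt / dx**2 * (p[2:] - 2 * p[1:-1] + p[:-2])
    if tau - mu * (sigma_n - p[patch]) > 0:   # Coulomb failure stress > 0
        print(f"patch reaches failure after {step * dt / 86400:.1f} days")
        break
else:
    print("no failure within the simulated period")
```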
Simultaneous Sensor and Process Fault Diagnostics for Propellant Feed System
NASA Technical Reports Server (NTRS)
Cao, J.; Kwan, C.; Figueroa, F.; Xu, R.
2006-01-01
The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that shares common characteristics with a NASA testbed at Stennis Space Center was used to verify the proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI was derived for the tank system. Third, a new and general FDI procedure was designed to distinguish process faults from sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.
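A toy residual-based FDI sketch for a single draining tank, showing how a process fault (a leak) and a sensor fault (a bias) leave different signatures in the model residual: a slow drift versus an abrupt step. The tank model and fault times are invented, not the paper's testbed.

```python
import numpy as np

# Tank: dh/dt = (q_in - k*sqrt(h)) / A, started at equilibrium.
A, k, q_in, dt = 1.0, 0.1, 0.05, 1.0
h_true, h_model = 0.25, 0.25
residuals = []
for t in range(600):
    leak = 0.02 if t > 300 else 0.0                      # process fault at t=300
    h_true += dt * (q_in - (k + leak) * np.sqrt(h_true)) / A
    h_model += dt * (q_in - k * np.sqrt(h_model)) / A    # fault-free model
    bias = 0.05 if t > 450 else 0.0                      # sensor fault at t=450
    residuals.append((h_true + bias) - h_model)          # measured - predicted

r = np.array(residuals)
print("residual before any fault:", np.abs(r[:300]).max())   # ~0
print("drift after leak:         ", r[440])                  # slow, negative
print("jump after sensor bias:   ", r[460] - r[449])         # abrupt, ~+0.05
```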
Diagnosing a Strong-Fault Model by Conflict and Consistency
Zhou, Gan; Feng, Wenquan
2018-01-01
The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, in which fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Current diagnosis methods usually employ conflicts to isolate possible faults; the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently offer the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness, and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where the proposed methods perform significantly better than best-first and conflict-directed A* search methods. PMID:29596302
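A brute-force stand-in for consistency-based diagnosis of a strong-fault model: each component carries explicit fault-mode behaviours, and mode assignments are kept only if they reproduce the observation. This toy enumeration illustrates the idea only; it is not the paper's LTMS/CNF machinery.

```python
from itertools import product

def inverter(mode, x):
    """Strong-fault model of an inverter: behaviour is defined for the
    normal mode AND for each fault mode."""
    return {"ok": 1 - x, "stuck0": 0, "stuck1": 1}[mode]

MODES = ["ok", "stuck0", "stuck1"]
obs_in, obs_out = 1, 0          # two inverters in series; 1 -> 0 observed,
                                # which is inconsistent with both being ok
candidates = [
    (m1, m2) for m1, m2 in product(MODES, MODES)
    if inverter(m2, inverter(m1, obs_in)) == obs_out
]

# Prefer diagnoses with the fewest faulted components
best = min(candidates, key=lambda c: sum(m != "ok" for m in c))
print(candidates)   # all mode assignments consistent with the observation
print(best)         # a minimal single-fault diagnosis, e.g. ('ok', 'stuck0')
```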
M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model
Parsons, Thomas E.
2006-01-01
Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
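A back-of-the-envelope version of the renewal logic, recurrence as the ratio of stress drop to tectonic stressing rate; the numbers are round illustrative values, not the paper's model outputs.

```python
# Stress-renewal recurrence: time for tectonic loading to recover an
# earthquake stress drop. Values are illustrative, not from the paper.
stress_drop = 3.0e6       # Pa, typical ~1-10 MPa for large earthquakes
stressing_rate = 2.0e4    # Pa/yr, geodetically derived loading rate
print(f"recurrence ~ {stress_drop / stressing_rate:.0f} years")  # -> 150
```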
A mechanical model of the San Andreas fault and SAFOD Pilot Hole stress measurements
Chery, J.; Zoback, M.D.; Hickman, S.
2004-01-01
Stress measurements made in the SAFOD pilot hole provide an opportunity to study the relation between crustal stress outside the fault zone and the stress state within it using an integrated mechanical model of a transform fault loaded in transpression. The results of this modeling indicate that only a fault model in which the effective friction is very low (<0.1) through the seismogenic thickness of the crust is capable of matching stress measurements made both in the far field and in the SAFOD pilot hole. The stress rotation measured with depth in the SAFOD pilot hole (~28°) appears to be a typical feature of a weak fault embedded in a strong crust and a weak upper mantle with laterally variable heat flow, although our best model predicts less rotation (15°) than observed. Stress magnitudes predicted by our model within the fault zone indicate low shear stress on planes parallel to the fault but a very anomalous mean stress, approximately twice the lithostatic stress.
Predeployment validation of fault-tolerant systems through software-implemented fault insertion
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1989-01-01
The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within them is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface eases use and decreases experiment development and run time. The fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are demonstrated by two example experiments, each using a different fault-tolerance strategy.
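A minimal sketch of software-implemented fault insertion in the FIAT spirit: seed a bit-flip into a memory image and see whether a simple detector catches it. The checksum detector and all names are illustrative, not FIAT's actual mechanisms.

```python
import random

def checksum(mem):
    """Trivial 8-bit additive checksum standing in for an error detector."""
    return sum(mem) & 0xFF

memory = bytearray(random.randbytes(1024))   # simulated memory image
golden = checksum(memory)                    # fault-free baseline

addr = random.randrange(len(memory))
bit = random.randrange(8)
memory[addr] ^= 1 << bit                     # inject a single bit-flip fault

detected = checksum(memory) != golden
print(f"bit {bit} flipped at address {addr}; detected = {detected}")
```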
NASA Technical Reports Server (NTRS)
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara
2010-01-01
The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and the steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process, and of future steps for integrating the model in a live operational environment.
NASA Astrophysics Data System (ADS)
Marchandon, Mathilde; Vergnolle, Mathilde; Sudhaus, Henriette; Cavalié, Olivier
2018-02-01
In this study, we reestimate the source model of the 1997 Mw 7.2 Zirkuh earthquake (northeastern Iran) by jointly optimizing intermediate-field interferometric synthetic aperture radar (InSAR) data and near-field optical correlation data using a two-step fault modeling procedure. First, we estimate the geometry of the multisegmented Abiz fault using a genetic algorithm. Then, we discretize the fault segments into subfaults and invert the data to image the slip distribution on the fault. Our joint-data model, although similar to the InSAR-based model to first order, highlights differences in the fault dip and slip distribution. Our preferred model dips ~80° west in the northern part of the fault and ~75° east in the southern part, and shows three disconnected high-slip zones separated by low-slip zones. The low-slip zones are located where the Abiz fault shows geometric complexities and where the aftershocks are located. We interpret this rough slip distribution as three asperities separated by geometrical barriers that impede the rupture propagation. Finally, no shallow slip deficit is found for the overall rupture, except on the central segment, where it could be due to off-fault deformation in Quaternary deposits.
How does damage affect rupture propagation across a fault stepover?
NASA Astrophysics Data System (ADS)
Cooke, M. L.; Savage, H. M.
2011-12-01
We investigate the potential for fault damage to influence earthquake rupture at fault step-overs using a mechanical numerical model that explicitly includes the generation of cracks around faults. We compare the off-fault fracture patterns and slip profiles generated along faults with a variety of frictional slip-weakening distances and step-over geometries. Models with greater damage facilitate the transfer of slip to the second fault. Increasing the separation and decreasing the overlap distance reduce the transfer of slip across the step-over. This is consistent with observations of ruptures stopping at step-over separations greater than 4 km (Wesnousky, 2006). In cases of slip transfer, rupture is often passed to the second fault before the damage-zone cracks of the first fault reach the second fault. This implies that stresses from the damage fracture tips are transmitted elastically to the second fault to trigger the onset of slip along it. Consequently, the growth of damage facilitates the transfer of rupture from one fault to another across the step-over. In addition, the rupture propagates faster along the damage-producing fault than along a rougher fault that does not produce damage. While this result seems counter to the understanding that damage slows rupture propagation, which is documented in our models with pre-existing damage, these results suggest an additional process: slip along the newly created damage may unclamp portions of the fault ahead of the rupture and promote faster rupture. We simulate the M7.1 Hector Mine earthquake and compare the generated fracture patterns to maps of surface damage. Because we know the stress drop during the earthquake as well as the detailed damage pattern, we may begin to constrain parameters such as the slip-weakening distance along portions of the faults that ruptured in the Hector Mine earthquake.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in practice is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which identifies the most effective diagnostic test to perform at each step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and its validity is verified. PMID:25685841
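A toy version of such a condition/failure-mode/symptom network, using the pgmpy library (recent pgmpy API assumed); the structure and probabilities are invented for illustration, not the paper's model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Failure mode -> symptoms; probabilities are illustrative only.
model = BayesianNetwork([("Overheating", "DGA_alarm"),
                         ("Overheating", "Temp_high")])
model.add_cpds(
    TabularCPD("Overheating", 2, [[0.95], [0.05]]),              # prior
    TabularCPD("DGA_alarm", 2, [[0.9, 0.2], [0.1, 0.8]],         # P(symptom|mode)
               evidence=["Overheating"], evidence_card=[2]),
    TabularCPD("Temp_high", 2, [[0.85, 0.1], [0.15, 0.9]],
               evidence=["Overheating"], evidence_card=[2]),
)
assert model.check_model()

infer = VariableElimination(model)
# One evidence item acquired so far; a multistep mechanism would next
# pick the most informative remaining test (here, the temperature check).
print(infer.query(["Overheating"], evidence={"DGA_alarm": 1}))
```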
Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data
Hilley, G.E.; Delong, S.; Prentice, C.; Blisniuk, K.; Arrowsmith, J.R.
2010-01-01
Models of fault scarp morphology have previously been used to infer the relative ages of different fault scarps in a fault zone, using labor-intensive ground surveying. We present a method for automatically extracting scarp morphologic ages from high-resolution digital topography. Scarp degradation is modeled as a diffusive mass-transport process in the across-scarp direction. The second derivative of the modeled degraded fault scarp was normalized to yield the best-fitting (in a least-squares sense) scarp height at each point, and the signal-to-noise ratio identified those areas containing scarp-like topography. We applied this method to three areas along the San Andreas Fault and found correspondence between the mapped geometry of the fault and that extracted by our analysis. This suggests that the spatial distribution of scarp ages may be revealed by such an analysis, allowing the recent temporal development of a fault zone to be imaged along its length.
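A sketch of the underlying diffusion model: an initially steep scarp relaxes toward an error-function profile whose width grows with the morphologic age κt, which is the quantity profile fitting recovers; parameter values are illustrative.

```python
import numpy as np
from scipy.special import erf

def scarp_profile(x, a, kt, far_slope=0.0):
    """Diffusion-degraded scarp of half-height a (m) after morphologic
    age kt = kappa * t (m^2), plus an optional far-field slope."""
    return a * erf(x / (2.0 * np.sqrt(kt))) + far_slope * x

x = np.linspace(-50, 50, 201)             # m, across-scarp distance
for kt in (1.0, 10.0, 100.0):             # increasing morphologic age
    grad = np.gradient(scarp_profile(x, a=2.0, kt=kt), x)
    print(f"kappa*t = {kt:6.1f} m^2 -> max scarp slope = {grad.max():.3f}")
```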
A fault isolation method based on the incidence matrix of an augmented system
NASA Astrophysics Data System (ADS)
Chen, Changxiong; Chen, Liping; Ding, Jianwan; Wu, Yizhong
2018-03-01
This paper proposes a new approach for isolating faults and quickly identifying the redundant sensors of a system. By introducing fault signals as additional state variables, an augmented system model is constructed from the original system model, the fault signals, and the sensor measurement equations. The structural properties of the augmented system model are analyzed. From the viewpoint of evaluating fault variables, the computational correlations of the fault variables in the system can be found, which imply the fault isolation properties of the system. Compared with previous isolation approaches, the new approach can quickly find the faults that can be isolated using exclusive residuals and, at the same time, identify the redundant sensors in the system, which is useful for the design of a diagnosis system. A simulation of a four-tank system is reported to validate the proposed method.
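The isolability question can be illustrated with a simple fault-signature (incidence) matrix check: a fault is isolable when its residual signature is non-zero and distinct from every other fault's. This generic check is a stand-in for, not a reproduction of, the paper's algorithm.

```python
import numpy as np

# Rows are residuals, columns are faults; 1 = residual sensitive to fault.
# Matrix values are illustrative.
#               f1 f2 f3 f4
S = np.array([[1, 1, 0, 0],    # residual r1
              [0, 1, 1, 0],    # residual r2
              [1, 0, 1, 0]])   # residual r3

for j in range(S.shape[1]):
    col = S[:, j]
    unique = not any(np.array_equal(col, S[:, k])
                     for k in range(S.shape[1]) if k != j)
    print(f"f{j+1}: signature {col}, isolable = {bool(col.any()) and unique}")
# f4 has an all-zero signature: not even detectable with these residuals.
```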
Fault latency in the memory - An experimental study on VAX 11/780
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1986-01-01
Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. This time is difficult to measure because neither the moment a fault occurs nor the exact moment an error is generated is known. This paper describes an experiment to accurately study fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for stuck-at-0 (s-a-0) and stuck-at-1 (s-a-1) permanent fault models. Results show that the mean fault latency of an s-a-0 fault is nearly 5 times that of an s-a-1 fault. Large variations in fault latency are found for different regions in memory. An analysis-of-variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.
NASA Astrophysics Data System (ADS)
Hughes, A. N.; Benesh, N. P.; Alt, R. C., II; Shaw, J. H.
2011-12-01
Contractional fault-related folds form as stratigraphic layers of rock are deformed by displacement on an underlying fault. Specifically, fault-bend folds form as rock strata are displaced over non-planar faults, and fault-propagation folds form at the tips of faults as they propagate upward through sedimentary layers. Both types of structures are commonly observed in fold-and-thrust belts and passive-margin settings throughout the world. Fault-bend and fault-propagation folds are often seen in close proximity to each other, and kinematic analysis of some fault-related folds suggests that they have undergone a transition in structural style from fault-bend to fault-propagation folding during their deformational history. Because of the similarity of the conditions in which both fault-bend and fault-propagation folds are found, the circumstances that promote the formation of one structural style over the other are not immediately evident. In an effort to better understand this issue, we have investigated the role of mechanical and geometric factors in the transition between fault-bend folding and fault-propagation folding using a series of models developed with the discrete element method (DEM). The DEM models employ an aggregate of circular, frictional disks that incorporate bonding at particle contacts to represent the numerical stratigraphy. A vertical wall moving at a fixed velocity drives displacement of the hanging-wall section along a pre-defined fault ramp and detachment. We use this setup to study the transition between fault-bend and fault-propagation folding by varying the mechanical strength, stratigraphic layering, fault geometry, and boundary conditions of the model. In most circumstances, displacement of the hanging wall leads to the development of an emergent fold as the hanging-wall material passes across the fault bend. In other cases, however, an emergent fault propagates upward through the sedimentary section, associated with the development of a steep, narrow front limb characteristic of fault-propagation folding. We find that the boundary conditions imposed on the far wall of the model have the strongest influence on structural style, while other factors, such as fault dip and mechanical strength, play secondary roles. By testing a range of values for each parameter, we identify the range of values under which the transition occurs. Additionally, we find that the transition between fault-bend and fault-propagation folding is gradual, with structures in the transitional regime showing evidence of each structural style during a portion of their history. The primary role that boundary conditions play in determining fault-related folding style implies that the growth of natural structures may be affected by the emergence of adjacent structures or by distal variations in detachment strength. We explore these relationships using natural examples from various fold-and-thrust belts.
Achieving Agreement in Three Rounds With Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2015-01-01
A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F + 1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and to the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
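For context, a minimal simulation of the classic Oral Messages majority vote on which the algorithm builds, with one traitorous relay among four lieutenants; this sketches Lamport-style OM(1), not the paper's three-round algorithm.

```python
from collections import Counter

def om1(commander_value, traitors, n=4):
    """One relay round of Oral Messages: the commander sends its value,
    lieutenants relay what they received, and each decides by majority.
    A traitorous lieutenant flips every value it relays (one simple
    adversary among many possible)."""
    received = [commander_value] * n
    final = []
    for i in range(n):
        relayed = [received[j] if j not in traitors else 1 - received[j]
                   for j in range(n) if j != i]
        votes = relayed + [received[i]]
        final.append(Counter(votes).most_common(1)[0][0])
    return final

# n=4 lieutenants tolerate F=1 traitor: loyal nodes still agree.
print(om1(commander_value=0, traitors={2}))  # -> [0, 0, 0, 0]
```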
NASA Astrophysics Data System (ADS)
Norbeck, Jack H.; Horne, Roland N.
2018-05-01
The maximum expected earthquake magnitude is an important parameter in seismic hazard and risk analysis because of its strong influence on ground motion. In the context of injection-induced seismicity, the processes that control how large an earthquake will grow may be influenced by operational factors under engineering control as well as natural tectonic factors. Determining the relative influence of these effects on maximum magnitude will impact the design and implementation of induced seismicity management strategies. In this work, we apply a numerical model that considers the coupled interactions of fluid flow in faulted porous media and quasidynamic elasticity to investigate the earthquake nucleation, rupture, and arrest processes for cases of induced seismicity. We find that under certain conditions, earthquake ruptures are confined to a pressurized region along the fault with a length-scale that is set by injection operations. However, earthquakes are sometimes able to propagate as sustained ruptures outside of the zone that experienced a pressure perturbation. We propose a faulting criterion that depends primarily on the state of stress and the earthquake stress drop to characterize the transition between pressure-constrained and runaway rupture behavior.
Surveillance system and method having an operating mode partitioned fault classification model
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.
A Dynamic Finite Element Method for Simulating the Physics of Faults Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic finite element method using a novel high-level scripting language to describe the physical equations, boundary conditions, and time integration scheme. The library we use is the parallel Finley library, a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We treat faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using an explicit Verlet-type scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems, previously formulated for the lattice solid model, to the finite element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new finite element model, single- and multi-fault simulation examples are presented.
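A one-dimensional numpy sketch of explicit Verlet-type time stepping for elastodynamics, the kind of update the abstract describes; the discretisation and parameters are illustrative and unrelated to Finley/escript internals.

```python
import numpy as np

n, dx = 200, 10.0                  # nodes, spacing (m)
rho, E = 2700.0, 5.0e10            # density (kg/m^3), stiffness (Pa)
c = np.sqrt(E / rho)               # wave speed, ~4300 m/s
dt = 0.5 * dx / c                  # CFL-stable time step

x = np.arange(n) * dx
u = np.exp(-(((x - x.mean()) / 50.0) ** 2))   # initial displacement pulse
v = np.zeros(n)

def accel(u):
    """Acceleration from the 1-D elastic operator, fixed (u = 0) ends."""
    a = np.zeros_like(u)
    a[1:-1] = (E / rho) * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return a

a = accel(u)
for _ in range(150):               # velocity-Verlet updates
    u += v * dt + 0.5 * a * dt**2
    a_new = accel(u)
    v += 0.5 * (a + a_new) * dt
    a = a_new
print("pulse splits into two travelling waves; max |u| =",
      round(np.abs(u).max(), 3))
```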
NASA Astrophysics Data System (ADS)
Mahya, M. J.; Sanny, T. A.
2017-04-01
The Lembang and Cimandiri faults are active faults in West Java that threaten people living near them with earthquake and surface-deformation hazards. To determine the deformation, GPS measurements around the Lembang and Cimandiri faults were conducted, and the data were processed to obtain the horizontal velocity at each GPS station, by the Graduate Research of Earthquake and Active Tectonics (GREAT) Department of the Geodesy and Geomatics Engineering Study Program, ITB. The purpose of this study is to model the displacement distribution, as a deformation parameter, in the area along the Lembang and Cimandiri faults using the 2-dimensional boundary element method (BEM), taking as input the horizontal velocities corrected for the effect of Sunda plate horizontal movement. The assumptions used in the modeling stage are that the deformation occurs in a homogeneous and isotropic medium and that the stresses acting on the faults are elastostatic. The modeling results show that the Lembang fault has a left-lateral slip component and is divided into two segments. A lineament oriented in the southwest-northeast direction is observed near Tangkuban Perahu Mountain, separating the eastern and western segments of the Lembang fault. The displacement pattern of the Cimandiri fault shows that it is divided into an eastern segment with a right-lateral slip component and a western segment with a left-lateral slip component, separated by a northwest-southeast-oriented lineament at the western part of Gede Pangrango Mountain. The displacement between the Lembang and Cimandiri faults is nearly zero, indicating that the two faults are not connected to each other and that the area between them is relatively safe for infrastructure development.
Comparison of fault-related folding algorithms to restore a fold-and-thrust-belt
NASA Astrophysics Data System (ADS)
Brandes, Christian; Tanner, David
2017-04-01
Fault-related folding refers to the contemporaneous evolution of folds as a consequence of fault movement. It is a common deformation process in the upper crust that occurs worldwide in accretionary wedges, fold-and-thrust belts, and intra-plate settings, in strike-slip, compressional, and extensional regimes. Over the last 30 years, different algorithms have been developed to simulate the kinematic evolution of fault-related folds. All these models of fault-related folding include similar simplifications and limitations and use the same kinematic behaviour throughout the model (Brandes & Tanner, 2014). We used a natural example of fault-related folding from the Limón fold-and-thrust belt in eastern Costa Rica to test two different algorithms and to compare the resulting geometries. A thrust fault and its hanging-wall anticline were restored using both the trishear method (Allmendinger, 1998; Zehnder & Allmendinger, 2000) and the fault-parallel flow approach (Ziesch et al., 2014); both methods are widely used in academia and industry. The two methods restore the hanging-wall folds above the thrust fault in substantially different fashions. This is largely a function of the propagation-to-slip ratio of the thrust, which controls the geometry of the related anticline. Understanding the factors controlling anticline evolution is important for the evaluation of potential hydrocarbon reservoirs and the characterization of fault processes. References: Allmendinger, R.W., 1998. Inverse and forward numerical modeling of trishear fault propagation folds. Tectonics, 17, 640-656. Brandes, C., Tanner, D.C., 2014. Fault-related folding: a review of kinematic models and their application. Earth Science Reviews, 138, 352-370. Zehnder, A.T., Allmendinger, R.W., 2000. Velocity field for the trishear model. Journal of Structural Geology, 22, 1009-1014. Ziesch, J., Tanner, D.C., Krawczyk, C.M., 2014. Strain associated with the fault-parallel flow algorithm during kinematic fault displacement. Mathematical Geosciences, 46(1), 59-73.
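A sketch of the symmetric linear trishear velocity field of Zehnder & Allmendinger (2000) (their s = 1 case), evaluated ahead of a fault tip at the origin; the slip rate and apical half-angle are illustrative.

```python
import numpy as np

def trishear_velocity(x, y, v0=1.0, phi=np.deg2rad(30.0)):
    """Velocity (vx, vy) at a point (x, y) in fault-tip coordinates,
    x along the fault. Hanging wall translates at v0, footwall is
    fixed, and velocities vary linearly across the triangular zone."""
    m = np.tan(phi)
    if x <= 0.0:                          # behind the tip: rigid fault slip
        return (v0, 0.0) if y > 0.0 else (0.0, 0.0)
    if y >= x * m:                        # hanging wall
        return (v0, 0.0)
    if y <= -x * m:                       # footwall
        return (0.0, 0.0)
    eta = y / (x * m)                     # -1..1 across the trishear zone
    vx = 0.5 * v0 * (eta + 1.0)
    vy = 0.25 * v0 * m * (eta**2 - 1.0)   # from incompressibility
    return (vx, vy)

for y in (-400.0, 0.0, 400.0):            # profile 1 km ahead of the tip
    print(y, trishear_velocity(1000.0, y))
```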
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
One of the characteristic features of today's construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, with its project-specific purpose, its own structural complexity, the owner's expectations, ground conditions unique to its location, and its own dynamics. Failure costs and the costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many threats to the successful completion of such projects. This paper discusses the process of effective risk management in complex construction projects that use renewable energy sources, taking as its example the realization phase of the ENERGIS teaching-laboratory building, from the point of view of its general contractor, DORBUD S.A. The paper proposes a new approach to risk management for complex construction projects in which renewable energy sources are applied. The risk management process is divided into six stages: gathering information; identifying the top critical project risks resulting from the project complexity; constructing a fault tree for each top critical risk; logical analysis of the fault tree; quantitative risk assessment applying fuzzy logic; and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed, and the risk assessment was carried out by applying fuzzy fault tree analysis to one top critical risk as an example. Applying fuzzy set theory to the proposed model reduced uncertainty and eliminated the difficulty of obtaining crisp values for basic-event probabilities, a common problem when experts are asked to assign an exact probability to each unwanted event.
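A minimal sketch of fuzzy fault-tree gate propagation with triangular fuzzy probabilities, using the common componentwise approximation; the events and expert estimates below are invented, not those elicited for the ENERGIS project.

```python
import numpy as np

# Triangular fuzzy probabilities (low, mode, high).
def fuzzy_and(*events):
    """AND gate: componentwise product of the triangular bounds."""
    return tuple(np.prod([e[i] for e in events]) for i in range(3))

def fuzzy_or(*events):
    """OR gate: componentwise 1 - prod(1 - p)."""
    return tuple(1.0 - np.prod([1.0 - e[i] for e in events]) for i in range(3))

ground_risk   = (0.02, 0.05, 0.10)   # unforeseen ground conditions
design_change = (0.05, 0.10, 0.20)   # late design change to RES installation
crew_error    = (0.01, 0.03, 0.08)   # installation crew error

# Top event: delay of the renewable-energy installation
top = fuzzy_or(ground_risk, fuzzy_and(design_change, crew_error))
print("fuzzy probability of top event (l, m, u):",
      tuple(round(v, 4) for v in top))
```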
Deformation pattern during normal faulting: A sequential limit analysis
NASA Astrophysics Data System (ADS)
Yuan, X. P.; Maillot, B.; Leroy, Y. M.
2017-02-01
We model in 2-D the formation and development of half-graben faults above a low-angle normal detachment fault. The model, based on a "sequential limit analysis" accounting for mechanical equilibrium and energy dissipation, simulates the incremental deformation of a frictional, cohesive, fluid-saturated rock wedge above the detachment. Two modes of deformation, gravitational collapse and tectonic collapse, are revealed, which compare well with the results of critical Coulomb wedge theory. We additionally show that the fault and the axial surface of the half-graben rotate as topographic subsidence increases. This progressive rotation causes some footwall material to be sheared and transferred into the hanging wall, creating a specific region called the foot-to-hanging-wall (FHW) zone. The model allows additional effects to be introduced, such as weakening of the faults once they have slipped and sedimentation in their hanging wall. These processes are shown to control the size of the FHW zone and the number of fault-bounded blocks it eventually contains. Fault weakening tends to make fault rotation more discontinuous, and this results in the FHW zone containing multiple blocks of intact material separated by faults. By compensating for the topographic subsidence of the half-graben, sedimentation tends to slow the fault rotation, and this reduces the size of the FHW zone and its number of fault-bounded blocks. We apply the new approach to reproduce the faults observed along a seismic line in the Southern Jeanne d'Arc Basin, Grand Banks, offshore Newfoundland, where a single block exists in the hanging wall of the principal fault. The model explains this situation well, provided that a slow sedimentation rate in the Lower Jurassic is assumed, followed by a rate increasing over time as the main detachment fault grew.
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX to SSG-9 on realizing seismic hazard assessment for nuclear facilities, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Given these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacement arose. In this study, we introduce the concept of the evaluation methods for fault displacement and then show some tentative analysis results for the deterministic method, as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on the characterized source model. This research was part of the 2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
NASA Technical Reports Server (NTRS)
Holden, K.L.; Boyer, J.L.; Sandor, A.; Thompson, S.G.; McCann, R.S.; Begault, D.R.; Adelstein, B.D.; Beutter, B.R.; Stone, L.S.
2009-01-01
The goal of the Information Presentation Directed Research Project (DRP) is to address design questions related to the presentation of information to the crew. The major areas of work, or subtasks, within this DRP are: 1) Displays, 2) Controls, 3) Electronic Procedures and Fault Management, and 4) Human Performance Modeling. This DRP is a collaborative effort between researchers at Johnson Space Center and Ames Research Center.
Detection and diagnosis of bearing and cutting tool faults using hidden Markov models
NASA Astrophysics Data System (ADS)
Boutros, Tony; Liang, Ming
2011-08-01
Over the last few decades, research into new fault detection and diagnosis techniques for machining processes and rotating machinery has attracted increasing interest worldwide. This development was mainly stimulated by the rapid advance of industrial technologies and the increasing complexity of machining and machinery systems. In this study, the discrete hidden Markov model (HMM) is applied to detect and diagnose mechanical faults. The technique is tested and validated successfully using two scenarios: tool wear/fracture and bearing faults. In the first case the model correctly detected the state of the tool (i.e., sharp, worn, or broken), whereas in the second application the model classified the severity of the fault seeded in two different engine bearings. The success rate obtained in our tests for fault severity classification was above 95%. In addition to the fault severity, a location index was developed to determine the fault location. This index has been applied to determine the location (inner race, ball, or outer race) of a bearing fault with an average success rate of 96%. The training time required to develop the HMMs was less than 5 s in both monitoring cases.
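A sketch of the likelihood-scoring pattern behind HMM-based diagnosis: one model per machine condition, and the best-scoring model labels a new recording. The paper uses discrete HMMs; hmmlearn's GaussianHMM is substituted here for brevity, and all signals are synthetic.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)
# Synthetic 1-D vibration features per tool condition (illustrative).
train = {
    "sharp":  rng.normal(0.0, 1.0, (400, 1)),
    "worn":   rng.normal(1.5, 1.2, (400, 1)),
    "broken": rng.normal(4.0, 2.0, (400, 1)),
}

# Train one HMM per condition.
models = {}
for state, X in train.items():
    models[state] = hmm.GaussianHMM(n_components=2, n_iter=25,
                                    random_state=0).fit(X)

# Classify an unlabeled recording by maximum log-likelihood.
test = rng.normal(1.5, 1.2, (100, 1))           # actually a "worn" signal
scores = {s: m.score(test) for s, m in models.items()}
print(max(scores, key=scores.get), scores)      # -> "worn" expected
```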
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-04-24
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault-tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
Casale, Gabriele; Pratt, Thomas L.
2015-01-01
The Yakima fold and thrust belt (YFTB) deforms the Columbia River Basalt Group flows of Washington State. The YFTB fault geometries and slip rates are crucial parameters for seismic‐hazard assessments of nearby dams and nuclear facilities, yet there are competing models for the subsurface fault geometry involving shallowly rooted versus deeply rooted fault systems. The YFTB is also thought to be analogous to the evenly spaced wrinkle ridges found on other terrestrial planets. Using seismic reflection data, borehole logs, and surface geologic data, we tested two proposed kinematic end‐member thick‐ and thin‐skinned fault models beneath the Saddle Mountains anticline of the YFTB. Observed subsurface geometry can be produced by 600–800 m of heave along a single listric‐reverse fault or ∼3.5 km of slip along two superposed low‐angle thrust faults. Both models require decollement slip between 7 and 9 km depth, resulting in greater fault areas than sometimes assumed in hazard assessments. Both models require initial slip much earlier than previously thought and may provide insight into the subsurface geometry of analogous comparisons to wrinkle ridges observed on other planets.
Fault Detection for Automotive Shock Absorber
NASA Astrophysics Data System (ADS)
Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis
2015-11-01
Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be multiplicative in nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
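For illustration, the following sketch takes the parameter-identification route with recursive least squares; the linear damper model F = c*v + k*x, the nominal coefficient, and the fault threshold are all assumptions for this sketch, not the authors' algorithm.

import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares update with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam
    return theta, P

c_healthy = 1200.0                          # assumed nominal damping coefficient (N s/m)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * t)                   # displacement across the damper
v = 2 * np.pi * np.cos(2 * np.pi * t)       # relative velocity
F = 0.6 * c_healthy * v + 30.0 * x + rng.normal(0.0, 5.0, t.size)  # faulty: 40% damping loss

theta, P = np.zeros(2), np.eye(2) * 1e3     # estimate [damping c, stiffness k]
for phi, y in zip(np.column_stack([v, x]), F):
    theta, P = rls_step(theta, P, phi, y)
print(theta[0] < 0.7 * c_healthy)           # True -> flag a multiplicative damping fault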
Ductile bookshelf faulting: A new kinematic model for Cenozoic deformation in northern Tibet
NASA Astrophysics Data System (ADS)
Zuza, A. V.; Yin, A.
2013-12-01
It has long been recognized that the most dominant features on the northern Tibetan Plateau are the >1000 km left-slip strike-slip faults (e.g., the Altyn Tagh, Kunlun, and Haiyuan faults). Early workers used the presence of these faults, especially the Kunlun and Haiyuan faults, as evidence for eastward lateral extrusion of the plateau, but their low documented offsets--100s of km or less--cannot account for the 2500 km of convergence between India and Asia. Instead, these faults may result from north-south right-lateral simple shear due to the northward indentation of India, which leads to the clockwise rotation of the strike-slip faults and left-lateral slip (i.e., bookshelf faulting). In this view, deformation is still localized on discrete fault planes, and 'microplates' or blocks rotate and/or translate with little internal deformation. Because significant internal deformation occurs across northern Tibet within strike-slip-bounded domains, there is a need for a coherent model that describes all of the deformational features. We also note the following: (1) geologic offsets and Quaternary slip rates of both the Kunlun and Haiyuan faults vary along strike and appear to diminish to the east, (2) the faults appear to kinematically link with thrust belts (e.g., Qilian Shan, Liupan Shan, Longmen Shan, and Qimen Tagh) and extensional zones (e.g., Shanxi, Yinchuan, and Qinling grabens), and (3) temporal relationships exist between the major deformation zones and the strike-slip faults (e.g., simultaneous enhanced deformation and offset in the Qilian Shan and Liupan Shan, and the Haiyuan fault, at 8 Ma). We propose a new kinematic model to describe the active deformation in northern Tibet: a ductile-bookshelf-faulting model. In this model, right-lateral simple shear leads to clockwise vertical-axis rotation of the Qaidam and Qilian blocks, and left-slip faulting. This motion creates regions of compression and extension, depending on the local boundary conditions (e.g., rigid Tarim vs. eastern China moving eastward relative to Eurasia), which results in the development of thrust and extensional belts. These zones heterogeneously deform the wall rock of the major strike-slip faults, causing the faults to stretch (an idea described by W.D. Means, 1989, Geology). This effect is further enhanced by differential fault rotation, leading to more slip in the west, where the effect of India's indentation is more pronounced, than in the east. To investigate the feasibility of this model, we have examined geologic offsets, Quaternary fault slip rates, and GPS velocities, both from the existing literature and from our own observations. We compare offsets with the estimated shortening and extensional strain in the wall rocks of the strike-slip faults. For example, if this model is valid, the slip on the eastern segment of the Haiyuan fault (i.e., ~25 km) should be compatible with shortening in the Liupan Shan and extension in the Yinchuan graben. We also present simple analogue model experiments to document the strain accumulated in bookshelf fault systems under different initial and boundary conditions (e.g., rigid vs. free vs. moving boundaries, heterogeneous or homogeneous materials, variable strain rates). Comparing these experimentally derived strain distributions with those observed within the plateau can help elucidate which factors dominantly control regional deformation.
NASA Astrophysics Data System (ADS)
Wan, Yongge; Shen, Zheng-Kang; Bürgmann, Roland; Sun, Jianbao; Wang, Min
2017-02-01
We revisit the problem of coseismic rupture of the 2008 Mw7.9 Wenchuan earthquake. Precise determination of the fault structure and slip distribution provides critical information about the mechanical behaviour of the fault system and earthquake rupture. We use all the geodetic data available, craft a more realistic Earth structure and fault model compared to previous studies, and employ a nonlinear inversion scheme to optimally solve for the fault geometry and slip distribution. Compared to a homogeneous elastic half-space model and laterally uniform layered models, adopting separate layered elastic structure models on both sides of the Beichuan fault significantly improved data fitting. Our results reveal that: (1) The Beichuan fault is listric in shape, with near surface fault dip angles increasing from ˜36° at the southwest end to ˜83° at the northeast end of the rupture. (2) The fault rupture style changes from predominantly thrust at the southwest end to dextral at the northeast end of the fault rupture. (3) Fault slip peaks near the surface for most parts of the fault, with ˜8.4 m thrust and ˜5 m dextral slip near Hongkou and ˜6 m thrust and ˜8.4 m dextral slip near Beichuan, respectively. (4) The peak slips are located around fault geometric complexities, suggesting that earthquake style and rupture propagation were determined by fault zone geometric barriers. Such barriers exist primarily along restraining left stepping discontinuities of the dextral-compressional fault system. (5) The seismic moment released on the fault above 20 km depth is 8.2×1021 N m, corresponding to an Mw7.9 event. The seismic moments released on the local slip concentrations are equivalent to events of Mw7.5 at Yingxiu-Hongkou, Mw7.3 at Beichuan-Pingtong, Mw7.2 near Qingping, Mw7.1 near Qingchuan, and Mw6.7 near Nanba, respectively. (6) The fault geometry and kinematics are consistent with a model in which crustal deformation at the eastern margin of the Tibetan plateau is decoupled by differential motion across a decollement in the mid crust, above which deformation is dominated by brittle reverse faulting and below which deformation occurs by viscous horizontal shortening and vertical thickening.
NASA Astrophysics Data System (ADS)
Chinn, L.; Blythe, A. E.; Fendick, A.
2012-12-01
New apatite fission-track ages show varying rates of vertical exhumation at the eastern terminus of the Garlock fault zone. The Garlock fault zone is a 260 km long east-northeast striking strike-slip fault with as much as 64 km of sinistral offset. The Garlock fault zone terminates in the east in the Avawatz Mountains, at the intersection with the dextral Southern Death Valley fault zone. Although motion along the Garlock fault west of the Avawatz Mountains is considered purely strike-slip, uplift and exhumation of bedrock in the Avawatz Mountains south of the Garlock fault, as recently as 5 Ma, indicate that transpression plays an important role at this location and is perhaps related to a restraining bend as the fault wraps around and terminates southeastward along the Avawatz Mountains. In this study we complement extant thermochronometric ages from within the Avawatz core with new low-temperature fission-track ages from samples collected within the adjacent Garlock and Southern Death Valley fault zones. These thermochronometric data indicate that vertical exhumation rates vary within the fault zone. Two Miocene ages (10.2 (+5.0/-3.4) Ma, 9.0 (+2.2/-1.8) Ma) indicate at least ~3.3 km of vertical exhumation at ~0.35 mm/yr, assuming a 30°C/km geothermal gradient, along a 2 km transect parallel and adjacent to the Mule Spring fault. An older Eocene age (42.9 (+8.7/-7.3) Ma) indicates ~3.3 km of vertical exhumation at ~0.08 mm/yr. These results are consistent with published exhumation rates of 0.35 mm/yr between ~7 and ~4 Ma and 0.13 mm/yr between ~15 and ~9 Ma, as determined by apatite fission-track and U-Th/He thermochronometry in the hanging wall of the Mule Spring fault. Similar exhumation rates on both sides of the Mule Spring fault support three separate models: 1) thrusting is no longer active along the Mule Spring fault, 2) faulting is dominantly strike-slip at the sample locations, or 3) Miocene-to-present uplift and exhumation is below the detection level of apatite fission-track thermochronometry. In model #1, slip on the Mule Spring fault may have propagated towards the range front and may be responsible for the fault-propagation folding currently observed along the northern branch of the Southern Death Valley fault zone. Model #2 would help determine where faulting has historically included a component of thrust faulting east of the sample locations. Model #3 would further constrain the total offset along the Mule Spring fault from the Miocene to present. Anticipated fission-track and U-Th/He data will help distinguish between these alternative models.
NASA Astrophysics Data System (ADS)
Benesh, N. P.; Plesch, A.; Shaw, J. H.; Frost, E. K.
2007-03-01
Using the discrete element modeling method, we examine the two-dimensional nature of fold development above an anticlinal bend in a blind thrust fault. Our models were composed of numerical disks bonded together to form pregrowth strata overlying a fixed fault surface. This pregrowth package was then driven along the fault surface at a fixed velocity using a vertical backstop. Additionally, new particles were generated and deposited onto the pregrowth strata at a fixed rate to produce sequential growth layers. Models with and without mechanical layering were used, and the process of folding was analyzed in comparison with fold geometries predicted by kinematic fault bend folding as well as those observed in natural settings. Our results show that parallel fault bend folding behavior holds to first order in these models; however, a significant decrease in limb dip is noted for younger growth layers in all models. On the basis of comparisons to natural examples, we believe this deviation from kinematic fault bend folding to be a realistic feature of fold development resulting from an axial zone of finite width produced by materials with inherent mechanical strength. These results have important implications for how growth fold structures are used to constrain slip and paleoearthquake ages above blind thrust faults. Most notably, deformation localized about axial surfaces and structural relief across the fold limb seem to be the most robust observations that can readily constrain fault activity and slip. In contrast, fold limb width and shallow growth layer dips appear more variable and dependent on mechanical properties of the strata.
Learning in the model space for cognitive fault diagnosis.
Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin
2014-01-01
The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
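A minimal sketch of the idea, assuming AR(p) coefficients as the model-space representation and scikit-learn's one-class SVM as the one-class learner; window length, model order, and the synthetic signals are illustrative, not the paper's choices.

import numpy as np
from sklearn.svm import OneClassSVM

def ar_coeffs(x, p=4):
    """Least-squares AR(p) fit; the coefficient vector represents the window in model space."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def model_space(signal, win=200, step=50, p=4):
    """Map a signal to a cloud of fitted-model parameter vectors via a sliding window."""
    return np.array([ar_coeffs(signal[s:s + win], p)
                     for s in range(0, len(signal) - win + 1, step)])

healthy = model_space(np.random.randn(5000))       # models fitted to healthy data
detector = OneClassSVM(nu=0.05).fit(healthy)       # one-class learner in the model space
flags = detector.predict(model_space(np.random.randn(1000)))  # +1 healthy, -1 faulty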
The Role of Coseismic Coulomb Stress Changes in Shaping the Hard Link Between Normal Fault Segments
NASA Astrophysics Data System (ADS)
Hodge, M.; Fagereng, Å.; Biggs, J.
2018-01-01
The mechanism and evolution of fault linkage is important in the growth and development of large faults. Here we investigate the role of coseismic stress changes in shaping the hard links between parallel normal fault segments (or faults), by comparing numerical models of the Coulomb stress change from simulated earthquakes on two en echelon fault segments to natural observations of hard-linked fault geometry. We consider three simplified linking fault geometries: (1) fault bend, (2) breached relay ramp, and (3) strike-slip transform fault. We consider scenarios where either one or both segments rupture and vary the distance between segment tips. Fault bends and breached relay ramps are favored where segments underlap or when the strike-perpendicular distance between overlapping segments is less than 20% of their total length, matching all 14 documented examples. Transform fault linkage geometries are preferred when overlapping segments are laterally offset at larger distances. Few transform faults exist in continental extensional settings, and our model suggests that propagating faults or fault segments may first link through fault bends or breached ramps before reaching sufficient overlap for a transform fault to develop. Our results suggest that Coulomb stresses arising from multisegment ruptures or repeated earthquakes are consistent with natural observations of the geometry of hard links between parallel normal fault segments.
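For reference, a minimal sketch of the Coulomb failure stress change such models evaluate; the extension-positive sign convention and the effective friction coefficient are assumptions, not values from this study.

def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """dCFS = d_tau + mu_eff * d_sigma_n; positive values promote failure on the
    receiver fault (d_sigma_n > 0 means unclamping in this convention)."""
    return d_tau + mu_eff * d_sigma_n

print(coulomb_stress_change(d_tau=0.5, d_sigma_n=-0.2))   # 0.42 MPa; clamping reduces dCFS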
NASA Astrophysics Data System (ADS)
Buijze, Loes; Guo, Yanhuang; Niemeijer, André R.; Ma, Shengli; Spiers, Christopher J.
2017-04-01
Faults in the upper crust cross-cut many different lithologies, which causes the composition of the fault rocks to vary. Each fault rock segment may have specific mechanical properties, e.g. there may be stronger and weaker segments, and segments prone to unstable slip or creep. This leads to heterogeneous deformation and stresses along such faults, and a heterogeneous distribution of seismic events. We address the influence of fault variability on stress, deformation, and seismicity using a combination of scaled laboratory faults and numerical modeling. A vertical fault was created along the diagonal of a 30 x 20 x 5 cm block of PMMA, along which a 2 mm thick gouge layer was deposited. Gouge materials of different characteristics were used to create various segments along the fault: quartz (average strength, stable sliding), kaolinite (weak, stable sliding), and gypsum (average strength, unstable sliding). The sample assembly was placed in a horizontal biaxial deformation apparatus, and shear displacement was enforced along the vertical fault. Multiple observations were made: 1) acoustic emissions were continuously recorded at 3 MHz to observe the occurrence of stick-slips (micro-seismicity), 2) photo-elastic effects (indicative of the differential stress) were recorded in the transparent set of PMMA wall rocks using a high-speed camera, and 3) particle tracking was conducted on a speckle-painted set of PMMA wall rocks to study the deformation in the wall rocks flanking the fault. All three observation methods show how the heterogeneous fault gouge exerts a strong control on the fault behavior. For example, a strong, unstable segment of gypsum flanked by two weaker kaolinite segments shows strong stress concentrations developing near the edges of the strong segment, with most acoustic emissions located at those edges at the same time. The measurements of differential stress, strain, and acoustic emissions provide a strong means to compare the scaled experiment to modeling results. In a finite-element model we reproduce the laboratory experiments, compare the modeled stresses and strains to the observations, and compare the nucleation of seismic instability to the locations of acoustic emissions. The model aids in understanding how the stresses and strains may vary as a result of fault heterogeneity, but also as a result of the boundary conditions inherent to a laboratory setup. The scaled experimental setup and modeling results also provide a means to explain and compare with observations made at a larger scale, for example geodetic and seismological measurements along crustal-scale faults.
NASA Astrophysics Data System (ADS)
Ries, William; Langridge, Robert; Villamor, Pilar; Litchfield, Nicola; Van Dissen, Russ; Townsend, Dougal; Lee, Julie; Heron, David; Lukovic, Biljana
2014-05-01
In New Zealand, we are currently reconciling multiple digital coverages of mapped active faults into a national coverage at a single scale (1:250,000). This seems at first glance to be a relatively simple task. However, the methods used to capture data, the scale of capture, and the initial purpose of the fault mapping have produced datasets with very different characteristics. The New Zealand digital active fault database (AFDB) was initially developed as a way of managing active fault locations and fault-related features within a computer-based spatial framework. The data contained within the AFDB come from a wide range of studies, from plate tectonic (1:500,000) to cadastral (1:2,000) scale. The database was designed to allow capture of field observations and remotely sourced data without a loss in data resolution. This approach has worked well as a method for compiling a centralised database of fault information but not for providing a complete national coverage at a single scale. During the last 15 years other complementary projects have used and also contributed data to the AFDB, most notably the QMAP project (a national series of geological maps completed over 19 years that includes coverage of active and inactive faults at 1:250,000). AFDB linework and attributes were incorporated into this series, but the linework and attributes were simplified to maintain map clarity at 1:250,000 scale. Also, during this period ongoing mapping of active faults has improved upon these data. Other projects of note that have used data from the AFDB include the National Seismic Hazard Model of New Zealand and the Global Earthquake Model (GEM). The main goal of the current project has been to provide the best digital spatial representation of a fault trace at 1:250,000 scale and combine this with the most up-to-date attributes. In some areas this has required a simplification of very fine detailed data and in some cases new mapping to provide a complete coverage. Where datasets have conflicting linework and/or attributes, the data were reviewed through consultation with authors or review of published research to ensure the most up-to-date representation was maintained. The current project aims to provide a coverage that is consistent between the AFDB and the QMAP digital data and to provide a free download of these data on the AFDB website (http://data.gns.cri.nz/af/).
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, "a-priori" models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003, referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon, and Concord-Green Valley faults) be modeled as Type B faults to be consistent with similarly poorly-known faults statewide. As a result, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters, and earthquake probabilities in the WGCEP-2002 report.
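As a hedged sketch of the "closest model" step described above (not the WGCEP implementation), one can pose it as a weighted least-squares problem that keeps rupture rates near the a-priori rates while honoring section slip-rate constraints; all matrices and values below are illustrative.

import numpy as np

def closest_balanced_model(x0, G, v, w=1e3):
    """Find rates x minimizing ||x - x0|| while softly enforcing G x = v via weight w."""
    A = np.vstack([np.eye(len(x0)), w * G])
    b = np.concatenate([x0, w * v])
    return np.linalg.lstsq(A, b, rcond=None)[0]

x0 = np.array([1e-3, 5e-4])               # a-priori rupture rates (1/yr), illustrative
G = np.array([[8.0, 3.0]])                # average slip per rupture on one section (m)
v = np.array([0.01])                      # section slip rate to satisfy (m/yr)
print(closest_balanced_model(x0, G, v))   # rates nudged minimally to fit the slip rate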
Statistical tests of simple earthquake cycle models
NASA Astrophysics Data System (ADS)
DeVries, Phoebe M. R.; Evans, Eileen L.
2016-12-01
A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 1019 Pa s and ηM > 4.6 × 1020 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
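The hypothesis test itself is standard; below is a minimal sketch using SciPy's two-sample KS test at the paper's significance level, on synthetic placeholder samples rather than the actual fault observations or model predictions.

import numpy as np
from scipy.stats import ks_2samp

observed = np.random.lognormal(0.0, 0.5, 15)    # stand-in for the 15 fault observations
predicted = np.random.lognormal(0.2, 0.5, 500)  # stand-in for a model's predicted distribution
stat, p = ks_2samp(observed, predicted)         # KS statistic and p-value
rejected = p < 0.05                             # reject the model at alpha = 0.05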
Wrench tectonics in Abu Dhabi, United Arab Emirates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, M.; Mohamed, A.S.
1995-08-01
Recent studies of the geodynamics and tectonic history of the Arabian plate throughout geologic time have revealed that wrench forces played an important role in the structural generation and deformation of petroleum basins and reservoirs of the United Arab Emirates. The tectonic analysis of Abu Dhabi revealed that basin facies evolution was controlled by wrench tectonics; examples are the Pre-Cambrian salt basin and the Permo-Triassic and Jurassic basins. In addition, several sedimentary patterns were strongly influenced by wrench tectonics; the Lower Cretaceous Shuaiba platform margin and associated reservoirs are a good example. Wrench faults, difficult to identify by conventional methods, were examined from a regional perspective and through careful observation and assessment of many factors. Subsurface structural mapping and geoseismic cross-sections supported by outcrop studies and geomorphological features revealed a network of strike slip faults in Abu Dhabi. Structural modelling of these wrench forces, including the use of strain ellipses, was applied on both regional and local scales. This effort has helped in reinterpreting some structural settings; some oil fields were interpreted as en echelon buckle folds associated with NE/SW dextral wrench faults. Several flower structures were interpreted along NW/SE sinistral wrench faults which have significant hydrocarbon potential. Synthetic and antithetic strike slip faults and associated fracture systems have played a significant role in field development and reservoir management studies. Four field examples are discussed.
Novel Directional Protection Scheme for the FREEDM Smart Grid System
NASA Astrophysics Data System (ADS)
Sharma, Nitish
This research primarily deals with the design and validation of the protection system for a large scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and the power-electronics-based solid state transformer adds to the complexity of the protection. The FREEDM loop system has a fault current limiter, and in addition the solid state transformer (SST) limits the fault current at 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as unsuitable for long distances. Hence, a new protection scheme based on wireless communication is presented in this thesis. The use of wireless communication is extended to protect the large scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system is used to trigger the FID (fault isolation device), an electronic circuit breaker, to operate (switching off/opening the FIDs). The trip signal must also be received and accepted by the SST, which must block the SST operation immediately. A comprehensive protection system for the large scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.
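Purely as an illustration of the kind of trip decision a pilot protection scheme makes (not the FREEDM design), a percentage-differential check might look like the following; the pickup and restraint-slope values are placeholders.

def pilot_trip(i_local, i_remote, pickup=0.2, slope=0.3):
    """Trip when the differential current exceeds a restrained threshold (per unit).
    Currents are measured positive into the protected zone, so through-load sums to ~0."""
    i_diff = abs(i_local + i_remote)
    i_restraint = max(abs(i_local), abs(i_remote))
    return i_diff > pickup + slope * i_restraint

print(pilot_trip(1.0, -0.98))   # False: load current passing through the zone
print(pilot_trip(1.5, 0.4))     # True: both ends feed an internal fault -> trip FID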
NASA Technical Reports Server (NTRS)
Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.
1992-01-01
In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.
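A generic sketch of model-based fault detection by hypothesis testing, not the NASA Lewis implementation: residuals between measured and model-predicted outputs are compared against a chi-square bound, with the threshold and covariance values assumed for illustration.

import numpy as np
from scipy.stats import chi2

def fault_detected(y_meas, y_model, cov, alpha=0.05):
    """Flag a fault when the normalized residual exceeds the chi-square bound."""
    r = y_meas - y_model
    d2 = r @ np.linalg.solve(cov, r)            # squared Mahalanobis distance
    return d2 > chi2.ppf(1 - alpha, df=len(r))

print(fault_detected(np.array([1.2, 0.8]), np.array([1.0, 1.0]),
                     cov=np.diag([0.01, 0.01])))   # True: residual exceeds the 95% bound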
3D Model of the Neal Hot Springs Geothermal Area
Faulds, James E.
2013-12-31
The Neal Hot Springs geothermal system lies in a left-step in a north-striking, west-dipping normal fault system, consisting of the Neal Fault to the south and the Sugarloaf Butte Fault to the north (Edwards, 2013). The Neal Hot Springs 3D geologic model consists of 104 faults and 13 stratigraphic units. The stratigraphy is sub-horizontal to dipping <10 degrees and there is no predominant dip-direction. Geothermal production is exclusively from the Neal Fault south of, and within the step-over, while geothermal injection is into both the Neal Fault to the south of the step-over and faults within the step-over.
Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance
NASA Astrophysics Data System (ADS)
Wang, Jian; Yang, Zhenwei; Kang, Mei
2018-01-01
This paper attempts to apply the fault tree analysis method to corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyze the model, covering structural functions, probability importance, and related measures. The results show that fault tree analysis enables fast fault location and effective repair of the system. The analysis also shows that the fault tree method has guiding significance for reliability research and upgrading of the system.
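A minimal fault-tree evaluator, assuming independent basic events (OR gates combine as 1 - prod(1 - p), AND gates as prod(p)); the tree shape and probabilities are illustrative, not taken from the paper.

from math import prod

def top_event_probability(node):
    """Evaluate a nested fault tree of ("AND"/"OR", children) gates and basic-event probabilities."""
    if isinstance(node, (int, float)):     # leaf: basic event probability
        return node
    gate, children = node
    ps = [top_event_probability(c) for c in children]
    return prod(ps) if gate == "AND" else 1 - prod(1 - p for p in ps)

# e.g., top event = OR(link failure, AND(both redundant units fail))
tree = ("OR", [0.02, ("AND", [0.1, 0.1])])
print(top_event_probability(tree))         # -> approx 0.0298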
NASA Astrophysics Data System (ADS)
Nolan, S.; Jones, C. E.; Munro, R.; Norman, P.; Galloway, S.; Venturumilli, S.; Sheng, J.; Yuan, W.
2017-12-01
Hybrid electric propulsion aircraft are proposed to improve overall aircraft efficiency, enabling future rising demands for air travel to be met. The development of appropriate electrical power systems to provide thrust for the aircraft is a significant challenge due to the much higher required power generation capacity levels and complexity of the aero-electrical power systems (AEPS). The efficiency and weight of the AEPS are critical to ensure that the benefits of hybrid propulsion are not mitigated by the electrical power train. Hence it is proposed that for larger aircraft (~200 passengers) superconducting power systems are used to meet target power densities. Central to the design of the hybrid propulsion AEPS is a robust and reliable electrical protection and fault management system. It is known from previous studies that the choice of protection system may have a significant impact on the overall efficiency of the AEPS. Hence an informed design process is needed which considers the key trades between choice of cable and protection requirements. To date, the response to a rail-to-rail fault on a voltage source converter interfaced DC link in a superconducting power system has only been investigated using simulation models validated by theoretical values from the literature. This paper presents the experimentally obtained fault response for a variety of different types of superconducting tape for a rail-to-rail DC fault. The paper then uses these results as a platform to identify key trades between protection requirements and cable design, providing guidelines to enable future informed decisions to optimise hybrid propulsion electrical power system and protection design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Chen, Ting; Tan, Sirui
Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, there are no existing techniques available for directly and clearly imaging fault zones, particularly for steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that our new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our novel seismic inversion and migration imaging methods to a field 3D surface seismic dataset acquired at the Soda Lake geothermal field using Vibroseis sources. Our migration images of the Soda Lake geothermal field obtained using our seismic inversion and migration imaging algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation of the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results from the 3D surface seismic data of the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Our high-resolution seismic inversion and migration imaging results can help determine the optimal locations to drill wells for geothermal energy production and reduce the risk of geothermal exploration.
On the implementation of faults in finite-element glacial isostatic adjustment models
NASA Astrophysics Data System (ADS)
Steffen, Rebekka; Wu, Patrick; Steffen, Holger; Eaton, David W.
2014-01-01
Stresses induced in the crust and mantle by continental-scale ice sheets during glaciation have triggered earthquakes along pre-existing faults, commencing near the end of the deglaciation. In order to get a better understanding of the relationship between glacial loading/unloading and fault movement due to the spatio-temporal evolution of stresses, a commonly used model for glacial isostatic adjustment (GIA) is extended by including a fault structure. Solving this problem is enabled by development of a workflow involving three cascaded finite-element simulations. Each step has identical lithospheric and mantle structure and properties, but evolving stress conditions along the fault. The purpose of the first simulation is to compute the spatio-temporal evolution of rebound stress when the fault is tied together. An ice load with a parabolic profile and simple ice history is applied to represent glacial loading of the Laurentide Ice Sheet. The results of the first step describe the evolution of the stress and displacement induced by the rebound process. The second step in the procedure augments the results of the first, by computing the spatio-temporal evolution of total stress (i.e. rebound stress plus tectonic background stress and overburden pressure) and displacement with reaction forces that can hold the model in equilibrium. The background stress is estimated by assuming that the fault is in frictional equilibrium before glaciation. The third step simulates fault movement induced by the spatio-temporal evolution of total stress by evaluating fault stability in a subroutine. If the fault remains stable, no movement occurs; in case of fault instability, the fault displacement is computed. We show an example of fault motion along a 45°-dipping fault at the ice-sheet centre for a two-dimensional model. Stable conditions along the fault are found during glaciation and the initial part of deglaciation. Before deglaciation ends, the fault starts to move, and fault offsets of up to 22 m are obtained. A fault scarp at the surface of 19.74 m is determined. The fault is stable in the following time steps with a high stress accumulation at the fault tip. Along the upper part of the fault, GIA stresses are released in one earthquake.
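The stability check in the third step can be illustrated with a standard Mohr-Coulomb criterion; the friction, cohesion, and pressure values below are assumptions for this sketch, not the paper's parameters.

def fault_is_stable(tau, sigma_n, pore_pressure=0.0, mu=0.6, cohesion=0.0):
    """Stable while |tau| < cohesion + mu * (sigma_n - pore_pressure); compression positive.
    In a GIA workflow, tau and sigma_n would combine rebound, tectonic, and overburden stresses."""
    return abs(tau) < cohesion + mu * (sigma_n - pore_pressure)

print(fault_is_stable(tau=45.0, sigma_n=100.0))   # True: 45 < 60 MPa, no slip this step
print(fault_is_stable(tau=65.0, sigma_n=100.0))   # False: compute fault displacement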
The role of elasticity in simulating long-term tectonic extension
NASA Astrophysics Data System (ADS)
Olive, Jean-Arthur; Behn, Mark D.; Mittelstaedt, Eric; Ito, Garrett; Klein, Benjamin Z.
2016-05-01
While elasticity is a defining characteristic of the Earth's lithosphere, it is often ignored in numerical models of long-term tectonic processes in favour of a simpler viscoplastic description. Here we assess the consequences of this assumption on a well-studied geodynamic problem: the growth of normal faults at an extensional plate boundary. We conduct 2-D numerical simulations of extension in elastoplastic and viscoplastic layers using a finite difference, particle-in-cell numerical approach. Our models simulate a range of faulted layer thicknesses and extension rates, allowing us to quantify the role of elasticity on three key observables: fault-induced topography, fault rotation, and fault life span. In agreement with earlier studies, simulations carried out in elastoplastic layers produce rate-independent lithospheric flexure accompanied by rapid fault rotation and an inverse relationship between fault life span and faulted layer thickness. By contrast, models carried out with a viscoplastic lithosphere produce results that may qualitatively resemble the elastoplastic case, but depend strongly on the product of extension rate and layer viscosity U × ηL. When this product is high, fault growth initially generates little deformation of the footwall and hanging wall blocks, resulting in unrealistic, rigid block-offset in topography across the fault. This configuration progressively transitions into a regime where topographic decay associated with flexure is fully accommodated within the numerical domain. In addition, high U × ηL favours the sequential growth of multiple short-offset faults as opposed to a large-offset detachment. We interpret these results by comparing them to an analytical model for the fault-induced flexure of a thin viscous plate. The key to understanding the viscoplastic model results lies in the rate-dependence of the flexural wavelength of a viscous plate, and the strain rate dependence of the force increase associated with footwall and hanging wall bending. This behaviour produces unrealistic deformation patterns that can hinder the geological relevance of long-term rifting models that assume a viscoplastic rheology.
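The rate-independence of the elastic case can be illustrated with the standard thin-elastic-plate flexural parameter (a textbook result, not a formula taken from this paper); the material values are illustrative.

def elastic_flexural_parameter(Te, E=70e9, nu=0.25, drho=2300.0, g=9.81):
    """alpha = (4D / (drho * g))**0.25 with rigidity D = E Te**3 / (12 (1 - nu**2));
    note alpha depends only on the layer, not on extension rate, unlike a viscous plate."""
    D = E * Te**3 / (12 * (1 - nu**2))
    return (4 * D / (drho * g))**0.25

print(elastic_flexural_parameter(Te=1e3))   # ~3240 m for a 1-km-thick elastic layer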
NASA Astrophysics Data System (ADS)
Riegel, H. B.; Zambrano, M.; Jablonska, D.; Emanuele, T.; Agosta, F.; Mattioni, L.; Rustichelli, A.
2017-12-01
The hydraulic properties of fault zones depend upon the individual contributions of the damage zone and the fault core. The damage zone is generally characterized by means of fracture analysis and modelling implementing multiple approaches, for instance the discrete fracture network model, the continuum model, and the channel network model. Conversely, the fault core is more difficult to characterize because it is normally composed of fine-grained material generated by friction and wear. If the dimensions of the fault core allow it, porosity and permeability are normally studied by means of laboratory analysis; otherwise, they are studied by two-dimensional microporosity analysis and in situ measurements of permeability (e.g., with a micro-permeameter). In this study, a combined approach consisting of fracture modeling, three-dimensional microporosity analysis, and computational fluid dynamics was applied to characterize the hydraulic properties of fault zones. The studied fault zones crosscut a well-cemented heterolithic succession (sandstones and mudstones) and vary in terms of fault core thickness and composition, fracture properties, kinematics (normal or strike-slip), and displacement. These characteristics produce varied splay and fault core behavior. The alternation of sandstone and mudstone layers is responsible for the concurrent occurrence of brittle (fracturing) and ductile (clay smearing) deformation. When these alternating layers are faulted, they produce fault cores which act as conduits or barriers for fluid migration. When analyzing damage zones, careful field data acquisition and stochastic modeling were used to determine the hydraulic properties of the rock volume in relation to the surrounding, undamaged host rock. In the fault cores, the three-dimensional pore network quantitative analysis based on X-ray microtomography images includes porosity, pore connectivity, and specific surface area. In addition, the images were used to perform computational fluid simulation (Lattice-Boltzmann multi-relaxation-time method) and estimate the permeability. These results will be useful for understanding the deformation process and hydraulic properties across meter-scale damage zones.
An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems
NASA Astrophysics Data System (ADS)
Hieb, Jeffrey; Graham, James; Guan, Jian
This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.
NASA Astrophysics Data System (ADS)
Bezzeghoud, M.; Dimitro, D.; Ruegg, J. C.; Lammali, K.
1995-09-01
Since 1980, most of the papers published on the El Asnam earthquake concern the geological and seismological aspects of the fault zone. Only one paper, published by Ruegg et al. (1982), constrains the faulting mechanism with geodetic measurements. The purpose of this paper is to reexamine the faulting mechanism of the 1954 and 1980 events by modelling the associated vertical movements. For this purpose we used all available data, and particularly those of the levelling profiles along the Algiers-Oran railway that has been remeasured after each event. The comparison between 1905 and 1976 levelling data shows observed vertical displacement that could have been induced by the 1954 earthquake. On the basis of the 1954 and 1980 levelling data, we propose a possible model for the 1954 and 1980 fault systems. Our 1954 fault model is parallel to the 1980 main thrust fault, with an offset of 6 km towards the west. The 1980 dislocation model proposed in this study is based on a variable slip dislocation model and explains the observed surface break displacements given by Yielding et al. (1981). The Dewey (1991) and Avouac et al. (1992) models are compared with our dislocation model and discussed in this paper.
NASA Astrophysics Data System (ADS)
Donndorf, St.; Malz, A.; Kley, J.
2012-04-01
Cross section balancing is a generally accepted method for studying fault zone geometries. We show a method for the construction of structural 3D models of complex fault zones using a combination of gOcad modelling and balanced cross sections. In this work a 3D model of the Schlotheim graben in the Thuringian basin was created from serial, parallel cross sections and existing borehole data. The Thuringian Basin is originally a part of the North German Basin, from which it was separated by the Harz uplift in the Late Cretaceous. It comprises several parallel NW-trending inversion structures. The Schlotheim graben is one example of these inverted graben zones, whose structure poses special challenges to 3D modelling. The fault zone extends 30 km in the NW-SE direction and 1 km in the NE-SW direction. This project was split into two parts: data management and model building. To manage the fundamental data, a central database was created in ESRI's ArcGIS. A scripting interface was developed to handle the data exchange between the different steps of modelling. The first step is the pre-processing of the base data in ArcGIS, followed by cross section balancing with Midland Valley's Move software and finally the construction of the 3D model in Paradigm's gOcad. With the specific aim of constructing a 3D model based on cross sections, the functionality of the gOcad software had to be extended. These extensions include pre-processing functions to create a simplified and usable database for gOcad, construction functions to create surfaces based on linearly distributed data, and processing functions to create the 3D model from the different surfaces. In order to use the model for further geological and hydrological simulations, special requirements apply to the surface properties. The first requirement is a quality mesh, which contains triangles with maximized internal angles. To achieve that, an external meshing tool was included in gOcad. The second requirement is that intersection lines between two surfaces must be included in both surfaces and share nodes with them. To finish the modelling process, 3D balancing was performed to further improve the model quality.
Porosity variations in and around normal fault zones: implications for fault seal and geomechanics
NASA Astrophysics Data System (ADS)
Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra
2015-04-01
Porosity forms the building blocks for permeability, exerts a significant influence on the acoustic response of rocks to elastic waves, and fundamentally influences rock strength. And yet, published studies of porosity around fault zones or in faulted rock are relatively rare, and are hugely dominated by those of fault zone permeability. We present new data from detailed studies of porosity variations around normal faults in sandstone and limestone. We have developed an integrated approach to porosity characterisation in faulted rock exploiting different techniques to understand variations in the data. From systematic samples taken across exposed normal faults in limestone (Malta) and sandstone (Scotland), we combine digital image analysis on thin sections (optical and electron microscopy), core plug analysis (He porosimetry) and mercury injection capillary pressures (MICP). Our sampling includes representative material from undeformed protoliths and fault rocks from the footwall and hanging wall. Fault-related porosity can produce anisotropic permeability with a 'fast' direction parallel to the slip vector in a sandstone-hosted normal fault. Undeformed sandstones in the same unit exhibit maximum permeability in a sub-horizontal direction parallel to lamination in dune-bedded sandstones. Fault-related deformation produces anisotropic pores and pore networks with long axes aligned sub-vertically and this controls the permeability anisotropy, even under confining pressures up to 100 MPa. Fault-related porosity also has interesting consequences for the elastic properties and velocity structure of normal fault zones. Relationships between texture, pore type and acoustic velocity have been well documented in undeformed limestone. We have extended this work to include the effects of faulting on carbonate textures, pore types and P- and S-wave velocities (Vp, Vs) using a suite of normal fault zones in Malta, with displacements ranging from 0.5 to 90 m. Our results show a clear lithofacies control on the Vp-porosity and the Vs-Vp relationships for faulted limestones. Using porosity patterns quantified in naturally deformed rocks we have modelled their effect on the mechanical stability of fluid-saturated fault zones in the subsurface. Poroelasticity theory predicts that variations in fluid pressure could influence fault stability. Anisotropic patterns of porosity in and around fault zones can - depending on their orientation and intensity - lead to an increase in fault stability in response to a rise in fluid pressure, and a decrease in fault stability for a drop in fluid pressure. These predictions are the exact opposite of the accepted role of effective stress in fault stability. Our work has provided new data on the spatial and statistical variation of porosity in fault zones. Traditionally considered as an isotropic and scalar value, porosity and pore networks are better considered as anisotropic and as scale-dependent statistical distributions. The geological processes controlling the evolution of porosity are complex. Quantifying patterns of porosity variation is an essential first step in a wider quest to better understand deformation processes in and around normal fault zones. Understanding porosity patterns will help us to make more useful predictive tools for all agencies involved in the study and management of fluids in the subsurface.
Fault tolerant control of multivariable processes using auto-tuning PID controller.
Yu, Ding-Li; Chang, T K; Yu, Ding-Wen
2005-02-01
Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. A fault tolerant control scheme is proposed comprising an auto-tuning PID controller based on an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn system post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate for the effects of the faults, so that the control performance recovers from the degradation. The auto-tuning algorithm for the PID controller is derived with the Lyapunov method, and the model-predicted tracking error is therefore guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrates the applicability of the developed scheme to industrial processes.
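For orientation, a minimal discrete PID loop with a retuning hook is sketched below; the paper's Lyapunov-derived update law, driven by the EKF-trained network model, is not reproduced here, and the hook is a hypothetical interface.

class PID:
    """Discrete PID controller whose gains can be pushed by an external tuning law."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

    def retune(self, kp, ki, kd):
        # hook where a fault-tolerant auto-tuning scheme would push updated gains
        self.kp, self.ki, self.kd = kp, ki, kd

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.step(error=1.0)          # control action for the current tracking error
pid.retune(kp=3.0, ki=0.8, kd=0.1)  # e.g., after the model detects post-fault dynamics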
Spudich, P.; Guatteri, Mariagiovanna; Otsuki, K.; Minagawa, J.
1998-01-01
Dislocation models of the 1995 Hyogo-ken Nanbu (Kobe) earthquake derived by Yoshida et al. (1996) show substantial changes in direction of slip with time at specific points on the Nojima and Rokko fault systems, as do striations we observed on exposures of the Nojima fault surface on Awaji Island. Spudich (1992) showed that the initial stress, that is, the shear traction on the fault before the earthquake origin time, can be derived at points on the fault where the slip rake rotates with time if slip velocity and stress change are known at these points. From Yoshida's slip model, we calculated dynamic stress changes on the ruptured fault surfaces. To estimate errors, we compared the slip velocities and dynamic stress changes of several published models of the earthquake. The differences between these models had an exponential distribution, not gaussian. We developed a Bayesian method to estimate the probability density function (PDF) of initial stress from the striations and from Yoshida's slip model. Striations near Toshima and Hirabayashi give initial stresses of about 13 and 7 MPa, respectively. We obtained initial stresses of about 7 to 17 MPa at depths of 2 to 10 km on a subset of points on the Nojima and Rokko fault systems. Our initial stresses and coseismic stress changes agree well with postearthquake stresses measured by hydrofracturing in deep boreholes near Hirabayashi and Ogura on Awaji Island. Our results indicate that the Nojima fault slipped at very low shear stress, and fractional stress drop was complete near the surface and about 32% below depths of 2 km. Our results at depth depend on the accuracy of the rake rotations in Yoshida's model, which are probably correct on the Nojima fault but debatable on the Rokko fault. Our results imply that curved or cross-cutting fault striations can be formed in a single earthquake, contradicting a common assumption of structural geology.
Kinematics of the New Madrid seismic zone, central United States, based on stepover models
Pratt, Thomas L.
2012-01-01
Seismicity in the New Madrid seismic zone (NMSZ) of the central United States is generally attributed to a stepover structure in which the Reelfoot thrust fault transfers slip between parallel strike-slip faults. However, some arms of the seismic zone do not fit this simple model. Comparison of the NMSZ with an analog sandbox model of a restraining stepover structure explains all of the arms of seismicity as only part of the extensive pattern of faults that characterizes stepover structures. Computer models show that the stepover structure may form because differences in the trends of lower crustal shearing and inherited upper crustal faults make a step between en echelon fault segments the easiest path for slip in the upper crust. The models predict that the modern seismicity occurs only on a subset of the faults in the New Madrid stepover structure, that only the southern part of the stepover structure ruptured in the A.D. 1811–1812 earthquakes, and that the stepover formed because the trends of older faults are not the same as the current direction of shearing.
Forecast model for great earthquakes at the Nankai Trough subduction zone
Stuart, W.D.
1988-01-01
An earthquake instability model is formulated for recurring great earthquakes at the Nankai Trough subduction zone in southwest Japan. The model is quasistatic, two-dimensional, and has a displacement- and velocity-dependent constitutive law applied at the fault plane. A constant rate of fault slip at depth represents forcing due to relative motion of the Philippine Sea and Eurasian plates. The model simulates fault slip and stress for all parts of repeated earthquake cycles, including post-, inter-, pre- and coseismic stages. Calculated ground uplift is in agreement with most of the main features of elevation changes observed before and after the M=8.1 1946 Nankaido earthquake. In model simulations, accelerating fault slip has two time-scales. The first time-scale is several years long and is interpreted as an intermediate-term precursor. The second time-scale is a few days long and is interpreted as a short-term precursor. Accelerating fault slip on both time-scales causes anomalous elevation changes of the ground surface over the fault plane of 100 mm or less within 50 km of the fault trace. © 1988 Birkhäuser Verlag.
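A toy stick-slip recurrence sketch, far simpler than the displacement- and velocity-dependent constitutive law used here: steady loading raises fault stress to a static threshold, and each "earthquake" drops it to a dynamic level; all parameter values are illustrative.

import numpy as np

def earthquake_cycle(load_rate=0.01, tau_static=5.0, tau_dynamic=1.0,
                     dt=1.0, steps=5000):
    """Return recurrence intervals from a threshold stick-slip loading cycle."""
    tau, events = 0.0, []
    for t in range(steps):
        tau += load_rate * dt              # interseismic tectonic loading
        if tau >= tau_static:              # coseismic failure and stress drop
            events.append(t * dt)
            tau = tau_dynamic
    return np.diff(events)

print(earthquake_cycle())                  # constant recurrence intervals under steady loading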
NASA Technical Reports Server (NTRS)
Vitali, Roberto; Lutomski, Michael G.
2004-01-01
The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions but, more importantly, to compare different operational and management options to determine the lowest risk option and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probabilities of failure of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including the actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, or power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen to be consistent with the level at which statistically meaningful data could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model. In other words, the failure of a spring within a valve was considered a failure of the valve itself.
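The Bayesian update behind such failure-rate estimates can be sketched with the standard conjugate Gamma-Poisson pair (the ISS PRA's actual priors and data are not reproduced here): with k observed failures over exposure time T, a Gamma prior on the failure rate stays Gamma.

from scipy.stats import gamma

def posterior_failure_rate(alpha0, beta0, k, T):
    """Gamma(alpha0, rate beta0) prior + Poisson(k failures in exposure T) likelihood
    -> Gamma(alpha0 + k, rate beta0 + T) posterior on the failure rate."""
    a, b = alpha0 + k, beta0 + T
    return gamma(a, scale=1.0 / b)

# Illustrative ORU: weak prior, 2 on-orbit failures over 50,000 operating hours
post = posterior_failure_rate(alpha0=0.5, beta0=1e4, k=2, T=5e4)
print(post.mean(), post.interval(0.9))   # point estimate and 90% credible bounds (1/hour)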
NASA Astrophysics Data System (ADS)
Bertrand, Lionel; Géraud, Yves; Diraison, Marc; Damy, Pierre-Clément
2017-04-01
The Scientific Interest Group (GIS) GEODENERGIES, with the REFLET project, aims to develop a geological and reservoir model for fault zones, which are the main targets for deep geothermal prospects in the West European Rift system. In this project, several areas are studied with an integrated methodology combining field studies, borehole and geophysical data acquisition, and 3D modelling. In this study, we present the results of reservoir rock analogue characterization for one of these prospects in the Valence Graben (Eastern France). The approach is a structural and petrophysical characterization of the rocks outcropping at the shoulders of the rift in order to model the buried targeted fault zone. The reservoir rocks are composed of fractured granites, gneiss and schists of the Hercynian basement of the graben. The matrix porosity, permeability, P-wave velocities and thermal conductivities have been characterized on hand samples from fault zones at the outcrop. Furthermore, the fault organization has been mapped with the aim of identifying the characteristic fault orientation, spacing and width. Fracture statistics such as orientation, density, and length have been measured in the damage zones and unfaulted blocks with respect to the regional fault pattern. All these data have been included in a double-porosity reservoir model. The field study shows that the fault pattern in the outcrop area can be classified into fault orders: at the first-order scale, the distribution of the larger faults controls the first-order structural and lithological organization. Between these faults, the first-order blocks are divided by second- and third-order faults, smaller structures with characteristic spacing and width. Third-order fault zones in granitic rocks show significant porosity development in the fault cores, up to 25% in the most locally altered material, whereas the damage zones mostly develop fracture permeability. In the gneiss and schist units, matrix porosity and permeability development is mainly controlled by enhanced microcrack density in the fault zone, unlike the granitic rocks, where it is mostly mineral alteration. Because of the much larger grain size in the gneiss, crack opening is greater than in the schist samples; thus, the matrix permeability can be two orders of magnitude higher in the gneiss than in the schists (up to 10 mD for gneiss versus 0.1 mD for schists at the same porosity of about 5%). Combining the regional data with the fault pattern and the fracture and matrix porosity and permeability, we are able to construct a double-porosity model suitable for the prospected graben. This model, combined with seismic data acquisition, is a predictive tool for flow modelling in the buried reservoir and helps in the prediction of borehole targets and design in the graben.
NASA Astrophysics Data System (ADS)
Goto, J.; Moriya, T.; Yoshimura, K.; Tsuchi, H.; Karasaki, K.; Onishi, T.; Ueta, K.; Tanaka, S.; Kiho, K.
2010-12-01
The Nuclear Waste Management Organization of Japan (NUMO), in collaboration with Lawrence Berkeley National Laboratory (LBNL), has carried out a project since 2007 to develop an efficient and practical methodology to characterize the hydrologic properties of faults, exclusively for the early stage of siting a deep underground repository. A preliminary flowchart for the characterization program and a classification scheme for fault hydrology based on geological features have been proposed. These have been tested through the field characterization program on the Wildcat Fault in Berkeley, California. The Wildcat Fault is a relatively large non-active strike-slip fault that is believed to be a subsidiary of the active Hayward Fault. Our classification scheme assumes contrasting hydrologic features between the linear northern part and the split/spread southern part of the Wildcat Fault. The field characterization program to date has been concentrated in and around the LBNL site on the southern part of the fault. Several lines of electrical and reflection seismic surveys, and subsequent trench investigations, have revealed the approximate distribution and near-surface features of the Wildcat Fault (see also Onishi, et al. and Ueta, et al.). Three 150 m deep boreholes, WF-1 to WF-3, have been drilled on a line normal to the trace of the fault in the LBNL site. Two vertical holes were placed to characterize the undisturbed Miocene sedimentary formations on the eastern and western sides of the fault (WF-1 and WF-2, respectively). WF-2 on the western side intersected the rock formation that was expected only in WF-1, as well as several faults of various intensities. Therefore, WF-3, originally planned as an inclined hole to penetrate the fault, was replaced by a vertical hole further to the west. It again encountered unexpected rocks and faults. Preliminary results of in-situ hydraulic tests suggested that the transmissivity of WF-1 is ten to one hundred times higher than that of WF-2. The monitoring of hydraulic pressure displayed different head distribution patterns between WF-1 and WF-2 (see also Karasaki, et al.). Based on these results, three hypotheses on the distribution of the Wildcat Fault were proposed: (a) a vertical fault in between WF-1 and WF-2, (b) a more gently dipping fault intersected in WF-2 and WF-3, and (c) a wide zone of faults extending between WF-1 and WF-3. At present, WF-4, an inclined hole intended to penetrate the possible (eastern?) master fault, is being drilled to test these hypotheses. After the WF-4 investigation, hydrologic and geochemical analyses and modeling of the southern part of the fault will be carried out. A simpler field characterization program will also be carried out on the northern part of the fault. Finally, all the results will be synthesized to improve the comprehensive methodology.
An approach to secure weather and climate models against hardware faults
NASA Astrophysics Data System (ADS)
Düben, Peter D.; Dawson, Andrew
2017-03-01
Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
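A minimal sketch of the backup-grid mechanism described above (the names, tolerance, and 2:1 coarsening factor are assumptions, not the authors' implementation):

```python
import numpy as np

def coarsen(field):
    """Average 2x2 blocks to form the coarse backup copy of a field."""
    return 0.25 * (field[::2, ::2] + field[1::2, ::2]
                   + field[::2, 1::2] + field[1::2, 1::2])

def check_and_restore(field, backup, tol):
    """Flag blocks that disagree with the backup grid and overwrite them."""
    bad_blocks = np.abs(coarsen(field) - backup) > tol
    if bad_blocks.any():
        bad = np.repeat(np.repeat(bad_blocks, 2, axis=0), 2, axis=1)
        restored = np.repeat(np.repeat(backup, 2, axis=0), 2, axis=1)
        field[bad] = restored[bad]    # continue the run from backup values
    return field, bool(bad_blocks.any())

h = np.ones((64, 64))           # prognostic field, e.g. layer thickness
h_backup = coarsen(h)           # coarse copy held on the backup grid
h[10, 10] = 1e30                # simulated bit flip from a hardware fault
h, detected = check_and_restore(h, h_backup, tol=1.0)
print(detected, h[10, 10])      # True 1.0 -> fault caught, field repaired
```

In a real dynamical core the backup copy would presumably be refreshed at regular intervals, so a restore loses only the increments since the last refresh rather than the whole run.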
Coulomb Stress Accumulation along the San Andreas Fault System
NASA Technical Reports Server (NTRS)
Smith, Bridget; Sandwell, David
2003-01-01
Stress accumulation rates along the primary segments of the San Andreas Fault system are computed using a three-dimensional (3-D) elastic half-space model with realistic fault geometry. The model is developed in the Fourier domain by solving for the response of an elastic half-space due to a point vector body force and analytically integrating the force from a locking depth to infinite depth. This approach is then applied to the San Andreas Fault system using published slip rates along 18 major fault strands of the fault zone. GPS-derived horizontal velocity measurements spanning the entire 1700 x 200 km region are then used to solve for the apparent locking depth along each primary fault segment. This simple model fits the data remarkably well (2.43 mm/yr RMS misfit), although some discrepancies occur in the Eastern California Shear Zone. The model also predicts vertical uplift and subsidence rates that are in agreement with independent geologic and geodetic estimates. In addition, shear and normal stresses along the major fault strands are used to compute the Coulomb stress accumulation rate. As a result, we find earthquake recurrence intervals along the San Andreas Fault system to be inversely proportional to the Coulomb stress accumulation rate, in agreement with typical coseismic stress drops of 1-10 MPa. This 3-D deformation model can ultimately be extended to include both time-dependent forcing and viscoelastic response.
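The Coulomb quantity referred to here has a standard definition; the sketch below uses it with hypothetical rates (not the paper's Fourier-domain solution) to show how a recurrence interval scales inversely with the accumulation rate for a typical coseismic stress drop:

```python
def coulomb_rate(shear_rate, normal_rate, mu_eff=0.6):
    """Coulomb stress accumulation rate, dCFS/dt = dtau/dt + mu' * dsigma_n/dt,
    in MPa/yr, with normal stress positive in tension (unclamping promotes
    failure). mu_eff is an assumed effective friction coefficient."""
    return shear_rate + mu_eff * normal_rate

rate = coulomb_rate(0.030, -0.005)  # hypothetical segment, slightly clamping
stress_drop = 3.0                   # MPa, within the 1-10 MPa range cited
print(f"implied recurrence ~ {stress_drop / rate:.0f} yr")  # ~111 yr
```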
NASA Astrophysics Data System (ADS)
Lien, Tzuyi; Cheng, Ching-Chung; Hwang, Cheinway; Crossley, David
2014-09-01
We develop a new hydrology- and gravimetry-based method to assess whether or not a local fault may be active. We take advantage of an existing superconducting gravimeter (SG) station and a comprehensive groundwater network in Hsinchu to apply the method to the Hsinchu Fault (HF) across the Hsinchu Science Park, whose industrial output accounts for 10% of Taiwan's gross domestic product. The HF is suspected to pose seismic hazards to the park, but its existence and structure are not clear. The a priori geometry of the HF is translated into boundary conditions imposed in the hydrodynamic model. By varying the fault's location and depth, and by including a secondary wrench fault, we construct five hydrodynamic models to estimate groundwater variations, which are evaluated by comparing groundwater levels and SG observations. The results reveal that the HF contains a low hydraulic conductivity core and significantly impacts groundwater flow in the aquifers. Imposing the fault boundary conditions leads to about a 63-77% reduction in the differences between modeled and observed values (both water level and gravity). The test with fault depth shows that the HF's most recent slip occurred at the beginning of the Holocene, supplying a necessary (but not sufficient) condition for the HF to be currently active. A portable SG can act as a virtual borehole well for model assessment at critical locations of a suspected active fault.
Effects induced by an earthquake on its fault plane:a boundary element study
NASA Astrophysics Data System (ADS)
Bonafede, Maurizio; Neri, Andrea
2000-04-01
Mechanical effects left by a model earthquake on its fault plane, in the post-seismic phase, are investigated employing the `displacement discontinuity method'. Simple crack models, characterized by the release of a constant, unidirectional shear traction, are investigated first. Both slip components (parallel and normal to the traction direction) are found to be non-vanishing and to depend on fault depth, dip, aspect ratio and fault plane geometry. The rake of the slip vector is similarly found to depend on depth and dip. The fault plane is found to suffer some small rotation and bending, which may be responsible for the indentation of a transform tectonic margin, particularly if cumulative effects are considered. Very significant normal stress components are left over the shallow portion of the fault surface after an earthquake: these are tensile for thrust faults, compressive for normal faults, and are typically comparable in size to the stress drop. These normal stresses can easily be computed for more realistic seismic source models, in which a variable slip is assigned; normal stresses are induced in these cases too, and positive shear stresses may even be induced on the fault plane in regions of high slip gradient. Several observations can be explained by the present model: low-dip thrust faults and high-dip normal faults are found to be facilitated, according to the Coulomb failure criterion, in repetitive earthquake cycles; the shape of dip-slip faults near the surface is predicted to be upward-concave; and the shallower aftershock activity generally found in the hanging block of a thrust event can be explained by `unclamping' mechanisms.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-En; Huang, Wen-Jeng; Chang, Ping-Yu; Lo, Wei
2016-04-01
An unmanned aerial vehicle (UAV) with a digital camera is an efficient tool for geologists to investigate structural patterns in the field. By setting ground control points (GCPs), UAV-based photogrammetry provides high-quality, quantitative products such as a digital surface model (DSM) and orthomosaic and elevational images. We combine an elevational outcrop 3D model and a digital surface model to analyze the structural characteristics of the Sanyi active fault in the Houli-Fengyuan area, western Taiwan. Furthermore, we collect resistivity survey profiles and drilling core data in the Fengyuan District in order to constrain the subsurface fault geometry. The ground sample distance (GSD) of the elevational outcrop 3D model is 3.64 cm/pixel in this study. Our preliminary result shows that five fault branches are distributed across a 500-meter-wide zone on the elevational outcrop, and the full width of the Sanyi fault zone is likely much greater than this value. Together with our field observations, we propose a structural evolution model to demonstrate how the five fault branches developed. The resistivity survey profiles show that Holocene gravel was disturbed by the Sanyi fault in the Fengyuan area.
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the described seismicity. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault-system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, and geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault-system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. One (1) is governed by rate- and state-dependent friction, allowing three evolutionary stages on independent fault patches; the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of the physical parameters of the two approaches.
System Modeling and Diagnostics for Liquefying-Fuel Hybrid Rockets
NASA Technical Reports Server (NTRS)
Poll, Scott; Iverson, David; Ou, Jeremy; Sanderfer, Dwight; Patterson-Hine, Ann
2003-01-01
A Hybrid Combustion Facility (HCF) was recently built at NASA Ames Research Center to study the combustion properties of a new fuel formulation that burns approximately three times faster than conventional hybrid fuels. Researchers at Ames working in the area of Integrated Vehicle Health Management (IVHM) recognized a good opportunity to apply IVHM techniques to a candidate technology for next-generation launch systems. Five tools were selected to examine various IVHM techniques for the HCF. Three of the tools, TEAMS (Testability Engineering and Maintenance System), L2 (Livingstone2), and RODON, are model-based reasoning (or diagnostic) systems. The two other tools in this study, ICS (Interval Constraint Simulator) and IMS (Inductive Monitoring System), do not attempt to isolate the cause of a failure but may be used for fault detection. Models of varying scope and completeness were created, both qualitative and quantitative. In each of the models, the structure and behavior of the physical system are captured. In the qualitative models, the temporal aspects of the system behavior and the abstraction of sensor data are handled outside of the model and require the development of additional code. The quantitative model also requires processing code, though it is less extensive. Examples of fault diagnoses are given.
Material and Stress Rotations: Anticipating the 1992 Landers, CA Earthquake
NASA Astrophysics Data System (ADS)
Nur, A. M.
2014-12-01
"Rotations make nonsense of the two-dimensional reconstructions that are still so popular among structural geologists". (McKenzie, 1990, p. 109-110) I present a comprehensive tectonic model for the strike-slip fault geometry, seismicity, material rotation, and stress rotation, in which new, optimally oriented faults can form when older ones have rotated about a vertical axis out of favorable orientations. The model was successfully tested in the Mojave region using stress rotation and three independent data sets: the alignment of epicenters and fault plane solutions from the six largest central Mojave earthquakes since 1947, material rotations inferred from paleomagnetic declination anomalies, and rotated dike strands of the Independence dike swarm. The model led not only to the anticipation of the 1992 M7.3 Landers, CA earthquake but also accounts for the great complexity of the faulting and seismicity of this event. The implication of this model for crustal deformation in general is that rotations of material (faults and the blocks between them) and of stress provide the key link between the complexity of faults systems in-situ and idealized mechanical theory of faulting. Excluding rotations from the kinematical and mechanical analysis of crustal deformation makes it impossible to explain the complexity of what geologists see in faults, or what seismicity shows us about active faults. However, when we allow for rotation of material and stress, Coulomb's law becomes consistent with the complexity of faults and faulting observed in situ.
Achieving Agreement in Three Rounds with Bounded-Byzantine Faults
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
A three-round algorithm is presented that guarantees agreement in a system of K greater than or equal to 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport, Shostak, and Pease, is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
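For context, the Oral Message building block the algorithm extends can be simulated in a few lines. This is a sketch of classic OM(1) from Lamport, Shostak, and Pease, with faulty nodes modeled as relaying an arbitrary wrong value; it is not the paper's bounded-fault, three-round variant:

```python
from collections import Counter

def om1(commander_value, n_lieutenants, faulty=frozenset(), lie="retreat"):
    """OM(1): the commander broadcasts, each lieutenant relays what it heard,
    and each lieutenant decides by majority over all values received."""
    # Round 1: a loyal commander sends the same value to every lieutenant
    # (a faulty commander could instead send inconsistent values).
    received = [commander_value] * n_lieutenants
    decisions = []
    for i in range(n_lieutenants):
        # Round 2: lieutenant i hears each other lieutenant's relayed value;
        # faulty lieutenants relay a lie.
        relayed = [lie if j in faulty else received[j]
                   for j in range(n_lieutenants) if j != i]
        decisions.append(Counter(relayed + [received[i]]).most_common(1)[0][0])
    return decisions

# 4 lieutenants with 1 traitor: every loyal lieutenant still decides "attack",
# consistent with the n >= 3f+1 bound.
print(om1("attack", 4, faulty={2}))
```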
Length-Displacement Scaling of Lunar Thrust Faults and the Formation of Uphill-Facing Scarps
NASA Astrophysics Data System (ADS)
Hiesinger, Harald; Roggon, Lars; Hetzel, Ralf; Clark, Jaclyn D.; Hampel, Andrea; van der Bogert, Carolyn H.
2017-04-01
Lobate scarps are straight to curvilinear positive-relief landforms that occur on all terrestrial bodies [e.g., 1-3]. They are the surface manifestation of thrust faults that cut through and offset the upper part of the crust. Fault scarps on planetary surfaces provide the opportunity to study the growth of faults under a wide range of environmental conditions (e.g., gravity, temperature, pore pressure) [4]. We studied four lunar thrust-fault scarps (Simpelius-1, Morozov (S1), Fowler, Racah X-1) ranging in length from 1.3 km to 15.4 km [5] and found that their maximum total displacements are linearly correlated with length over one order of magnitude. We propose that during the progressive accumulation of slip, lunar faults propagate laterally and increase in length. On the basis of our measurements, the ratio of maximum displacement, D, to fault length, L, ranges from 0.017 to 0.028 with a mean value of 0.023 (or 2.3%). This is an order of magnitude higher than the value of 0.1% derived from theoretical considerations [4], and about twice as large as the value of 0.012-0.013 estimated by [6,7]. Our results, in addition to recently published findings for other lunar scarps [2,8], indicate that the D/L ratios of lunar thrust faults are similar to those of faults on Mercury and Mars [e.g., 1, 9-11], and almost as high as the average D/L ratio of 3% for faults on Earth [16,23]. Three of the investigated thrust-fault scarps (Simpelius-1, Morozov (S1), Fowler) are uphill-facing scarps generated by slip on faults that dip in the same direction as the local topography. Thrust faults with such a geometry are common (about 60% of 97 studied scarps) on the Moon [e.g., 2,5,7]. To test our hypothesis that the surface topography plays an important role in the formation of uphill-facing fault scarps by controlling the vertical load on a fault plane, we simulated thrust faulting and its relation to topography with two-dimensional finite-element models using the commercial code ABAQUS (version 6.14). Our model results indicate that the onset of faulting in our 200-km-long model is a function of the surface topography [5]. Our numerical model indicates that uphill-facing scarps form earlier and grow faster than downhill-facing scarps under otherwise similar conditions. Thrust faults that dip in the same general direction as the topography (forming an uphill-facing scarp) start to slip 4.2 Ma after the onset of shortening and reach a total slip of 5.8 m after 70 Ma. In contrast, slip on faults that leads to the generation of a downhill-facing scarp initiates much later (i.e., after 20 Ma of elapsed model time) and attains a total slip of only 1.8 m in 70 Ma. If the surface of the model is horizontal, faulting on both fault structures starts after 4.4 Ma, but proceeds at a lower rate than for the fault that generated the uphill-facing scarp. Although the absolute ages for fault initiation (as well as the total fault slip) depend on the arbitrarily chosen shortening rate (as well as on the size of the model and the elastic parameters), this relative timing of fault activation was consistently observed irrespective of the chosen shortening rate. Thus, the model results demonstrate that, all other factors being equal, the differing weight of the hanging wall above the two modeled faults is responsible for the different timing of fault initiation and the difference in total slip.
In conclusion, we present new quantitative estimates of the maximum total displacements of lunar lobate scarps and offer a new model to explain the origin of uphill-facing scarps that is also of importance for understanding the formation of the Lee-Lincoln scarp at the Apollo 17 landing site. [1] Watters et al., 2000, Geophys. Res. Lett. 27; [2] Williams et al., 2013, J. Geophys. Res. 118; [3] Massironi et al., 2015, Encycl. Planet. Landf., pp. 1255-1262; [4] Schultz et al., 2006, J. Struct. Geol. 28; [5] Roggon et al. (2017) Icarus, in press; [6] Watters and Johnson, 2010, Planetary Tectonics, pp. 121-182; [7] Banks et al., 2012, J. Geophys. Res. 117; [8] Banks et al., 2013, LPSC 44, 3042; [9] Hauber and Kronberg, 2005, J. Geophys. Res. 110; [10] Hauber et al., 2013, EPSC2013-987; [11] Byrne et al., 2014, Nature Geosci. 7
Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results
NASA Astrophysics Data System (ADS)
Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.
2013-12-01
Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance to properly quantify and mitigate seismic hazards. Estimates of earthquake probability are complicated by the uncertainty over whether a rupture will stop at or jump a fault step-over, which affects both the magnitude and frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al. (2013) on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we employ an innovative approach of incorporating this fine-scale fault structure, defined through seismological, geologic and geodetic means, in the physics-based earthquake simulator RSQSim to explore the effects of fine-scale structures on stress transfer and rupture propagation and to examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distributions and allows for examination of the mechanisms that control them. We investigate how the spatial and temporal distribution of aftershocks is affected by changes to model parameters such as shear and normal stress, rate-and-state frictional properties, fault geometry, and slip rate.
Vibration signal models for fault diagnosis of planet bearings
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.
2016-05-01
Rolling element bearings are key components of planetary gearboxes. Among them, the motion of planet bearings is very complex, encompassing both spinning and revolution. Therefore, planet bearing vibrations are highly intricate, and their fault characteristics are completely different from those of the fixed-axis case, making planet bearing fault diagnosis a difficult topic. In order to address this issue, we derive explicit equations for calculating the characteristic frequencies of outer race, rolling element and inner race faults, considering the complex motion of planet bearings. We also develop a planet bearing vibration signal model for each fault case, considering the modulation effects of load zone passing, the time-varying angle between the gear pair mesh and the fault-induced impact force, and the time-varying vibration transfer path. Based on the developed signal models, we derive explicit equations for the Fourier spectrum in each fault case and summarize the corresponding vibration spectral characteristics. The theoretical derivations are illustrated by numerical simulation and further validated experimentally, and all three fault cases (i.e. outer race, rolling element and inner race localized faults) are diagnosed.
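For reference, the classical fixed-axis defect frequencies, which the paper generalizes to the spin-plus-revolution kinematics of planet bearings, can be computed as follows (the bearing geometry values are hypothetical):

```python
import math

def bearing_fault_frequencies(f_shaft, n_rollers, d_roller, d_pitch,
                              contact_deg=0.0):
    """Return (BPFO, BPFI, BSF) in Hz for a fixed-axis bearing whose inner
    race rotates at f_shaft Hz; these are the standard textbook formulas,
    not the paper's planet-bearing expressions."""
    r = (d_roller / d_pitch) * math.cos(math.radians(contact_deg))
    bpfo = 0.5 * n_rollers * f_shaft * (1.0 - r)      # outer race defect
    bpfi = 0.5 * n_rollers * f_shaft * (1.0 + r)      # inner race defect
    bsf = 0.5 * (d_pitch / d_roller) * f_shaft * (1.0 - r * r)  # roller spin
    return bpfo, bpfi, bsf

# Hypothetical bearing: 25 Hz shaft, 9 rollers of 7.9 mm, 39 mm pitch diameter.
print(bearing_fault_frequencies(25.0, 9, 7.9, 39.0))
```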
A fault injection experiment using the AIRLAB Diagnostic Emulation Facility
NASA Technical Reports Server (NTRS)
Baker, Robert; Mangum, Scott; Scheper, Charlotte
1988-01-01
The preparation for, conduct of, and results of a simulation-based fault injection experiment conducted using the AIRLAB Diagnostic Emulation facility are described. One objective of this experiment was to determine the effectiveness of the diagnostic self-test sequences used to uncover latent faults in a logic network providing the key fault tolerance features for a flight control computer. Another objective was to develop methods, tools, and techniques for conducting the experiment. More than 1600 faults were injected into a logic gate level model of the Data Communicator/Interstage (C/I). For each fault injected, diagnostic self-test sequences consisting of over 300 test vectors were supplied to the C/I model as inputs. For each test vector within a test sequence, the outputs from the C/I model were compared to the outputs of a fault-free C/I. If the outputs differed, the fault was considered detectable for the given test vector. These results were then analyzed to determine the effectiveness of the test sequences. The results established the coverage of the self-test diagnostics, identified areas in the C/I logic where the tests did not expose faults, and suggested opportunities for reducing fault latency.
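The detectability bookkeeping described above reduces to a simple loop: replay the test vectors against a fault-free reference and count a fault as detected when any vector makes the outputs diverge. A minimal sketch, with a hypothetical stand-in for the gate-level C/I model:

```python
def coverage(model, fault_list, test_vectors):
    """Fraction of injected faults exposed by at least one test vector."""
    golden = [model(v, fault=None) for v in test_vectors]  # fault-free outputs
    detected = sum(
        any(model(v, fault=f) != good
            for v, good in zip(test_vectors, golden))
        for f in fault_list)
    return detected / len(fault_list)

def toy_model(vector, fault):
    """Two-input AND gate with an optional stuck-at-0 fault on input a."""
    a, b = vector
    if fault == "a_stuck_0":
        a = 0
    return a and b

print(coverage(toy_model, ["a_stuck_0"], [(1, 1), (0, 1)]))  # 1.0: detected
```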
Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hukerikar, Saurabh; Engelmann, Christian
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore, the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems, future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexity. As a result, the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to the newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.
Evolution of Pull-Apart Basins and Their Scale Independence
NASA Astrophysics Data System (ADS)
Aydin, Atilla; Nur, Amos
1982-02-01
Pull-apart basins, or rhomb grabens, and horsts along major strike-slip fault systems in the world are generally associated with horizontal slip along faults. A simple model suggests that the width of the rhombs is controlled by the initial fault geometry, whereas the length increases with increasing fault displacement. We have tested this model by analyzing the shapes of 70 well-defined rhomb-like pull-apart basins and pressure ridges, ranging from tens of meters to tens of kilometers in length, associated with several major strike-slip faults in the western United States, Israel, Turkey, Iran, Guatemala, Venezuela, and New Zealand. In conflict with the model, we find that the length-to-width ratio of these basins is a constant value of approximately 3; these basins become wider as they grow longer with increasing fault offset. Two possible mechanisms responsible for the increase in width are suggested: (1) coalescence of neighboring rhomb grabens as each graben increases its length, and (2) formation of fault strands parallel to the existing ones when large displacements need to be accommodated. The processes of formation and growth of new fault strands promote interaction among the new faults and between the new and preexisting faults on a larger scale. Increased displacement causes the width of the fault zone to increase, resulting in wider pull-apart basins.
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy: the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
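A hedged sketch of the analytical-redundancy core of this approach: compare measurements against estimator predictions and flag samples whose residual exceeds a threshold. The fixed threshold below is a simplification; the paper's contribution is making the estimator robust via mixed structured singular value theory and grading fault severity with fuzzy logic, neither of which this toy reproduces:

```python
import numpy as np

def detect_fault(measurements, predictions, threshold):
    """Indices of samples whose residual norm exceeds the threshold."""
    residuals = np.linalg.norm(measurements - predictions, axis=1)
    return np.flatnonzero(residuals > threshold)

t = np.arange(100)
truth = np.column_stack([np.sin(0.1 * t), np.cos(0.1 * t)])  # estimator output
rng = np.random.default_rng(0)
meas = truth + 0.01 * rng.standard_normal(truth.shape)       # noisy sensors
meas[60:] += np.array([0.5, 0.0])     # simulated sensor bias fault at t = 60
print(detect_fault(meas, truth, 0.1)) # flags samples from index 60 onward
```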
Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems
NASA Technical Reports Server (NTRS)
Fitz, Rhonda
2017-01-01
As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing this database for adaptive, risk-informed FM assurance that critical software systems will safely and securely protect against faults and respond to ACs in order to achieve successful missions.
NASA Astrophysics Data System (ADS)
Clausen, O. R.; Egholm, D. L.; Wesenberg, R.
2012-04-01
Salt deformation has been the topic of numerous studies through the 20th century and up to the present because of the close relation between commercial hydrocarbons and the salt structure provinces of the world (Hudec & Jackson, 2007). The fault distribution in sediments above salt structures influences, among other things, productivity, due to the segmentation of the reservoir (Stewart 2006). 3D seismic data above salt structures can map such fault patterns in great detail, and studies have shown that a variety of fault patterns exists. Yet most patterns fall between two end members: concentric and radiating fault patterns. Here we use a modified version of the numerical spring-slider model introduced by Malthe-Sørenssen et al. (1998a) for simulating the emergence of small-scale faults and fractures above a rising salt structure. The three-dimensional spring-slider model enables us to control the rheology of the deforming overburden, the mechanical coupling between the overburden and the underlying salt, as well as the kinematics of the moving salt structure. In this presentation, we demonstrate how the horizontal component of the salt motion influences the fracture patterns within the overburden. The modeling shows that purely vertical movement of the salt introduces a mesh of concentric normal faults in the overburden, and that the frequency of radiating faults increases with the amount of lateral movement across the salt-overburden interface. The two end-member fault patterns (concentric vs. radiating) can thus be linked to two different styles of salt movement: i) the vertical rise of a salt indenter and ii) the inflation of a 'salt balloon' beneath the deformed strata. The results are in accordance with published analogue and theoretical models, as well as natural systems, and the model may, when used appropriately, provide new insight into how the internal dynamics of the salt in a structure control the generation of fault patterns above the structure. The model is thus an important contribution to the understanding of small-scale faults, which may be unresolved by seismic data, when hydrocarbon production from reservoirs located above salt structures is optimized.
NASA Astrophysics Data System (ADS)
Daout, S.; Jolivet, R.; Lasserre, C.; Doin, M.-P.; Barbot, S.; Tapponnier, P.; Peltzer, G.; Socquet, A.; Sun, J.
2016-04-01
Oblique convergence across Tibet leads to slip partitioning, with the coexistence of strike-slip, normal and thrust motion on major fault systems. A key point is to understand and model how faults interact and accumulate strain at depth. Here, we extract ground deformation across the Haiyuan Fault restraining bend, at the northeastern boundary of the Tibetan plateau, from Envisat radar data spanning the 2001-2011 period. We show that the complexity of the surface displacement field can be explained by the partitioning of a uniform deep-seated convergence. Mountains and sand dunes in the study area make the radar data processing challenging and require the latest developments in processing procedures for Synthetic Aperture Radar interferometry. The processing strategy is based on a small baseline approach. Before unwrapping, we correct for atmospheric phase delays from global atmospheric models and for digital elevation model errors. A series of filtering steps is applied to improve the signal-to-noise ratio across the high ranges of the Tibetan plateau and the phase unwrapping capability across the fault, required for a reliable estimate of fault movement. We then jointly invert our InSAR time-series together with published GPS displacements to test a proposed long-term slip-partitioning model between the Haiyuan and Gulang left-lateral faults and the Qilian Shan thrusts. We explore the geometry of the fault system at depth and the associated slip rates using a Bayesian approach and test the consistency of present-day geodetic surface displacements with a long-term tectonic model. We determine a uniform convergence rate of 10 [8.6-11.5] mm/yr oriented N89 [81-97]°E across the whole fault system, with variable partitioning west and east of a major extensional fault jog (the Tianzhu pull-apart basin). Our 2-D model of two profiles perpendicular to the fault system gives a quantitative understanding of how crustal deformation is accommodated by the various branches of this thrust/strike-slip fault system and demonstrates how the geometry of the Haiyuan fault system controls the partitioning of the deep secular motion.
NASA Astrophysics Data System (ADS)
Chheda, T. D.; Nevitt, J. M.; Pollard, D. D.
2014-12-01
The formation of monoclinal right-lateral kink bands in the Lake Edison granodiorite (central Sierra Nevada, CA) is investigated through field observations and mechanics-based numerical modeling. Vertical faults act as weak surfaces within the granodiorite, and vertical granodiorite slabs bounded by closely spaced faults curve into a kink. Leucocratic dikes are observed in association with kinking. Measurements were made on maps of the Hilgard, Waterfall, Trail Fork, Kip Camp (Pollard and Segall, 1983b) and Bear Creek kink bands (Martel, 1998). Outcrop-scale geometric parameters such as fault length and spacing, kink angle, and dike width are used to construct a representative geometry for a finite element model. Three orders of faults were classified, with length = 1.8, 7.2 and 28.8 m, and spacing = 0.3, 1.2 and 3.6 m, respectively. The model faults are oriented at 25° to the direction of shortening (the horizontal most compressive stress), consistent with measurements of wing crack orientations in the field area. The model also includes a vertical leucocratic dike, oriented perpendicular to the faults and with material properties consistent with aplite. Curvature of the deformed faults across the kink band was used to compare the effects of material properties, strain, and fault and dike geometry. Model results indicate that the presence of the dike, which provides a mechanical heterogeneity, is critical to kinking in these rocks. Keeping the properties of the model granodiorite constant, curvature increased with decreasing yield strength and Young's modulus of the dike. Curvature increased significantly as the yield strength decreased from 95 to 90 MPa, and below this threshold value, limb rotation for the kink band was restricted to the dike. Changing Poisson's ratio had no significant effect. The addition of small faults between the bounding faults, decreasing fault spacing, or increasing dike width increases the curvature. Increasing friction along the faults decreases slip, so the shortening is accommodated by more kinking. Analysis of these parameters also gives us insight into the kilometer-scale kink band in the Mount Abbot Quadrangle, where the Rosy Finch Shear Zone may provide the mechanical heterogeneity necessary to cause kinking.
NASA Astrophysics Data System (ADS)
Bhattacharya, P.; Viesca, R. C.
2017-12-01
In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior; e.g., Guglielmi et al. report fluid-activated aseismic slip followed only subsequently by the onset of micro-seismicity. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; (2) the pore-pressure increase activates slip on a pre-stressed planar fault due to the reduction in frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed history of pore pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early-time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity). Therefore, besides producing meaningful and internally consistent bounds on in-situ fault properties like permeability, storage coefficient, resolved stresses, friction and the shear modulus, our results also show that fitting the complete observed time history of slip requires alternative model considerations, such as variations in fault mechanical properties or friction coefficient with slip.
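The adaptive Monte Carlo fit described above can be caricatured with a plain random-walk Metropolis sampler; everything below (the forward model, noise level, priors) is a hypothetical stand-in for the authors' coupled diffusion/crack forward model:

```python
import numpy as np

def metropolis(log_post, theta0, step, n_samples, rng):
    """Sample a log-posterior with a Gaussian random-walk proposal."""
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy forward model: exponential pressure rise, theta = (amplitude, timescale).
t = np.linspace(0.0, 10.0, 50)
def forward(theta):
    return theta[0] * (1.0 - np.exp(-t / theta[1]))

rng = np.random.default_rng(1)
data = forward(np.array([2.0, 3.0])) + 0.05 * rng.standard_normal(t.size)

def log_post(theta):
    if np.min(theta) <= 0.0:          # flat prior on positive parameters
        return -np.inf
    return -0.5 * np.sum((forward(theta) - data) ** 2) / 0.05 ** 2

chain = metropolis(log_post, (1.0, 1.0), 0.02, 8000, rng)
print(chain[4000:].mean(axis=0))      # posterior means, close to (2.0, 3.0)
```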
Fault Management Design Strategies
NASA Technical Reports Server (NTRS)
Day, John C.; Johnson, Stephen B.
2014-01-01
Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for the determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.
Ryan, H.F.; Parsons, T.; Sliter, R.W.
2008-01-01
A new fault map of the shelf offshore of San Francisco, California shows that faulting occurs as a distributed shear zone that involves many fault strands with the principal displacement taken up by the San Andreas fault and the eastern strand of the San Gregorio fault zone. Structures associated with the offshore faulting show compressive deformation near where the San Andreas fault goes offshore, but deformation becomes extensional several km to the north off of the Golden Gate. Our new fault map serves as the basis for a 3-D finite element model that shows that the block between the San Andreas and San Gregorio fault zone is subsiding at a long-term rate of about 0.2-0.3 mm/yr, with the maximum subsidence occurring northwest of the Golden Gate in the area of a mapped transtensional basin. Although the long-term rates of vertical displacement primarily show subsidence, the model of coseismic deformation associated with the 1906 San Francisco earthquake indicates that uplift on the order of 10-15 cm occurred in the block northeast of the San Andreas fault. Since 1906, 5-6 cm of regional subsidence has occurred in that block. One implication of our model is that the transfer of slip from the San Andreas fault to a fault 5 km to the east, the Golden Gate fault, is not required for the area offshore of San Francisco to be in extension. This has implications for both the deposition of thick Pliocene-Pleistocene sediments (the Merced Formation) observed east of the San Andreas fault, and the age of the Peninsula segment of the San Andreas fault.
A phased approach to induced seismicity risk management
White, Joshua A.; Foxall, William
2014-01-01
This work describes strategies for assessing and managing induced seismicity risk during each phase of a carbon storage project. We consider both nuisance and damage potential from induced earthquakes, as well as the indirect risk of enhancing fault leakage pathways. A phased approach to seismicity management is proposed, in which operations are continuously adapted based on available information and an on-going estimate of risk. At each project stage, specific recommendations are made for (a) monitoring and characterization, (b) modeling and analysis, and (c) site operations. The resulting methodology can help lower seismic risk while ensuring site operations remain practical and cost-effective.
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effects of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs to the viscoelastic cycle model. This simple formulation allows construction of the stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). The stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, times of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
Geomechanical Modeling for Improved CO2 Storage Security
NASA Astrophysics Data System (ADS)
Rutqvist, J.; Rinaldi, A. P.; Cappa, F.; Jeanne, P.; Mazzoldi, A.; Urpi, L.; Vilarrasa, V.; Guglielmi, Y.
2017-12-01
This presentation summarizes recent modeling studies on geomechanical aspects related to Geologic Carbon Sequestration (GCS), including modeling potential fault reactivation, seismicity, and CO2 leakage. The model simulations demonstrate that the potential for fault reactivation and the resulting seismic magnitude, as well as the potential for creating a leakage path through overburden sealing layers (caprock), depend on a number of parameters such as fault orientation, stress field, and rock properties. The model simulations further demonstrate that seismic events large enough to be felt by humans require brittle fault properties as well as continuous fault permeability allowing the pressure to be distributed over a large fault patch that is ruptured at once. Heterogeneous fault properties, which are commonly encountered in faults intersecting multilayered shale/sandstone sequences, effectively reduce the likelihood of inducing felt seismicity and also effectively impede upward CO2 leakage. Site-specific model simulations of the In Salah CO2 storage site showed that deep fractured zone responses and associated seismicity occurred in the brittle fractured sandstone reservoir, but at a very substantial reservoir overpressure close to the magnitude of the least principal stress. It is suggested that coupled geomechanical modeling be used to guide site selection, assist in identifying locations most prone to unwanted and damaging geomechanical changes, and evaluate the potential consequences of such changes. Geomechanical modeling can be used to better estimate the maximum sustainable injection rate or reservoir pressure and thereby provide for improved CO2 storage security. Whether damaging geomechanical changes could actually occur depends very much on the local stress field and local reservoir properties, such as the presence of ductile rock and faults (which can aseismically accommodate the stress and strain induced by the injection) or, on the contrary, the presence of more brittle faults that, if critically stressed for shear, might be more prone to induce felt seismicity.
Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model
NASA Astrophysics Data System (ADS)
Thomas, Marion Y.; Bhat, Harsha S.
2018-05-01
Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics-based constitutive model that accounts for dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault-normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage strongly impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.
How do horizontal, frictional discontinuities affect reverse fault-propagation folding?
NASA Astrophysics Data System (ADS)
Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio
2017-09-01
The development of new reverse faults and related folds is strongly controlled by the mechanical characteristics of the host rocks. In this study we analyze the impact of a specific kind of anisotropy, i.e. thin mechanical and frictional discontinuities, on the development of reverse faults and of the associated folds using physical scaled models. We perform analog modeling introducing one or two initially horizontal, thin discontinuities above an initially blind fault dipping at 30° in one case, and 45° in another, and then compare the results with those obtained from a fully isotropic model. The experimental results show that the occurrence of thin discontinuities affects both the development and propagation of new faults and the shape of the associated folds. New faults 1) accelerate or decelerate their propagation depending on the location of the tips with respect to the discontinuities, 2) cross the discontinuities at a characteristic angle (∼90°), and 3) produce folds with different shapes, resulting not only from the dip of the new faults but also from their non-linear propagation history. Our results may have a direct impact on future kinematic models, especially those aimed at reconstructing the tectonic history of faults that developed in layered rocks or in regions affected by pre-existing faults.
Damage Propagation Modeling for Aircraft Engine Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil
2008-01-01
This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermodynamic simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant, and the failure criterion is reached when the health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
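A minimal sketch of the degradation scheme described above, assuming invented margin sensitivities and rate bounds: an exponential flow/efficiency loss starts at a random deterioration set point, erodes two operational margins, and failure is declared when the health index (the minimum of the margins) reaches zero.

```python
import numpy as np

rng = np.random.default_rng(0)

cycles = np.arange(3000)
# Exponential flow/efficiency loss starting at a random deterioration set
# point, with a randomly chosen rate capped at an upper threshold, as in
# the abstract; the numeric bounds here are illustrative only.
t0 = rng.integers(100, 500)
rate = rng.uniform(0.001, 0.003)
loss = np.where(cycles < t0, 0.0, np.exp(rate * (cycles - t0)) - 1.0)

# Two hypothetical operational margins eroded by the fault; the health
# index is the minimum of the superimposed margins.
margin_flow = 1.0 - 0.8 * loss
margin_eff = 1.0 - 1.1 * loss
health_index = np.minimum(margin_flow, margin_eff)

# Failure criterion: first cycle at which the health index reaches zero.
failure_cycle = int(np.argmax(health_index <= 0.0))
print("failure criterion reached at cycle", failure_cycle)
```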
Development of an On-board Failure Diagnostics and Prognostics System for Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Luchinsky, Dmitry G.; Osipov, Vyatcheslav V.; Timucin, Dogan A.; Uckun, Serdar
2009-01-01
We develop a case breach model for the on-board fault diagnostics and prognostics system for subscale solid-rocket boosters (SRBs). The model development was motivated by recent ground firing tests, in which a deviation of measured time-traces from the predicted time-series was observed. A modified model takes into account the nozzle ablation, including the effect of roughness of the nozzle surface, the geometry of the fault, and erosion and burning of the walls of the hole in the metal case. The derived low-dimensional performance model (LDPM) of the fault can reproduce the observed time-series data very well. To verify the performance of the LDPM we build a FLUENT model of the case breach fault and demonstrate a good agreement between theoretical predictions based on the analytical solution of the model equations and the results of the FLUENT simulations. We then incorporate the derived LDPM into an inferential Bayesian framework and verify performance of the Bayesian algorithm for the diagnostics and prognostics of the case breach fault. It is shown that the obtained LDPM allows one to track parameters of the SRB during the flight in real time, to diagnose the case breach fault, and to predict its future development. The application of the method to fault diagnostics and prognostics (FD&P) of other SRB fault modes is discussed.
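The Bayesian step can be illustrated with a toy grid-based update. The forward model below is a stand-in for the LDPM (a made-up linear pressure-deficit law, not the authors' case breach physics), but the mechanics of scoring each candidate fault parameter against noisy time-series data are the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: chamber-pressure deficit grows with the
# breach (hole) area. This is an invented stand-in for the LDPM.
def pressure(t, hole_radius):
    return 10.0 - 50.0 * hole_radius**2 * t   # MPa, illustrative only

t = np.linspace(0.0, 20.0, 40)
true_r = 0.03
sigma = 0.05
data = pressure(t, true_r) + rng.normal(0.0, sigma, t.size)

# Grid-based Bayesian update of the fault parameter (hole radius),
# starting from a flat prior over a plausible range.
r_grid = np.linspace(0.0, 0.1, 500)
log_post = np.array([-0.5 * np.sum(((data - pressure(t, r)) / sigma) ** 2)
                     for r in r_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean radius:", np.sum(r_grid * post))
```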
Modeling of a latent fault detector in a digital system
NASA Technical Reports Server (NTRS)
Nagel, P. M.
1978-01-01
Methods of modeling the detection time or latency period of a hardware fault in a digital system are proposed that explain how a computer detects faults in a computational mode. The objectives were to study how software reacts to a fault, to account for as many variables as possible affecting detection, and to forecast a given program's detecting ability prior to computation. A series of experiments were conducted on a small emulated microprocessor with fault injection capability. Results indicate that the detecting capability of a program largely depends on the instruction subset used during computation and the frequency of its use, and has little direct dependence on such variables as fault mode, number set, degree of branching, and program length. A model is discussed which employs an analog with balls in an urn to explain the rate at which subsequent repetitions of an instruction or instruction set detect a given fault.
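If each repetition is read as a draw with replacement from the urn, the urn analog reduces to a geometric detection law. The sketch below checks this with invented urn sizes; comparing the simulated latency distribution with 1 - (1 - p)^k mirrors the role the model plays in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Urn analog (our reading of the abstract): each execution of an
# instruction is a draw from an urn; "detecting" balls correspond to
# instructions whose result visibly exposes the fault. Sizes are made up.
n_balls, detecting = 200, 12
p_detect = detecting / n_balls

trials, reps = 100_000, 50
draws = rng.random((trials, reps)) < p_detect    # draws with replacement
latency = np.argmax(draws, axis=1) + 1           # first detecting draw
latency[~draws.any(axis=1)] = reps + 1           # censored: never detected

# Empirical detection curve vs. the geometric-law prediction.
k = np.arange(1, reps + 1)
empirical = np.array([(latency <= kk).mean() for kk in k])
predicted = 1.0 - (1.0 - p_detect) ** k
print("max deviation:", np.max(np.abs(empirical - predicted)))
```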
Graph-based real-time fault diagnostics
NASA Technical Reports Server (NTRS)
Padalkar, S.; Karsai, G.; Sztipanovits, J.
1988-01-01
A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
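A small sketch of diagnosis on a time-weighted fault propagation digraph, under the simplifying assumption that each edge carries a [min, max] propagation interval and that a candidate source is ruled out when an observed alarm falls outside the window reachable from it. The graph, intervals, and alarm times are hypothetical, not the paper's algorithms.

```python
# Toy fault propagation digraph: edges carry [min, max] propagation-time
# intervals between failure modes. All names and numbers are invented.
graph = {
    "pump_fail":   [("low_flow", (1, 3)), ("high_temp", (4, 8))],
    "valve_stuck": [("low_flow", (0, 1))],
    "low_flow":    [("high_temp", (2, 5))],
    "high_temp":   [],
}

def reachable_windows(source):
    """Time window in which each node reachable from `source` may alarm."""
    windows, stack = {source: (0, 0)}, [source]
    while stack:
        node = stack.pop()
        lo, hi = windows[node]
        for nxt, (dmin, dmax) in graph[node]:
            cand, old = (lo + dmin, hi + dmax), windows.get(nxt)
            merged = cand if old is None else (min(old[0], cand[0]),
                                               max(old[1], cand[1]))
            if merged != old:          # widen the window and re-propagate
                windows[nxt] = merged
                stack.append(nxt)
    return windows

# Observed alarms (time units after the first symptom).
alarms = {"low_flow": 2, "high_temp": 6}
for source in graph:
    w = reachable_windows(source)
    ok = all(a in w and w[a][0] <= t <= w[a][1] for a, t in alarms.items())
    print(f"{source:12s}", "consistent" if ok else "ruled out")
```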
The emergence of asymmetric normal fault systems under symmetric boundary conditions
NASA Astrophysics Data System (ADS)
Schöpfer, Martin P. J.; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Nicol, Andrew; Grasemann, Bernhard
2017-11-01
Many normal fault systems and, on a smaller scale, fracture boudinage often exhibit asymmetry, with one fault dip direction dominating. It is a common belief that the formation of domino and shear band boudinage with a monoclinic symmetry requires a component of layer-parallel shearing. Moreover, domains of parallel faults are frequently used to infer the presence of a décollement. Using Distinct Element Method (DEM) modelling we show that asymmetric fault systems can emerge under symmetric boundary conditions. A statistical analysis of DEM models suggests that the fault dip directions and system polarities can be explained by a random process if the strength contrast between the brittle layer and the surrounding material is high. The models indicate that domino and shear band boudinage are unreliable shear-sense indicators. Moreover, the presence of a décollement should not be inferred on the basis of a domain of parallel faults alone.
NASA Astrophysics Data System (ADS)
Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien
2017-10-01
Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological, and geodetic information to invert for earthquake rates. In many regions of the world, seismological and geodetic information along fault networks is not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single-fault or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological, and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in understanding the geological slip rates of the complex network of normal faults which accommodate the ∼15 mm/yr north-south extension. Modeling results show that geological, seismological, and paleoseismological rates of earthquakes cannot be reconciled with only single-fault-rupture scenarios and require hypothesizing a large spectrum of possible FtF rupture sets. In order to fit the imposed regional Gutenberg-Richter (GR) MFD target, some of the slip along certain faults needs to be accommodated either with interseismic creep or as post-seismic processes. Furthermore, computed individual faults' MFDs differ depending on the position of each fault in the system and the possible FtF ruptures associated with the fault. Finally, a comparison of modeled earthquake rupture rates with those deduced from the regional and local earthquake catalog statistics and local paleoseismological data indicates a better fit with the FtF rupture set constructed with a distance criterion of 5 km rather than 3 km, suggesting a high connectivity of faults in the WCR fault system.
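The distance criterion for admitting fault-to-fault ruptures is easy to prototype. The sketch below uses three made-up fault traces and a vertex-to-vertex distance test (a simplification of a true trace-to-trace distance) to show how the admissible rupture sets grow when the criterion is relaxed from 3 km to 5 km, echoing the comparison in the abstract.

```python
import itertools
import numpy as np

# Hypothetical fault traces given as vertex coordinates in km.
faults = {
    "F1": np.array([[0.0, 0.0], [10.0, 0.0]]),
    "F2": np.array([[12.0, 1.0], [26.0, 1.0]]),
    "F3": np.array([[30.0, 0.0], [40.0, 0.0]]),
}

def min_distance(a, b):
    """Minimum vertex-to-vertex distance between two traces (simplified)."""
    return min(np.hypot(*(p - q)) for p in a for q in b)

def ftf_rupture_sets(dmax):
    """All fault combinations admissible under the distance criterion:
    every member fault lies within dmax of at least one other member."""
    sets = []
    for r in range(1, len(faults) + 1):
        for combo in itertools.combinations(faults, r):
            ok = all(r == 1 or any(min_distance(faults[f], faults[g]) <= dmax
                                   for g in combo if g != f)
                     for f in combo)
            if ok:
                sets.append(combo)
    return sets

print("3 km criterion:", ftf_rupture_sets(3.0))
print("5 km criterion:", ftf_rupture_sets(5.0))   # admits more FtF sets
```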
Fault detection of Tennessee Eastman process based on topological features and SVM
NASA Astrophysics Data System (ADS)
Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen
2018-03-01
Fault detection in industrial processes is a popular research topic. Although the distributed control system (DCS) has been introduced to monitor the state of industrial processes, it still cannot satisfy all the requirements for fault detection of all industrial systems. In this paper, we propose a novel method, based on topological features and a support vector machine (SVM), for fault detection in industrial processes. The proposed method takes global information of measured variables into account through a complex network model and uses an SVM to predict whether a system has generated a fault or not. The proposed method can be divided into four steps: network construction, network analysis, model training, and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that this method works well and can be a useful supplement for fault detection in industrial processes.
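A rough sketch of the four-step pipeline on surrogate data, since the abstract gives no implementation details: windows of multivariate measurements become correlation networks, two simple topological features (mean degree and edge density; the paper's feature set is presumably richer) feed an SVM, and faulty windows are distinguished by the extra correlation a shared disturbance induces.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def topological_features(window, threshold=0.6):
    """Build a correlation network over the measured variables and return
    simple topological features: mean degree and edge density."""
    corr = np.corrcoef(window.T)
    adj = (np.abs(corr) > threshold) & ~np.eye(corr.shape[0], dtype=bool)
    n = corr.shape[0]
    return [adj.sum(axis=1).mean(), adj.sum() / (n * (n - 1))]

def make_window(faulty):
    """Surrogate stand-in for a TEP run: faulty windows carry a shared
    disturbance that raises inter-variable correlation."""
    base = rng.normal(size=(200, 12))
    if faulty:
        base += 0.8 * rng.normal(size=(200, 1))
    return base

labels = np.array([0, 1] * 150)
X = np.array([topological_features(make_window(f)) for f in labels])
Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```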
Kinematics of shallow backthrusts in the Seattle fault zone, Washington State
Pratt, Thomas L.; Troost, K.G.; Odum, Jackson K.; Stephenson, William J.
2015-01-01
Near-surface thrust fault splays and antithetic backthrusts at the tips of major thrust fault systems can distribute slip across multiple shallow fault strands, complicating earthquake hazard analyses based on studies of surface faulting. The shallow expression of the fault strands forming the Seattle fault zone of Washington State shows the structural relationships and interactions between such fault strands. Paleoseismic studies document an ∼7000 yr history of earthquakes on multiple faults within the Seattle fault zone, with some backthrusts inferred to rupture in small (M ∼5.5–6.0) earthquakes at times other than during earthquakes on the main thrust faults. We interpret seismic-reflection profiles to show three main thrust faults, one of which is a blind thrust fault directly beneath downtown Seattle, and four small backthrusts within the Seattle fault zone. We then model fault slip, constrained by shallow deformation, to show that the Seattle fault forms a fault propagation fold rather than the alternatively proposed roof thrust system. Fault slip modeling shows that back-thrust ruptures driven by moderate (M ∼6.5–6.7) earthquakes on the main thrust faults are consistent with the paleoseismic data. The results indicate that paleoseismic data from the back-thrust ruptures reveal the times of moderate earthquakes on the main fault system, rather than indicating smaller (M ∼5.5–6.0) earthquakes involving only the backthrusts. Estimates of cumulative shortening during known Seattle fault zone earthquakes support the inference that the Seattle fault has been the major seismic hazard in the northern Cascadia forearc in the late Holocene.
Fault Mechanics and Post-seismic Deformation at Bam, SE Iran
NASA Astrophysics Data System (ADS)
Wimpenny, S. E.; Copley, A.
2017-12-01
The extent to which aseismic deformation relaxes co-seismic stress changes on a fault zone is fundamental to assessing the future seismic hazard following any earthquake, and in understanding the mechanical behaviour of faults. We used models of stress-driven afterslip and visco-elastic relaxation, in conjunction with a dense time series of post-seismic InSAR measurements, to show that there has been minimal release of co-seismic stress changes through post-seismic deformation following the 2003 Mw 6.6 Bam earthquake. Our modelling indicates that the faults at Bam may remain predominantly locked, and that the co- plus inter-seismically accumulated elastic strain stored down-dip of the 2003 rupture patch may be released in a future Mw 6 earthquake. Modelling also suggests parts of the fault that experienced post-seismic creep between 2003-2009 overlapped with areas that also slipped co-seismically. Our observations and models also provide an opportunity to probe how aseismic fault slip leads to the growth of topography at Bam. We find that, for our modelled afterslip distribution to be consistent with forming the sharp step in the local topography at Bam over repeated earthquake cycles, and also to be consistent with the geodetic observations, requires either (1) far-field tectonic loading equivalent to a 2-10 MPa deviatoric stress acting across the fault system, which suggests it supports stresses 60-100 times less than classical views of static fault strength, or (2) that the fault surface has some form of mechanical anisotropy, potentially related to corrugations on the fault plane, that controls the sense of slip.
Statistical tests of simple earthquake cycle models
Devries, Phoebe M. R.; Evans, Eileen
2016-01-01
A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.0 × 10^19 Pa s and ηM ≳ 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
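The hypothesis-testing step can be reproduced in a few lines with scipy. The samples below are surrogates (normal draws standing in for observed and model-predicted quantities across 15 faults, values invented); the point is the mechanics of using the two-sample Kolmogorov-Smirnov test to reject a candidate model at α = 0.05.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)

# Surrogate stand-ins: "observed" quantities on 15 strike-slip faults vs.
# the same quantities predicted by one candidate earthquake cycle model.
observed = rng.normal(1.00, 0.15, 15)
predicted_by_model = rng.normal(1.25, 0.15, 15)

# Two-sample Kolmogorov-Smirnov test for similarity of distributions;
# the candidate model is rejected when p falls below alpha.
stat, p = ks_2samp(observed, predicted_by_model)
alpha = 0.05
print(f"KS statistic={stat:.2f}, p={p:.3f} ->",
      "reject model" if p < alpha else "cannot reject")
```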
An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image
NASA Astrophysics Data System (ADS)
Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan
2017-10-01
It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the methods to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then the texture feature, Hu moments feature, as well as a modified entropy feature are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults, and then make efficient energy-saving thermal management decisions such as job migration. For the larger feature space, principal component analysis is employed to reduce the feature dimensions and guarantee high processing speed without losing the fault feature information. Finally, different feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management: it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
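A condensed sketch of the feature-extraction and classification chain, using synthetic 32×32 "thermal regions" in place of real infrared imagery: Hu moments plus a histogram-entropy feature, PCA for dimension reduction, and an SVM classifier. All sizes, temperatures, and the hot-spot fault signature are invented; the real method works on manually segmented server regions.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(9)

def features(region):
    """Hu moments plus a histogram-entropy feature for one region."""
    hu = cv2.HuMoments(cv2.moments(region)).ravel()
    hist, _ = np.histogram(region, bins=32, density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))
    return np.concatenate([hu, [entropy]])

def region(faulty):
    """Synthetic thermal patch; faulty ones carry localized overheating."""
    img = rng.normal(30.0, 1.0, (32, 32)).astype(np.float32)
    if faulty:
        img[8:16, 8:16] += 25.0
    return img

labels = np.array([0, 1] * 100)
X = np.array([features(region(f)) for f in labels])
clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC())
clf.fit(X[:150], labels[:150])
print("test accuracy:", clf.score(X[150:], labels[150:]))
```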
Taking apart the Big Pine fault: Redefining a major structural feature in southern California
Onderdonk, N.W.; Minor, S.A.; Kellogg, K.S.
2005-01-01
New mapping along the Big Pine fault trend in southern California indicates that this structural alignment is actually three separate faults, which exhibit different geometries, slip histories, and senses of offset since Miocene time. The easternmost fault, along the north side of Lockwood Valley, exhibits left-lateral reverse Quaternary displacement but was a north dipping normal fault in late Oligocene to early Miocene time. The eastern Big Pine fault that bounds the southern edge of the Cuyama Badlands is a south dipping reverse fault that is continuous with the San Guillermo fault. The western segment of the Big Pine fault trend is a north dipping thrust fault continuous with the Pine Mountain fault and delineates the northern boundary of the rotated western Transverse Ranges terrane. This redefinition of the Big Pine fault differs greatly from the previous interpretation and significantly alters regional tectonic models and seismic risk estimates. The outcome of this study also demonstrates that basic geologic mapping is still needed to support the development of geologic models. Copyright 2005 by the American Geophysical Union.
Multi-Fault Rupture Scenarios in the Brawley Seismic Zone
NASA Astrophysics Data System (ADS)
Kyriakopoulos, C.; Oglesby, D. D.; Rockwell, T. K.; Meltzner, A. J.; Barall, M.
2017-12-01
Dynamic rupture complexity is strongly affected by both the geometric configuration of a network of faults and pre-stress conditions. Of the two, the geometric configuration is more likely to be anticipated prior to an event. An important factor in the unpredictability of the final rupture pattern of a group of faults is the time-dependent interaction between them. Dynamic rupture models provide a means to investigate these otherwise inscrutable processes. The Brawley Seismic Zone in Southern California is an area in which this approach might be important for inferring potential earthquake sizes and rupture patterns. Dynamic modeling can illuminate how the main faults in this area, the Southern San Andreas (SSAF) and Imperial faults, might interact with the intersecting cross faults, and how the cross faults may modulate rupture on the main faults. We perform 3D finite element modeling of potential earthquakes in this zone assuming an extended array of faults. Our results include a wide range of ruptures and fault behaviors depending on assumptions about nucleation location, geometric setup, pre-stress conditions, and locking depth. For example, in the majority of our models the cross faults do not strongly participate in the rupture process, giving the impression that they are not typically an aid or an obstacle to rupture propagation. However, in some cases, particularly when rupture proceeds slowly on the main faults, the cross faults indeed can participate with significant slip, and can even cause rupture termination on one of the main faults. Furthermore, in a complex network of faults we should not preclude the possibility of a large event nucleating on a smaller fault (e.g. a cross fault) and eventually promoting rupture on the main structure. Recent examples include the 2010 Mw 7.1 Darfield (New Zealand) and Mw 7.2 El Mayor-Cucapah (Mexico) earthquakes, where rupture started on a smaller adjacent segment and later cascaded into a larger event. For that reason, we are investigating scenarios of a moderate rupture on a cross fault, and determining conditions under which the rupture will propagate onto the adjacent SSAF. Our investigation will provide fundamental insights that may help us interpret faulting behaviors in other areas, such as the complex Mw 7.8 2016 Kaikoura (New Zealand) earthquake.
Intelligent fault management for the Space Station active thermal control system
NASA Technical Reports Server (NTRS)
Hill, Tim; Faltisco, Robert M.
1992-01-01
The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware-related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
NASA Technical Reports Server (NTRS)
Ferrell, Bob A.; Lewis, Mark E.; Perotti, Jose M.; Brown, Barbara L.; Oostdyk, Rebecca L.; Goetz, Jesse W.
2010-01-01
This paper's main purpose is to detail issues and lessons learned regarding designing, integrating, and implementing Fault Detection, Isolation, and Recovery (FDIR) for Constellation Exploration Program (CxP) Ground Operations at Kennedy Space Center (KSC). As part of the overall implementation of the National Aeronautics and Space Administration's (NASA's) CxP, FDIR is being implemented in three main components of the program (Ares, Orion, and Ground Operations/Processing). While not initially part of the design baseline for CxP Ground Operations, NASA felt that FDIR was important enough to develop that NASA's Exploration Systems Mission Directorate's (ESMD's) Exploration Technology Development Program (ETDP) initiated a task for it under their Integrated System Health Management (ISHM) research area. This task, referred to as the FDIR project, is a multi-year, multi-center effort. The primary purpose of the FDIR project is to develop a prototype and pathway upon which Fault Detection and Isolation (FDI) may be transitioned into the Ground Operations baseline. Currently, Qualtech Systems Inc. (QSI) Commercial Off The Shelf (COTS) software products Testability Engineering and Maintenance System (TEAMS) Designer and TEAMS RDS/RT are being utilized in the implementation of FDI within the FDIR project. The TEAMS Designer COTS software product is being utilized to model the system with Functional Fault Models (FFMs). A limited set of systems in Ground Operations is being modeled by the FDIR project, and the entire Ares launch vehicle is being modeled under the Functional Fault Analysis (FFA) project at Marshall Space Flight Center (MSFC). Integration of the Ares FFMs and the Ground Processing FFMs is being done under the FDIR project, also utilizing the TEAMS Designer COTS software product. One of the most significant challenges related to integration is to ensure that FFMs developed by different organizations can be integrated easily and without errors. Software Interface Control Documents (ICDs) for the FFMs and their usage will be addressed as the solution to this issue. In particular, the advantages and disadvantages of these ICDs across physically separate development groups will be delineated.
Deep resistivity structure of Yucca Flat, Nevada Test Site, Nevada
Asch, Theodore H.; Rodriguez, Brian D.; Sampson, Jay A.; Wallin, Erin L.; Williams, Jackie M.
2006-01-01
The Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) at their Nevada Site Office are addressing groundwater contamination resulting from historical underground nuclear testing through the Environmental Management program and, in particular, the Underground Test Area project. One issue of concern is the nature of the somewhat poorly constrained pre-Tertiary geology and its effects on groundwater flow in the area adjacent to a nuclear test. Groundwater modelers would like to know more about the hydrostratigraphy and geologic structure to support a hydrostratigraphic framework model that is under development for the Yucca Flat Corrective Action Unit (CAU). During 2003, the U.S. Geological Survey, supported by the DOE and NNSA-NSO, collected and processed data from 51 magnetotelluric (MT) and audio-magnetotelluric (AMT) stations at the Nevada Test Site in and near Yucca Flat to assist in characterizing the pre-Tertiary geology in that area. The primary purpose was to refine the character, thickness, and lateral extent of pre-Tertiary confining units. In particular, a major goal has been to define the upper clastic confining unit (late Devonian to Mississippian-age siliciclastic rocks assigned to the Eleana Formation and Chainman Shale) in the Yucca Flat area. The MT and AMT data have been released in separate USGS Open-File Reports. The Nevada Test Site magnetotelluric data interpretation presented in this report includes the results of detailed two-dimensional (2-D) resistivity modeling for each profile (including alternative interpretations) and gross inferences on the three-dimensional (3-D) character of the geology beneath each station. The character, thickness, and lateral extent of the Chainman Shale and Eleana Formation that comprise the Upper Clastic Confining Unit are generally well determined in the upper 5 km. Inferences can be made regarding the presence of the Lower Clastic Confining Unit at depths below 5 km. Large fault structures such as the CP Thrust fault, the Carpetbag fault, and the Yucca fault that cross Yucca Flat are also discernible, as are other smaller faults. The subsurface electrical resistivity distribution and inferred geologic structures determined by this investigation should help constrain the hydrostratigraphic framework model that is under development.
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Garcia, Julio; Poggi, Valerio; Chen, Yen-Shin; Pagani, Marco
2016-04-01
The Global Earthquake Model (GEM) has, since its inception in 2009, made many contributions to the practice of seismic hazard modeling in different regions of the globe. The OpenQuake-engine (hereafter referred to simply as OpenQuake), GEM's open-source software for calculation of earthquake hazard and risk, has found application in many countries, spanning a diversity of tectonic environments. GEM itself has produced a database of national and regional seismic hazard models, harmonizing into OpenQuake's own definition the varied seismogenic sources found therein. The characterization of active faults in probabilistic seismic hazard analysis (PSHA) is at the centre of this process, motivating many of the developments in OpenQuake and presenting hazard modellers with the challenge of reconciling seismological, geological and geodetic information for the different regions of the world. Faced with these challenges, and from the experience gained in the process of harmonizing existing models of seismic hazard, four critical issues are addressed. The first challenge GEM has faced in the development of software is how to define a representation of an active fault (both in terms of geometry and earthquake behaviour) that is sufficiently flexible to adapt to different tectonic conditions and levels of data completeness. By exploring the different fault typologies supported by OpenQuake we illustrate how seismic hazard calculations can, and do, take into account complexities such as geometrical irregularity of faults in the prediction of ground motion, highlighting some of the potential pitfalls and inconsistencies that can arise. This exploration leads to the second main challenge in active fault modeling: which elements of the fault source model impact most upon the hazard at a site, and when does this matter? Through a series of sensitivity studies we show how different configurations of fault geometry, and the corresponding characterisation of near-fault phenomena (including hanging wall and directivity effects) within modern ground motion prediction equations, can have an influence on the seismic hazard at a site. Yet we also illustrate the conditions under which these effects may be partially tempered when considering the full uncertainty in rupture behaviour within the fault system. The third challenge is the development of efficient means for representing both aleatory and epistemic uncertainties from active fault models in PSHA. In implementing state-of-the-art seismic hazard models into OpenQuake, such as those recently undertaken in California and Japan, new modeling techniques are needed that redefine how we treat interdependence of ruptures within the model (such as mutual exclusivity), and the propagation of uncertainties emerging from geology. Finally, we illustrate how OpenQuake, and GEM's additional toolkits for model preparation, can be applied to address long-standing issues in active fault modeling in PSHA. These include constraining the seismogenic coupling of a fault and the partitioning of seismic moment between the active fault surfaces and the surrounding seismogenic crust. We illustrate some of the possible roles that geodesy can play in the process, but highlight where this may introduce new uncertainties and potential biases into the seismic hazard process, and how these can be addressed.
Simpson, R.W.; Lienkaemper, J.J.; Galehouse, J.S.
2001-01-01
Variations in surface creep rate along the Hayward fault are modeled as changes in locking depth using 3D boundary elements. Model creep is driven by screw dislocations at 12 km depth under the Hayward and other regional faults. Inferred depth to locking varies along strike from 4-12 km. (12 km implies no locking.) Our models require locked patches under the central Hayward fault, consistent with a M6.8 earthquake in 1868, but the geometry and extent of locking under the north and south ends depend critically on assumptions regarding continuity and creep behavior of the fault at its ends. For the northern onshore part of the fault, our models contain 1.4-1.7 times more stored moment than the model of Bürgmann et al. [2000]; 45-57% of this stored moment resides in creeping areas. It is important for seismic hazard estimation to know how much of this moment is released coseismically or as aseismic afterslip.
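The deep screw-dislocation driving mechanism has a classical closed-form surface expression, the arctangent profile of Savage and Burford (1973), that makes the role of locking depth easy to see. The slip rate and depths below are illustrative, not the paper's boundary-element model values.

```python
import numpy as np

# Surface fault-parallel velocity driven by a screw dislocation slipping
# below a locking depth D in an elastic half-space (Savage & Burford,
# 1973): v(x) = (V / pi) * arctan(x / D). Parameters are hypothetical.
def surface_velocity(x_km, deep_slip_mm_yr, locking_depth_km):
    return (deep_slip_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

x = np.linspace(-30, 30, 7)          # distance from the fault trace (km)
for D in (4.0, 12.0):                # locked to 4 km vs. creeping throughout
    v = surface_velocity(x, 9.0, D)  # ~9 mm/yr deep slip rate (invented)
    print(f"D = {D:4.1f} km:", np.round(v, 2), "mm/yr")
```

Shallower locking concentrates the velocity gradient (and hence surface creep) near the trace, which is why the along-strike creep-rate variations can be inverted for locking depth.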
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taniguchi, Chisato; Ichimura, Aiko; Ohtani, Noboru, E-mail: ohtani.noboru@kwansei.ac.jp
The formation of basal plane stacking faults in heavily nitrogen-doped 4H-SiC crystals was theoretically investigated. A novel theoretical model based on the so-called quantum well action mechanism was proposed; the model considers several factors which were overlooked in a previously proposed model, and provides a detailed explanation of the annealing-induced formation of double layer Shockley-type stacking faults in heavily nitrogen-doped 4H-SiC crystals. We further revised the model to consider the carrier distribution in the depletion regions adjacent to the stacking fault and successfully explained the shrinkage of stacking faults during annealing at even higher temperatures. The model also succeeded in accounting for the aluminum co-doping effect in heavily nitrogen-doped 4H-SiC crystals, in that the stacking fault formation is suppressed when aluminum acceptors are co-doped in the crystals.
NASA Astrophysics Data System (ADS)
Bi, Haiyun; Zheng, Wenjun; Ge, Weipeng; Zhang, Peizhen; Zeng, Jiangyuan; Yu, Jingxing
2018-03-01
Reconstruction of the along-fault slip distribution provides an insight into the long-term rupture patterns of a fault, thereby enabling more accurate assessment of its future behavior. The increasing wealth of high-resolution topographic data, such as Light Detection and Ranging and photogrammetric digital elevation models, allows us to better constrain the slip distribution, thus greatly improving our understanding of fault behavior. The South Heli Shan Fault is a major active fault on the northeastern margin of the Tibetan Plateau. In this study, we built a 2 m resolution digital elevation model of the South Heli Shan Fault based on high-resolution GeoEye-1 stereo satellite imagery and then measured 302 vertical displacements along the fault, which increased the measurement density of previous field surveys by a factor of nearly 5. The cumulative displacements show an asymmetric distribution along the fault, comprising three major segments. An increasing trend from west to east indicates that the fault has likely propagated westward over its lifetime. The topographic relief of Heli Shan shows an asymmetry similar to the measured cumulative slip distribution, suggesting that the uplift of Heli Shan may result mainly from the long-term activity of the South Heli Shan Fault. Furthermore, the cumulative displacements divide into discrete clusters along the fault, indicating that the fault has ruptured in several large earthquakes. By constraining the slip-length distribution of each rupture, we found that the events do not support a characteristic recurrence model for the fault.
NASA Astrophysics Data System (ADS)
Gómez-Romeu, J.; Kusznir, N.; Manatschal, G.; Roberts, A.
2017-12-01
During the formation of magma-poor rifted margins, upper lithosphere thinning and stretching is achieved by extensional faulting; however, there is still debate and uncertainty about how faults evolve during rifting leading to breakup. Seismic data provide an image of the present-day structural and stratigraphic configuration, and thus initial fault geometry is unknown. To understand the geometric evolution of extensional faults at rifted margins it is extremely important to also consider the flexural response of the lithosphere produced by fault displacement, resulting in footwall uplift and hangingwall subsidence. We investigate how the flexural isostatic response to extensional faulting controls the structural development of rifted margins. To achieve our aim, we use a kinematic forward model (RIFTER) which incorporates the flexural isostatic response to extensional faulting, crustal thinning, lithosphere thermal loads, sedimentation, and erosion. Inputs for RIFTER are derived from seismic reflection interpretation, and its outputs are predictions of the structural and stratigraphic consequences of recursive sequential faulting and sedimentation. Using RIFTER we model the simultaneous tectonic development of the Iberia-Newfoundland conjugate rifted margins along the ISE01-SCREECH1 and TGS/LG12-SCREECH2 seismic lines. We quantitatively test and calibrate the model against observed target data restored to breakup time. Two quantitative methods are used to obtain this target data: (i) gravity anomaly inversion, which predicts Moho depth and continental lithosphere thinning, and (ii) reverse post-rift subsidence modelling, which gives water and Moho depths at breakup time. We show that extensional faulting occurs on steep (∼60°) normal faults in both proximal and distal parts of rifted margins. Extensional faults together with their flexural isostatic response produce not only sub-horizontal exhumed footwall surfaces (i.e. the rolling hinge model) and highly rotated (60° or more) pre- and syn-rift stratigraphy, but also extensional allochthons underlain by apparent horizontal detachments. These detachment faults were never active in this sub-horizontal geometry; they were only active as steep faults which were isostatically rotated to their present sub-horizontal position.
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip, and combined area of asperities versus moment magnitude. Finally, the simulated ground motions will be validated by comparison of simulated response spectra with recorded response spectra and with response spectra from ground motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.
NASA Astrophysics Data System (ADS)
Madden, E. H.; Pollard, D. D.
2009-12-01
Multi-fault, strike-slip earthquakes have proved difficult to incorporate into seismic hazard analyses due to the difficulty of determining the probability of these ruptures, despite collection of extensive data associated with such events. Modeling the mechanical behavior of these complex ruptures contributes to a better understanding of their occurrence by elucidating the relationship between surface and subsurface earthquake activity along transform faults. This insight is especially important for hazard mitigation, as multi-fault systems can produce earthquakes larger than those associated with any one fault involved. We present a linear elastic, quasi-static model of the southern portion of the 28 June 1992 Landers earthquake built in the boundary element software program Poly3D. This event did not rupture the extent of any one previously mapped fault, but trended 80 km N and NW across segments of five sub-parallel, N-S and NW-SE striking faults. At M7.3, the earthquake was larger than the potential earthquakes associated with the individual faults that ruptured. The model extends from the Johnson Valley Fault, across the Landers-Kickapoo Fault, to the Homestead Valley Fault, using data associated with a six-week time period following the mainshock. It honors the complex surface deformation associated with this earthquake, which was well exposed in the desert environment and mapped extensively in the field and from aerial photos in the days immediately following the earthquake. Thus, the model incorporates the non-linearity and segmentation of the main rupture traces, the irregularity of fault slip distributions, and the associated secondary structures such as strike-slip splays and thrust faults. Interferometric Synthetic Aperture Radar (InSAR) images of the Landers event provided the first satellite images of ground deformation caused by a single seismic event and provide constraints on off-fault surface displacement in this six-week period. Insight is gained by comparing the density, magnitudes and focal plane orientations of relocated aftershocks for this time frame with the magnitude and orientation of planes of maximum Coulomb shear stress around the fault planes at depth.
Reducing a Knowledge-Base Search Space When Data Are Missing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that, without heuristics, generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
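The quoted search-space size follows directly from treating each rule's firing as undetermined: 2^n subsets plus one degenerate case. The sketch below verifies the 16-rule figure and shows one toy pruning heuristic in the spirit of the software's goal (only rules whose antecedents touch missing data are enumerated); it is not the actual JPL algorithm, and the rule and variable names are invented.

```python
from itertools import product

# Worst-case scenario space: every subset of the n rules may or may not
# fire, plus the "no information" case, giving 1 + 2**n scenarios.
def scenario_count(n_rules):
    return 1 + 2 ** n_rules

assert scenario_count(16) == 65_537          # figure quoted in the abstract

# Toy pruning heuristic (hypothetical): only rules whose antecedents
# reference missing data are undetermined; all others need no branching.
rules = {"R1": ["temp"], "R2": ["pressure"], "R3": ["temp", "flow"]}
missing = {"flow"}
undetermined = [r for r, inputs in rules.items()
                if any(i in missing for i in inputs)]
scenarios = list(product([False, True], repeat=len(undetermined)))
print(len(scenarios), "scenarios instead of", 2 ** len(rules))
```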
NASA Astrophysics Data System (ADS)
Rundle, J. B.
2017-12-01
Earthquakes and financial markets share surprising similarities [1]. For example, the well-known VIX index, which by definition is the implied volatility of the Standard and Poors 500 index, behaves in a very similar quantitative fashion to time series for earthquake rates. Both display sudden increases at the time of an earthquake or an announcement of the US Federal Reserve Open Market Committee [2], and both decay as an inverse power of time. Both can be regarded as examples of first order phase transitions [1], and display fractal and scaling behavior associated with critical transitions, such as power-law magnitude-frequency relations in the tails of the distributions. Early quantitative investors such as Edward Thorp and John Kelly invented novel methods to mitigate or manage risk in games of chance such as blackjack, and in markets using hedging techniques that are still in widespread use today. The basic idea is the concept of proportional betting, where the gambler/investor bets a fraction of the bankroll whose size is determined by the "edge" or inside knowledge of the real (and changing) odds. For earthquake systems, the "edge" over nature can only exist in the form of a forecast (probability of a future earthquake); a nowcast (knowledge of the current state of an earthquake fault system); or a timecast (statistical estimate of the waiting time until the next major earthquake). In our terminology, a forecast is a model, while the nowcast and timecast are analysis methods using observed data only (no model). We also focus on defined geographic areas rather than on faults, thereby eliminating the need to consider specific fault data or fault interactions. Data used are online earthquake catalogs, generally since 1980. Forecasts are based on the Weibull (1952) probability law, and only a handful of parameters are needed. These methods allow the development of real-time hazard and risk estimation using cloud-based technologies, and permit the application of quantitative backtesting techniques. In addition, the similarities to the financial markets point us toward similar hedging strategies to mitigate and manage earthquake risk. [1] https://millervalue.com/?s=earthquakes [2] A.M. Person et al., Phys. Rev. E, 81, 066121, (2010)
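The Weibull forecasting idea can be made concrete as the conditional probability of rupture within the next interval, given the time already elapsed. The shape and scale values below are hypothetical, not fitted to any catalog.

```python
import numpy as np

# Conditional Weibull forecast: probability that the next large earthquake
# occurs within `dt` years given `t` years have already elapsed, computed
# from the Weibull survival function S(x) = exp(-(x/scale)**shape).
def conditional_weibull(t, dt, shape, scale):
    def survival(x):
        return np.exp(-((x / scale) ** shape))
    return 1.0 - survival(t + dt) / survival(t)

shape, scale = 1.6, 150.0        # invented; shape > 1 gives a rising hazard
for elapsed in (50.0, 100.0, 140.0):
    p = conditional_weibull(elapsed, 30.0, shape, scale)
    print(f"elapsed {elapsed:5.1f} yr -> 30-yr probability {p:.2f}")
```

With shape > 1 the hazard grows with elapsed time, mirroring the stress-accumulation intuition; shape = 1 would reduce to a memoryless Poisson forecast.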
Network Connectivity for Permanent, Transient, Independent, and Correlated Faults
NASA Technical Reports Server (NTRS)
White, Allan L.; Sicher, Courtney; Henry, Courtney
2012-01-01
This paper develops a method for the quantitative analysis of network connectivity in the presence of both permanent and transient faults. Even though transient noise is considered a common occurrence in networks, a survey of the literature reveals an emphasis on permanent faults. Transient faults introduce a time element into the analysis of network reliability. With permanent faults it is sufficient to consider the faults that have accumulated by the end of the operating period. With transient faults the arrival and recovery time must be included; the number and location of faults in the system is a dynamic variable. Transient faults also introduce system recovery into the analysis. The goal is the quantitative assessment of network connectivity in the presence of both permanent and transient faults. The approach is to construct a global model that includes all classes of faults: permanent, transient, independent, and correlated. A theorem is derived about this model that gives distributions for (1) the number of fault occurrences, (2) the type of fault occurrence, (3) the time of the fault occurrences, and (4) the location of the fault occurrence. These results are applied to compare and contrast the connectivity of different network architectures in the presence of permanent, transient, independent, and correlated faults. The examples below use a Monte Carlo simulation, but the theorem mentioned above could be used to guide fault injections in a laboratory.
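A Monte Carlo sketch in the spirit of the examples mentioned above, with an invented four-node topology and invented fault probabilities: permanent and transient link faults are sampled independently, transients are crudely modeled as active at the evaluation instant with some probability, and connectivity is checked by graph search. The paper's full model, with correlated faults and explicit arrival/recovery times, is richer than this.

```python
import numpy as np

rng = np.random.default_rng(11)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # hypothetical network
n_nodes, trials = 4, 20_000

def connected(up_edges):
    """Breadth-first check that all nodes are reachable from node 0."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for a, b in up_edges:
            v = b if a == u else a if b == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n_nodes

failures = 0
for _ in range(trials):
    # Permanent link fault: occurs during the period and stays failed.
    perm = rng.random(len(edges)) < 0.01
    # Transient link fault: arrives during the period (prob 0.05) and is
    # taken to be active at the evaluation instant with prob 0.3.
    trans = (rng.random(len(edges)) < 0.05) & (rng.random(len(edges)) < 0.3)
    up = [e for e, p, t in zip(edges, perm, trans) if not (p or t)]
    failures += not connected(up)
print("P(disconnect) ~", failures / trials)
```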
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.
Wang, Tao; Xie, Wenfang; Zhang, Youmin
2012-05-01
In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known. Different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first one, developed to accommodate uncertainties and faults without exact bound information. The stability of the overall control systems is proved by using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
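A minimal scalar sketch of an adaptive sliding mode controller of the general kind described, where the switching gain is adapted from |s| alone rather than from known uncertainty bounds. The plant, disturbance, and the 50% actuator-effectiveness loss are hypothetical stand-ins, not the paper's Boeing 747 longitudinal model, and the usual dead-zone refinement against gain drift is omitted.

import numpy as np

def simulate(T=20.0, dt=1e-3):
    t = np.arange(0.0, T, dt)
    x, k, gamma = 0.0, 0.0, 5.0      # state, adaptive gain, adaptation rate
    err = []
    for ti in t:
        xref, dxref = np.sin(ti), np.cos(ti)
        s = x - xref                 # sliding variable = tracking error
        k += gamma * abs(s) * dt     # adapt the switching gain from |s| alone
        u = dxref - 2.0 * s - k * np.tanh(s / 0.01)   # tanh smooths sign(s)
        rho = 0.5 if ti > 10.0 else 1.0               # actuator loses 50% effectiveness at t = 10 s
        d = 0.8 * np.sin(3.0 * ti)                    # unknown matched disturbance
        x += (-x + rho * u + d) * dt                  # toy plant: dx/dt = -x + rho*u + d
        err.append(s)
    return np.array(err)

err = simulate()
print("max |tracking error| over the last 5 s: %.4f" % np.abs(err[-5000:]).max())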
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting the GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with a peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture areas and average slips from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
Fault geometries in basement-induced wrench faulting under different initial stress states
NASA Astrophysics Data System (ADS)
Naylor, M. A.; Mandl, G.; Sijpesteijn, C. H. K.
Scaled sandbox experiments were used to generate models for the relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and the Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.
NASA Astrophysics Data System (ADS)
Elbanna, A. E.
2013-12-01
Numerous field and experimental observations suggest that fault surfaces are rough at multiple scales and tend to produce a wide range of branch sizes, from micro-branching to large-scale secondary faults. The development and evolution of fault roughness and branching is believed to play an important role in rupture dynamics and energy partitioning. Previous work by several groups has succeeded in determining conditions under which a main rupture may branch into a secondary fault. Recently, great progress has been made in investigating rupture propagation on rough faults with and without off-fault plasticity. Nonetheless, in most of these models the heterogeneity, whether the roughness profile or the secondary fault orientation, was built into the system from the beginning, and consequently the final outcome depends strongly on the initial conditions. Here we introduce an adaptive mesh technique for modeling mode-II crack propagation on slip-weakening frictional interfaces. We use a finite element framework with random mesh topology that adapts to crack dynamics through element splitting and sequential insertion of frictional interfaces dictated by the failure criterion. This allows the crack path to explore non-planar paths and develop the roughness profile that is most compatible with the dynamical constraints. It also enables crack branching at different scales. We quantify energy dissipation due to the roughening process and small-scale branching. We compare the results of our model to a reference case of propagation on a planar fault. We show that the small-scale processes of roughening and branching influence many characteristics of the rupture propagation, including the energy partitioning, rupture speed and peak slip rates. We also estimate the fracture energy that a crack propagating on a planar fault would require to produce comparable results. We anticipate that this modeling approach provides an attractive methodology that complements current efforts in modeling off-fault plasticity and damage.
A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM
NASA Astrophysics Data System (ADS)
Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan
2018-03-01
In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO) optimized support vector machine (SVM) is proposed. The standard SVM is extended to a nonlinear, multi-class classifier, PSO is used to optimize the parameters of the SVM multi-classification model, and transformer fault diagnosis is carried out in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic-algorithm-optimized SVM, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
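A sketch of this kind of PSO-SVM tuning loop using scikit-learn and a hand-rolled PSO. The synthetic three-feature data stand in for the three DGA ratios and fault classes, and the swarm parameters are conventional textbook choices rather than the paper's.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# stand-in for (ratio1, ratio2, ratio3) -> fault-class training data
X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)

def fitness(p):
    C, gamma = 10 ** p[0], 10 ** p[1]              # search in log space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

rng = np.random.default_rng(0)
n, iters = 20, 30
lo, hi = np.array([-1.0, -3.0]), np.array([3.0, 1.0])   # log10 C, log10 gamma bounds
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("best CV accuracy %.3f at C=%.3g, gamma=%.3g"
      % (pbest_f.max(), 10 ** gbest[0], 10 ** gbest[1]))

Note that SVC handles the multi-class case internally via one-vs-one voting, which matches the "multi-classification SVM" role in the abstract.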
NASA Technical Reports Server (NTRS)
Ashworth, Barry R.
1989-01-01
A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.
NASA Astrophysics Data System (ADS)
La Femina, P.; Weber, J. C.; Geirsson, H.; Latchman, J. L.; Robertson, R. E. A.; Higgins, M.; Miller, K.; Churches, C.; Shaw, K.
2017-12-01
We studied active faults in Trinidad and Tobago in the Caribbean-South American (CA-SA) transform plate boundary zone using episodic GPS (eGPS) data from 19 sites and continuous GPS (cGPS) data from 8 sites, then by modeling these data using a series of simple screw dislocation models. Our best-fit model for interseismic (between major earthquakes) fault slip requires: 12-15 mm/yr of right-lateral movement and very shallow locking (0.2 ± 0.2 km; essentially creep) across the Central Range Fault (CRF); 3.4 +0.3/-0.2 mm/yr across the Soldado Fault in south Trinidad; and 3.5 +0.3/-0.2 mm/yr of dextral shear on fault(s) between Trinidad and Tobago. The upper-crustal faults in Trinidad show very little seismicity (1954-present, from the local network) and do not appear to have generated significant historic earthquakes. However, paleoseismic studies indicate that the CRF ruptured between 2710 and 500 yr B.P. and thus was recently capable of storing elastic strain. Together, these data suggest spatial and/or temporal fault segmentation on the CRF. The CRF marks a physical boundary between rocks associated with thermogenically generated petroleum and over-pressured fluids in south and central Trinidad and rocks containing only biogenic gas to the north, and a long string of active mud volcanoes aligns with the trace of the Soldado Fault along Trinidad's south coast. Fluid (oil and gas) overpressure, as an alternative or in addition to weak mineral phases in the fault zone, may thus cause the CRF fault creep and the lack of seismicity that we observe.
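The screw dislocation model used in such studies is commonly the Savage-Burford (1973) elastic profile v(x) = (V/π)·arctan(x/D). A sketch of fitting slip rate V and locking depth D to fault-perpendicular velocities follows, with synthetic data loosely echoing the shallow-locking CRF result above (the noise level and station spacing are invented).

import numpy as np
from scipy.optimize import curve_fit

def screw(x, V, D):
    # Savage & Burford (1973): fault-parallel velocity at distance x (km)
    # from a strike-slip fault slipping at rate V below locking depth D
    return (V / np.pi) * np.arctan(x / D)

rng = np.random.default_rng(1)
x = np.linspace(-60, 60, 27)                                # km from the fault trace
v_obs = screw(x, 14.0, 0.2) + rng.normal(0, 0.7, x.size)    # mm/yr; D ~ 0.2 km = near creep

(V, D), cov = curve_fit(screw, x, v_obs, p0=(10.0, 5.0),
                        bounds=([0.0, 0.01], [30.0, 30.0]))
print("slip rate %.1f +/- %.1f mm/yr, locking depth %.2f km"
      % (V, np.sqrt(cov[0, 0]), D))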
The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Tai, Ann T.
2000-01-01
The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.
NASA Astrophysics Data System (ADS)
Sun, Y.; Luo, G.
2017-12-01
Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and of the viscosity of the middle-lower crust and upper mantle on the model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in the region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.
Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C
2015-06-01
An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers to develop more refined and adequate control activities against FMD outbreaks and to find optimum strategies for rapid control. Further application of the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.
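A generic sketch of how such a fault tree can be evaluated once probabilities are attached to basic events (independence assumed): nested AND/OR gates, a top-event probability, and a crude importance ranking. The tree structure and numbers below are invented illustrations, not the paper's FMD tree.

def prob(node, p):
    # node: ("event", name) | ("and", [children]) | ("or", [children])
    kind = node[0]
    if kind == "event":
        return p[node[1]]
    child = [prob(c, p) for c in node[1]]
    if kind == "and":                 # fails only if all children fail
        out = 1.0
        for q in child:
            out *= q
        return out
    out = 1.0                         # OR gate: fails if any child fails
    for q in child:
        out *= (1.0 - q)
    return 1.0 - out

# hypothetical top event: "outbreak control ineffective"
tree = ("or", [
    ("and", [("event", "late_detection"), ("event", "slow_culling")]),
    ("event", "vaccine_unavailable"),
    ("and", [("event", "movement_ban_fails"), ("event", "tracing_fails")]),
])
p = {"late_detection": 0.2, "slow_culling": 0.3, "vaccine_unavailable": 0.05,
     "movement_ban_fails": 0.1, "tracing_fails": 0.25}
print("P(top event) = %.4f" % prob(tree, p))

# which basic event matters most? finite-difference importance
for e in p:
    q = dict(p, **{e: p[e] + 0.01})
    print(e, round((prob(tree, q) - prob(tree, p)) / 0.01, 3))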
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Sowers, T. Shane; Maul, William A.
2005-01-01
The constraints of future Exploration Missions will require unique Integrated System Health Management (ISHM) capabilities throughout the mission. An ambitious launch schedule, human-rating requirements, long quiescent periods, limited human access for repair or replacement, and long communication delays all require an ISHM system that can span distinct yet interdependent vehicle subsystems, anticipate failure states, provide autonomous remediation, and support the Exploration Mission from beginning to end. NASA Glenn Research Center has developed and applied health management system technologies to aerospace propulsion systems for almost two decades. Lessons learned from past activities help define the approach to proper ISHM development: sensor selection, which identifies the sensor sets required for accurate health assessment; data qualification and validation, which ensures the integrity of measurement data from sensor to data system; fault detection and isolation, which uses measurements in a component/subsystem context to detect faults and identify their point of origin; information fusion and diagnostic decision criteria, which aligns data from similar and disparate sources in time and uses that data to perform higher-level system diagnosis; and verification and validation, which uses data, real or simulated, to provide variable exposure to the diagnostic system for faults that may only manifest themselves in actual implementation, as well as faults that are detectable via hardware testing. This presentation describes a framework for developing health management systems and highlights the health management research activities performed by the Controls and Dynamics Branch at the NASA Glenn Research Center. It illustrates how those activities contribute to the development of solutions for Integrated System Health Management.
Observations, models, and mechanisms of failure of surface rocks surrounding planetary surface loads
NASA Technical Reports Server (NTRS)
Schultz, R. A.; Zuber, M. T.
1994-01-01
Geophysical models of flexural stresses in an elastic lithosphere due to an axisymmetric surface load typically predict a transition with increased distance from the center of the load of radial thrust faults to strike-slip faults to concentric normal faults. These model predictions are in conflict with the absence of annular zones of strike-slip faults around prominent loads such as lunar maria, Martian volcanoes, and the Martian Tharsis rise. We suggest that this paradox arises from difficulties in relating failure criteria for brittle rocks to the stress models. Indications that model stresses are inappropriate for use in fault-type prediction include (1) tensile principal stresses larger than realistic values of rock tensile strength, and/or (2) stress differences significantly larger than those allowed by rock-strength criteria. Predictions of surface faulting that are consistent with observations can be obtained instead by using tensile and shear failure criteria, along with calculated stress differences and trajectories, with model stress states not greatly in excess of the maximum allowed by rock fracture criteria.
Oceanic transform faults: how and why do they form? (Invited)
NASA Astrophysics Data System (ADS)
Gerya, T.
2013-12-01
Oceanic transform faults at mid-ocean ridges are often considered to be the direct product of the plate breakup process (cf. review by Gerya, 2012). In contrast, recent 3D thermomechanical numerical models suggest that transform faults are plate growth structures, which develop gradually on a timescale of a few million years (Gerya, 2010, 2013a,b). Four subsequent stages are predicted for the transition from rifting to spreading (Gerya, 2013b): (1) crustal rifting, (2) multiple spreading centers nucleation and propagation, (3) proto-transform faults initiation and rotation and (4) mature ridge-transform spreading. Geometry of the mature ridge-transform system is governed by geometrical requirements for simultaneous accretion and displacement of new plate material within two offset spreading centers connected by a sustaining rheologically weak transform fault. According to these requirements, the characteristic spreading-parallel orientation of oceanic transform faults is the only thermomechanically consistent steady state orientation. Comparison of modeling results with the Woodlark Basin suggests that the development of this incipient spreading region (Taylor et al., 2009) closely matches numerical predictions (Gerya, 2013b). The model reproduces well the characteristic 'rounded' contours of the spreading centers as well as the presence of a remnant of the broken continental crustal bridge observed in the Woodlark Basin. Similarly to the model, the Moresby (proto)transform terminates in the oceanic rather than in the continental crust. Transform margins and the truncated tip of one spreading center present in the model are also documented in nature. In addition, numerical experiments suggest that transform faults can develop gradually at mature linear mid-ocean ridges as the result of dynamical instability (Gerya, 2010). Boundary instability from asymmetric plate growth can spontaneously start in alternate directions along successive ridge sections; the resultant curved ridges become transform faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps. The ridge instability is governed by rheological weakening of active fault structures. The instability is most efficient for slow to intermediate spreading rates, whereas ultraslow and (ultra)fast spreading rates tend to destabilize transform faults (Gerya, 2010; Püthe and Gerya, 2013). References: Gerya, T. (2010) Dynamical instability produces transform faults at mid-ocean ridges. Science, 329, 1047-1050. Gerya, T. (2012) Origin and models of oceanic transform faults. Tectonophys., 522-523, 34-56. Gerya, T.V. (2013a) Three-dimensional thermomechanical modeling of oceanic spreading initiation and evolution. Phys. Earth Planet. Interiors, 214, 35-52. Gerya, T.V. (2013b) Initiation of transform faults at rifted continental margins: 3D petrological-thermomechanical modeling and comparison to the Woodlark Basin. Petrology, 21, 1-10. Püthe, C., Gerya, T.V. (2013) Dependence of mid-ocean ridge morphology on spreading rate in numerical 3-D models. Gondwana Res., DOI: http://dx.doi.org/10.1016/j.gr.2013.04.005. Taylor, B., Goodliffe, A., Martinez, F. (2009) Initiation of transform faults at rifted continental margins. Comptes Rendus Geosci., 341, 428-438.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, health monitoring of the major gas-path components of gas turbines has mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the clean-engine performance parameters, free of any faults, calculated by a baseline engine performance model. Expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used for engine fault diagnosis because of their good learning performance, but they suffer from low accuracy and long training times when the learning database is large, and they require a very complex structure to effectively identify single or multiple gas-path component faults. This work inversely builds a baseline performance model of a turboprop engine, intended for a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system that combines the baseline engine performance model with artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault-learning database generated from the baseline performance model. The NN is trained with the Feed Forward Back Propagation (FFBP) method. Finally, several test examples verify that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
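The GPA step underlying such systems is often linearized as Δz ≈ H·Δx, where Δx holds component health deltas and H is an influence-coefficient matrix; a least-squares sketch with an invented H (not any real engine's coefficients):

import numpy as np

# columns: health deltas x = [d_eff_compressor, d_flow_compressor, d_eff_turbine]
# rows: measurement deltas z = [dT_exit, dP_exit, d_speed, d_fuel_flow]
H = np.array([[-1.2,  0.4, -0.8],
              [ 0.9, -1.1,  0.3],
              [-0.2,  0.7, -0.5],
              [ 0.6, -0.3,  1.0]])        # hypothetical influence coefficients

x_true = np.array([-0.02, -0.01, 0.0])    # 2% efficiency and 1% flow loss in the compressor
z = H @ x_true + np.random.default_rng(2).normal(0, 0.002, 4)  # noisy measurement deltas

x_est, *_ = np.linalg.lstsq(H, z, rcond=None)
print("estimated health deltas:", np.round(x_est, 4))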
Jiang, Yu; Zhang, Xiaogang; Zhang, Chao; Li, Zhixiong; Sheng, Chenxing
2017-04-01
Numerical modeling has been recognized as an indispensable tool for mechanical fault mechanism analysis. Techniques, ranging from the macro to the nano level, include finite element modeling, boundary element modeling, modular dynamic modeling, nano dynamic modeling and so forth. This work first reviews progress on fault mechanism analysis for gear transmissions from the tribological and dynamic aspects. The literature review indicates that tribological and dynamic properties have been investigated separately to explore fault mechanisms in gear transmissions. However, very limited work has addressed the links between the tribological and dynamic properties, and little research has been done for coal cutting machines. For this reason, the tribo-dynamic coupled model is introduced to bridge the gap between the tribological and dynamic models in fault mechanism analysis for gear transmissions in coal cutting machines. The modular dynamic and nano dynamic modeling techniques are expected to establish the links between the tribological and dynamic models. Possible future research directions using the tribo-dynamic coupled model are summarized to provide references for researchers in the field.
Strain Accumulation and Release of the Gorkha, Nepal, Earthquake (Mw 7.8, 25 April 2015)
NASA Astrophysics Data System (ADS)
Morsut, Federico; Pivetta, Tommaso; Braitenberg, Carla; Poretti, Giorgio
2017-08-01
The near-fault GNSS records of strong ground movement are the most sensitive for defining the fault rupture. Here, two unpublished GNSS records are studied: a near-fault strong-motion station (NAGA) and a distant station in a poorly covered area (PYRA). The station NAGA, located above the Gorkha fault, sensed a southward displacement of almost 1.7 m. The PYRA station, positioned at a distance of about 150 km from the fault near the Pyramid station in the Everest region, showed static displacements on the order of some millimeters. The observed displacements were compared with the displacements calculated from a finite fault model in an elastic halfspace. We evaluated two fault slip models, derived from seismological and geodetic studies, respectively: the comparison of the observed and modelled fields reveals that our displacements accord better with the geodetically derived fault model than with the seismological one. Finally, we evaluate the yearly strain rate at four GNSS stations in the area that recorded the deformation field continuously for at least 5 years. The strain rate is then compared with the strain released by the Gorkha earthquake, leading to an interval of 235 years to store a comparable amount of elastic energy. The three near-fault GNSS stations require a slightly wider fault than published, in the case of an equivalent homogeneous rupture, with an average uniform slip of 3.5 m occurring on an area of 150 km × 60 km.
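The closing comparison is simple slip-budget arithmetic. A sketch that reproduces a figure near the quoted 235 years, under an assumed accumulation rate (the 15 mm/yr below is a guess chosen for illustration, not the paper's measured value):

# slip-budget arithmetic of the kind used in the abstract (numbers illustrative)
avg_coseismic_slip_m = 3.5           # average uniform slip from the abstract's rupture model
accumulation_m_per_yr = 0.015        # assumed interseismic slip-deficit rate (hypothetical)
recurrence_yr = avg_coseismic_slip_m / accumulation_m_per_yr
print("time to re-accumulate the released slip: %.0f years" % recurrence_yr)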
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Times Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.
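A toy version of gate-level stuck-at injection: exhaustively compare faulty against fault-free outputs of a small combinational netlist to measure how often each fault propagates to the output pins. The full-adder netlist is, of course, vastly simpler than the BDX-930 processor, and device-level (transistor) faults are not modeled here.

from itertools import product

# tiny netlist: a full adder; each gate maps name -> (op, input nets),
# listed in topological order so a single forward pass evaluates it
GATES = {
    "x1":   ("xor", ("a", "b")),
    "s":    ("xor", ("x1", "cin")),
    "a1":   ("and", ("a", "b")),
    "a2":   ("and", ("x1", "cin")),
    "cout": ("or",  ("a1", "a2")),
}
OUTPUTS = ("s", "cout")
OPS = {"xor": lambda p, q: p ^ q, "and": lambda p, q: p & q, "or": lambda p, q: p | q}

def evaluate(inputs, fault=None):
    # fault: (gate_name, stuck_value) forces that gate's output net
    vals = dict(inputs)
    for name, (op, srcs) in GATES.items():
        v = OPS[op](vals[srcs[0]], vals[srcs[1]])
        if fault and fault[0] == name:
            v = fault[1]
        vals[name] = v
    return tuple(vals[o] for o in OUTPUTS)

for net in GATES:
    for stuck in (0, 1):
        hits = 0
        for a, b, cin in product((0, 1), repeat=3):
            good = evaluate({"a": a, "b": b, "cin": cin})
            bad = evaluate({"a": a, "b": b, "cin": cin}, fault=(net, stuck))
            hits += good != bad
        print("stuck-at-%d on %s propagates in %d/8 input patterns" % (stuck, net, hits))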
Subsurface geometry and evolution of the Seattle fault zone and the Seattle Basin, Washington
ten Brink, Uri S.; Molzer, P.C.; Fisher, M.A.; Blakely, R.J.; Bucknam, R.C.; Parsons, T.; Crosson, R.S.; Creager, K.C.
2002-01-01
The Seattle fault, a large, seismically active, east-west-striking fault zone under Seattle, is the best-studied fault within the tectonically active Puget Lowland in western Washington, yet its subsurface geometry and evolution are not well constrained. We combine several analysis and modeling approaches to study the fault geometry and evolution, including depth-converted, deep-seismic-reflection images, P-wave-velocity field, gravity data, elastic modeling of shoreline uplift from a late Holocene earthquake, and kinematic fault restoration. We propose that the Seattle thrust or reverse fault is accompanied by a shallow, antithetic reverse fault that emerges south of the main fault. The wedge enclosed by the two faults is subject to an enhanced uplift, as indicated by the boxcar shape of the shoreline uplift from the last major earthquake on the fault zone. The Seattle Basin is interpreted as a flexural basin at the footwall of the Seattle fault zone. Basin stratigraphy and the regional tectonic history lead us to suggest that the Seattle fault zone initiated as a reverse fault during the middle Miocene, concurrently with changes in the regional stress field, to absorb some of the north-south shortening of the Cascadia forearc. Kingston Arch, 30 km north of the Seattle fault zone, is interpreted as a more recent disruption arising within the basin, probably due to the development of a blind reverse fault.
Strike-Slip Fault Patterns on Europa: Obliquity or Polar Wander?
NASA Technical Reports Server (NTRS)
Rhoden, Alyssa Rose; Hurford, Terry A.; Manga, Michael
2011-01-01
Variations in diurnal tidal stress due to Europa's eccentric orbit have been considered as the driver of strike-slip motion along pre-existing faults, but obliquity and physical libration have not been taken into account. The first objective of this work is to examine the effects of obliquity on the predicted global pattern of fault slip directions based on a tidal-tectonic formation model. Our second objective is to test the hypothesis that incorporating obliquity can reconcile theory and observations without requiring polar wander, which was previously invoked to explain the mismatch found between the slip directions of 192 faults on Europa and the global pattern predicted using the eccentricity-only model. We compute predictions for individual, observed faults at their current latitude, longitude, and azimuth with four different tidal models: eccentricity only, eccentricity plus obliquity, eccentricity plus physical libration, and a combination of all three effects. We then determine whether longitude migration, presumably due to non-synchronous rotation, is indicated in observed faults by repeating the comparisons with and without obliquity, this time also allowing longitude translation. We find that a tidal model including an obliquity of 1.2°, along with longitude migration, can predict the slip directions of all observed features in the survey. However, all but four faults can be fit with only 1° of obliquity, so the value we find may represent the maximum departure from a lower time-averaged obliquity value. Adding physical libration to the obliquity model improves the accuracy of predictions at the current locations of the faults, but fails to predict the slip directions of six faults and requires additional degrees of freedom. The obliquity model with longitude migration is therefore our preferred model. Although the polar wander interpretation cannot be ruled out from these results alone, the obliquity model accounts for all observations with a value consistent with theoretical expectations and cycloid modeling.
Chasing the Garlock: A study of tectonic response to vertical axis rotation
NASA Astrophysics Data System (ADS)
Guest, Bernard; Pavlis, Terry L.; Golding, Heather; Serpa, Laura
2003-06-01
Vertical-axis, clockwise block rotations in the Northeast Mojave block are well documented by numerous authors. However, the effects of these rotations on the crust to the north of the Northeast Mojave block have remained unexplored. In this paper we present a model that results from mapping and geochronology conducted in the north and central Owlshead Mountains. The model suggests that some or all of the transtension and rotation observed in the Owlshead Mountains results from tectonic response to a combination of clockwise block rotation in the Northeast Mojave block and Basin and Range extension. The Owlshead Mountains are effectively an accommodation zone that buffers differential extension between the Northeast Mojave block and the Basin and Range. In addition, our model explores the complex interactions that occur between faults and fault blocks at the junction of the Garlock, Brown Mountain, and Owl Lake faults. We hypothesize that the bending of the Garlock fault by rotation of the Northeast Mojave block resulted in a misorientation of the Garlock that forced the Owl Lake fault to break in order to accommodate slip on the western Garlock fault. Subsequent sinistral slip on the Owl Lake fault offset the Garlock, creating the now possibly inactive Mule Springs strand of the Garlock fault. Dextral slip on the Brown Mountain fault then locked the Owl Lake fault, forcing the active Leach Lake strand of the Garlock fault to break.
NASA Astrophysics Data System (ADS)
Nuñez, R. C.; Griffith, W. A.; Mitchell, T. M.; Marquardt, C.; Iturrieta, P. C.; Cembrano, J. M.
2017-12-01
Obliquely convergent subduction orogens show both margin-parallel and margin-oblique fault systems that are spatially and temporally associated with ore deposits and geothermal systems within the volcanic arc. Fault orientation and mechanical interaction among different fault systems influence the stress field in these arrangements, thus playing a first-order control on the regional- to local-scale fluid migration paths, as documented by the spatial distribution of fault-vein arrays. Our selected case study is a Miocene porphyry copper-type system that crops out in the precordillera of the Maule region along the Teno River Valley (ca. 35°S). Several regional to local faults were recognized in the field: (1) two first-order, N-striking subvertical dextral faults overlapping at a right stepover; (2) second-order, N60°E-striking, steeply dipping, dextral-normal faults located at the stepover; and (3) N40°-60°W-striking subvertical, sinistral faults crossing the stepover zone. The regional- and local-scale geology is characterized by volcano-sedimentary rocks (Upper Eocene-Lower Miocene), intruded by Miocene granodioritic plutons (U-Pb zircon age of 18.2 ± 0.11 Ma) and coeval dikes. We implement a 2D boundary element displacement discontinuity method (BEM) model to test the mechanical feasibility of a kinematic model of the structural development of the porphyry copper-type system in the stepover between the N-striking faults. The model yields the stress field within the stepover region and shows the slip and potential opening distribution along the N-striking master faults under a regionally imposed stress field. The model shows that σ1 rotates clockwise where the main faults approach each other, becoming EW where they overlap. This, in turn, leads to the generation of both NE- and NW-striking faults within the stepover area. Model results are consistent with the structural and kinematic data collected in the field, attesting to enhanced permeability and fluid flow transport and arrest spatially associated with the stepover.
NASA Astrophysics Data System (ADS)
Paya, B. A.; Esat, I. I.; Badi, M. N. M.
1997-09-01
The purpose of condition monitoring and fault diagnostics is to detect and distinguish faults occurring in machinery, in order to provide a significant improvement in plant economy, reduce operational and maintenance costs and improve the level of safety. The condition of a model drive-line, consisting of various interconnected rotating parts, including an actual vehicle gearbox, two bearing housings, and an electric motor, all connected via flexible couplings and loaded by a disc brake, was investigated. This model drive-line was run in its normal condition, and then single and multiple faults were introduced intentionally to the gearbox and to one of the bearing housings. These single and multiple faults studied on the drive-line were typical bearing and gear faults which may develop during normal and continuous operation of this kind of rotating machinery. This paper presents the investigation carried out to study both bearing and gear faults, introduced first separately as single faults and then together as multiple faults, to the drive-line. The real time-domain vibration signals obtained from the drive-line were preprocessed by wavelet transforms for the neural network to perform fault detection and identify the exact kinds of fault occurring in the model drive-line. It is shown that by using multilayer artificial neural networks on the sets of data preprocessed by wavelet transforms, single and multiple faults were successfully detected and classified into distinct groups.
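A sketch of this preprocessing/classification pipeline on synthetic vibration signals: wavelet decomposition into band energies (PyWavelets assumed available), then a small multilayer network separating normal, bearing-fault, gear-fault, and combined-fault classes. The signal models and network size are illustrative only, not the paper's drive-line data.

import numpy as np, pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
fs, n = 5120, 1024
t = np.arange(n) / fs

def make_signal(label):
    x = 0.3 * rng.normal(size=n) + np.sin(2 * np.pi * 50 * t)   # shaft tone + noise
    if label in (1, 3):    # bearing fault: periodic high-frequency impact bursts
        x += 3.0 * np.sin(2 * np.pi * 1800 * t) * (np.sin(2 * np.pi * 97 * t) > 0.95)
    if label in (2, 3):    # gear fault: mesh harmonic with modulation sidebands
        x += np.sin(2 * np.pi * 800 * t) * (1 + 0.8 * np.sin(2 * np.pi * 13 * t))
    return x

def features(x):
    coeffs = pywt.wavedec(x, "db4", level=5)          # multilevel DWT
    e = np.array([np.sum(c ** 2) for c in coeffs])    # energy per wavelet band
    return e / e.sum()

X, y = [], []
for label in (0, 1, 2, 3):      # 0 normal, 1 bearing, 2 gear, 3 both
    for _ in range(100):
        X.append(features(make_signal(label))); y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))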
On-line diagnosis of inter-turn short circuit fault for DC brushed motor.
Zhang, Jiayuan; Zhan, Wei; Ehsani, Mehrdad
2018-06-01
Extensive research effort has been made in fault diagnosis of motors and related components such as windings and ball bearings. In this paper, a new concept of the inter-turn short circuit fault for DC brushed motors is proposed that includes the short circuit ratio and short circuit resistance. A first-principle model is derived for motors with an inter-turn short circuit fault. A statistical model based on the Hidden Markov Model is developed for fault diagnosis purposes. This new method not only allows detection of a motor winding short circuit fault, it can also provide an estimate of the fault severity, as indicated by estimates of the short circuit ratio and the short circuit resistance. The estimated fault severity can be used for making appropriate decisions in response to the fault condition. The feasibility of the proposed methodology is studied for inter-turn short circuits of DC brushed motors using simulation in the MATLAB/Simulink environment. In addition, it is shown that the proposed methodology is reliable in the presence of small random noise in the system parameters and measurements. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
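A sketch of the likelihood-based use of HMMs for this kind of diagnosis, assuming the hmmlearn package is available: fit one Gaussian HMM per condition to current-signal sequences, then classify a new sequence by which model scores higher. The motor-current generator below is a crude stand-in, not the paper's first-principles model.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)

def motor_current(shorted_ratio, n=400):
    # crude stand-in: an inter-turn short raises mean current and adds ripple
    t = np.arange(n) / 100.0
    base = 2.0 / (1.0 - 0.8 * shorted_ratio)   # fewer effective turns -> more current
    ripple = 0.3 * shorted_ratio * np.sin(2 * np.pi * 7 * t)
    return (base + ripple + 0.05 * rng.normal(size=n)).reshape(-1, 1)

healthy = GaussianHMM(n_components=2, random_state=0).fit(
    np.vstack([motor_current(0.0) for _ in range(5)]), lengths=[400] * 5)
faulty = GaussianHMM(n_components=2, random_state=0).fit(
    np.vstack([motor_current(0.1) for _ in range(5)]), lengths=[400] * 5)

test = motor_current(0.1)
print("log L healthy %.1f, faulty %.1f -> %s" %
      (healthy.score(test), faulty.score(test),
       "fault" if faulty.score(test) > healthy.score(test) else "ok"))

Extending this from detection to severity estimation, as the abstract describes, could be sketched by training one model per severity level and reporting the best-scoring level.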
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite element input models; seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with estimates of its slip rate. By default in this model, all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves were produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which input parameters influence the final hazard results and to what extent. The results show that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters; both have a significant effect on the hazard results. Good knowledge of the existence of active faults and of their geometric and activity characteristics is therefore of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
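One standard way a GEO-type model converts a fault slip rate into an earthquake activity rate is moment balance: seismic moment accumulates at mu*A*(slip rate) and is released by characteristic events of moment M0(Mw). A sketch with hypothetical fault parameters (the specific values are placeholders, not from the paper's database):

mu = 3.0e10            # crustal rigidity, Pa
L, W = 40e3, 15e3      # fault length and seismogenic width, m
slip_rate = 1.0e-3     # 1 mm/yr in m/yr, as from an active-fault database

Mw = 6.5                              # assumed characteristic magnitude
M0 = 10 ** (1.5 * Mw + 9.1)           # scalar moment, N*m (Hanks & Kanamori)
moment_rate = mu * L * W * slip_rate  # N*m accumulated per year on the fault

rate = moment_rate / M0               # characteristic events per year
print("annual rate %.2e -> mean recurrence %.0f yr" % (rate, 1 / rate))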
NASA Astrophysics Data System (ADS)
Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.
2018-02-01
The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase earth fault protection devices in networks with an isolated neutral at voltages above 1000 V. Such a machine makes it possible to effectively implement mathematical models that automatically change the settings of single-phase earth fault protection devices.
Barall, Michael
2009-01-01
We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.
NASA Astrophysics Data System (ADS)
van Gent, Heijn W.; Holland, Marc; Urai, Janos L.; Loosveld, Ramon
2010-09-01
We present analogue models of the formation of dilatant normal faults and fractures in carbonate fault zones, using cohesive hemihydrate powder (CaSO4·½H2O). The evolution of these dilatant fault zones involves a range of processes such as fragmentation, gravity-driven breccia transport and the formation of dilatant jogs. To allow scaling to natural prototypes, extensive material characterisation was done. This showed that tensile strength and cohesion depend on the state of compaction, whereas the friction angle remains approximately constant. In our models, tensile strength of the hemihydrate increases with depth from 9 to 50 Pa, while cohesion increases from 40 to 250 Pa. We studied homogeneous and layered material sequences, using sand as a relatively weak layer and hemihydrate/graphite mixtures as a slightly stronger layer. Deformation was analyzed by time-lapse photography and Particle Image Velocimetry (PIV) to calculate the evolution of the displacement field. With PIV the initial, predominantly elastic deformation and progressive localization of deformation are observed in detail. We observed near-vertical opening-mode fractures near the surface. With increasing depth, dilational shear faults were dominant, with releasing jogs forming at fault-dip variations. A transition to non-dilatant shear faults was observed near the bottom of the model. In models with mechanical stratigraphy, fault zones are more complex. The inferred stress states and strengths in different parts of the model agree with the observed transitions in the mode of deformation.
Tectonic stressing in California modeled from GPS observations
Parsons, T.
2006-01-01
What happens in the crust as a result of geodetically observed secular motions? In this paper we find out by distorting a finite element model of California using GPS-derived displacements. A complex model was constructed using spatially varying crustal thickness, geothermal gradient, topography, and creeping faults. GPS velocity observations were interpolated and extrapolated across the model and boundary condition areas, and the model was loaded according to 5-year displacements. Results map highest differential stressing rates in a 200-km-wide band along the Pacific-North American plate boundary, coinciding with regions of greatest seismic energy release. Away from the plate boundary, GPS-derived crustal strain reduces modeled differential stress in some places, suggesting that some crustal motions are related to topographic collapse. Calculated stressing rates can be resolved onto fault planes: useful for addressing fault interactions and necessary for calculating earthquake advances or delays. As an example, I examine seismic quiescence on the Garlock fault despite a calculated minimum 0.1-0.4 MPa static stress increase from the 1857 M 7.8 Fort Tejon earthquake. Results from finite element modeling show very low to negative secular Coulomb stress growth on the Garlock fault, suggesting that the stress state may have been too low for large earthquake triggering. Thus the Garlock fault may only be stressed by San Andreas fault slip, a loading pattern that could explain its erratic rupture history.
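Resolving a stress-change tensor onto a receiver fault, as described, follows dCFS = d(tau) + mu_eff * d(sigma_n). A sketch using the Aki-Richards strike/dip/rake convention (x north, y east, z down; tension positive, so positive normal stress change means unclamping). The tensor and the Garlock-like plane below are illustrative values only.

import numpy as np

def dcfs(dS, strike, dip, rake, mu_eff=0.4):
    # dS: 3x3 stress-change tensor, Pa, coordinates x=north, y=east, z=down,
    # tension positive; returns the Coulomb stress change on the given plane/rake
    p, d, r = np.radians([strike, dip, rake])
    n = np.array([-np.sin(d) * np.sin(p), np.sin(d) * np.cos(p), -np.cos(d)])
    u = np.array([np.cos(r) * np.cos(p) + np.sin(r) * np.cos(d) * np.sin(p),
                  np.cos(r) * np.sin(p) - np.sin(r) * np.cos(d) * np.cos(p),
                  -np.sin(r) * np.sin(d)])
    t = dS @ n                 # traction change on the plane
    tau_slip = t @ u           # shear stress change along the slip direction
    sig_n = t @ n              # normal stress change (positive = unclamping)
    return tau_slip + mu_eff * sig_n

# hypothetical uniform stress change: 0.1 MPa of N-S compression
dS = np.diag([-1e5, 0.0, 0.0])
# vertical left-lateral plane with an ENE strike (Garlock-like, values illustrative)
print("dCFS = %.0f Pa" % dcfs(dS, strike=75, dip=90, rake=0))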
Diagnostic and Prognostic Models for Generator Step-Up Transformers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham
In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up (GSU) transformers. INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the 2014 August Utility Working Group Meeting, Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
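The IEEE loading-profile model referred to here follows IEEE Std C57.91, where the hot-spot temperature gives an aging acceleration factor relative to a 110 °C reference for 65 °C rise, thermally upgraded paper. A sketch with an invented daily temperature profile and one commonly cited reference insulation life:

import math

def aging_acceleration(theta_hs):
    # IEEE C57.91 aging acceleration factor, 110 C reference hot spot (383 K)
    return math.exp(15000 / 383.0 - 15000 / (theta_hs + 273.0))

# hypothetical 24 h hot-spot profile in C (in practice from a loading + thermal model)
profile = [95] * 8 + [105] * 8 + [118] * 6 + [110] * 2
Feqa = sum(aging_acceleration(th) for th in profile) / len(profile)  # equivalent aging factor
normal_insulation_life_h = 180000      # one commonly used reference value
consumed_per_day = Feqa * 24 / normal_insulation_life_h
print("equivalent aging factor %.2f, life consumed per day %.5f%%"
      % (Feqa, 100 * consumed_per_day))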
Dislocation model for aseismic fault slip in the transverse ranges of Southern California
NASA Technical Reports Server (NTRS)
Cheng, A.; Jackson, D. D.; Matsuura, M.
1985-01-01
Geodetic data at a plate boundary can reveal the pattern of subsurface displacements that accompany plate motion. These displacements are modelled as the sum of rigid block motion and the elastic effects of frictional interaction between blocks. The frictional interactions are represented by uniform dislocation on each of several rectangular fault patches. The block velocities and fault parameters are then estimated from geodetic data. A Bayesian inversion procedure employs prior estimates based on geological and seismological data. The method is applied to the Transverse Ranges, using prior geological and seismological data and geodetic data from the USGS trilateration networks. Geodetic data imply a displacement rate of about 20 mm/yr across the San Andreas Fault, while the geologic estimates exceed 30 mm/yr. The prior model and the final estimates both imply about 10 mm/yr of crustal shortening normal to the trend of the San Andreas Fault. Aseismic fault motion is a major contributor to plate motion. The geodetic data can help to identify faults that are suffering rapid stress accumulation; in the Transverse Ranges those faults are the San Andreas and the Santa Susana.
A formally verified algorithm for interactive consistency under a hybrid fault model
NASA Technical Reports Server (NTRS)
Lincoln, Patrick; Rushby, John
1993-01-01
Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
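A small checker for the paper's resilience condition, enumerating the fault mixes a given configuration tolerates. Under the classical worst-case-only bound, n = 7 with three rounds tolerates just two Byzantine faults; the hybrid bound admits many more mixes when some faults are merely symmetric or benign.

def tolerates(n, m, a, s, b):
    # Lincoln-Rushby resilience condition for the hybrid fault model,
    # using m+1 rounds: n > 2a + 2s + b + m and m >= a
    return n > 2 * a + 2 * s + b + m and m >= a

n, m = 7, 2
print("fault mixes (asymmetric, symmetric, benign) tolerated by n=%d, %d rounds:" % (n, m + 1))
for a in range(n):
    for s in range(n):
        for b in range(n):
            if a + s + b > 0 and tolerates(n, m, a, s, b):
                print("  a=%d s=%d b=%d" % (a, s, b))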
Nitsche Extended Finite Element Methods for Earthquake Simulation
NASA Astrophysics Data System (ADS)
Coon, Ethan T.
Modeling earthquakes and geologically short-time-scale events on fault networks is a difficult problem with important implications for human safety and design. These problems demonstrate rich physical behavior, in which distributed loading localizes both spatially and temporally into earthquakes on fault systems. This localization is governed by two aspects: friction and fault geometry. Computationally, these problems provide a stern challenge for modelers: static and dynamic equations must be solved on domains with discontinuities on complex fault systems, and frictional boundary conditions must be applied on these discontinuities. The most difficult aspect of modeling physics on complicated domains is the mesh. Most numerical methods involve meshing the geometry; nodes are placed on the discontinuities, and edges are chosen to coincide with faults. The resulting mesh is highly unstructured, making the derivation of finite difference discretizations difficult. Therefore, most models use the finite element method. Standard finite element methods place requirements on the mesh for the sake of stability, accuracy, and efficiency. The formation of a mesh which both conforms to fault geometry and satisfies these requirements is an open problem, especially for three-dimensional, physically realistic fault geometries. In addition, if the fault system evolves over the course of a dynamic simulation (i.e., in the case of growing cracks or breaking new faults), the geometry must be re-meshed at each time step. This can be expensive computationally. The fault-conforming approach is undesirable when complicated meshes are required, and impossible to implement when the geometry is evolving. Therefore, meshless and hybrid finite element methods that handle discontinuities without placing them on element boundaries are a desirable and natural way to discretize these problems. Several such methods are being actively developed for use in engineering mechanics involving crack propagation and material failure. While some theory and application of these methods exist, implementations for the simulation of networks of many cracks have not yet been considered. For my thesis, I implement and extend one such method, the eXtended Finite Element Method (XFEM), for use in static and dynamic models of fault networks. Once this machinery is developed, it is applied to open questions regarding the behavior of networks of faults, including questions of distributed deformation in fault systems and ensembles of magnitude, location, and frequency in repeat ruptures. The theory of XFEM is augmented to allow for the solution of problems with alternating regimes of static solves for elastic stress conditions and short, dynamic earthquakes on networks of faults. This is accomplished using Nitsche's approach for implementing boundary conditions. Finally, an optimization problem is developed to determine tractions along the fault, enabling the calculation of frictional constraints and the rupture front. This method is verified via a series of static, quasistatic, and dynamic problems. Armed with this technique, we look at several problems regarding geometry within the earthquake cycle in which geometry is crucial. We first look at quasistatic simulations on a community fault model of Southern California, and model slip distribution across that system. We find the distribution of deformation across faults compares reasonably well with slip rates across the region, as constrained by geologic data.
We find geometry can provide constraints for friction, and consider the minimization of shear strain across the zone as a function of friction and plate loading direction, and infer bounds on fault strength in the region. Then we consider the repeated rupture problem, modeling the full earthquake cycle over the course of many events on several fault geometries. In this work, we look at distributions of events, studying the effect of geometry on statistical metrics of event ensembles. Finally, this thesis is a proof of concept for the XFEM on earthquake cycle models on fault systems. We identify strengths and weaknesses of the method, and identify places for future improvement. We discuss the feasibility of the method's use in three dimensions, and find the method to be a strong candidate for future crustal deformation simulations.
NASA Technical Reports Server (NTRS)
Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven
1994-01-01
The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that occur during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap-of-the-Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Labs (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development, and contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, the architecture, the hardware design, the operating system design, system performance measurements, and analytical models.
Fault Tree Analysis as a Planning and Management Tool: A Case Study
ERIC Educational Resources Information Center
Witkin, Belle Ruth
1977-01-01
Fault Tree Analysis is an operations research technique used to analyze the most probable modes of failure in a system, in order to redesign or monitor the system more closely and thereby increase its likelihood of success. (Author)
NASA Technical Reports Server (NTRS)
Harper, Richard
1989-01-01
In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.
NASA Technical Reports Server (NTRS)
Lee, S. C.; Lollar, Louis F.
1988-01-01
The overall approach currently being taken in the development of AMPERES (Autonomously Managed Power System Extendable Real-time Expert System), a knowledge-based expert system for fault monitoring and diagnosis of space power systems, is discussed. The system architecture, knowledge representation, and fault monitoring and diagnosis strategy are examined. A 'component-centered' approach developed in this project is described. Critical issues requiring further study are identified.
NASA Astrophysics Data System (ADS)
Chiarabba, C.; Giacomuzzi, G.; Piana Agostinetti, N.
2017-12-01
The San Andreas Fault (SAF) near Parkfield is the best-known fault section that exhibits a clear transition in slip behavior from stable to unstable. Intensive monitoring and decades of study have permitted the identification of details of these processes, with a good definition of fault structure and subsurface models. Tomographic models computed so far revealed the existence of large velocity contrasts, yielding physical insight into fault rheology. In this study, we applied a recently developed fully non-linear tomography method to compute Vp and Vs models focused on the section of the fault that exhibits the slip transition. The new tomographic code allows us not to impose a vertical seismic discontinuity at the fault position, as is routinely done in linearized codes. Any lateral velocity contrast found is dictated directly by the data themselves and not imposed by subjective choices. The use of the same dataset as previous tomographic studies allows a proper comparison of results. We use a total of 861 earthquakes, 72 blasts, and 82 shots, and the overall arrival-time dataset consists of 43948 P- and 29158 S-wave arrival times, accurately selected to account for seismic anisotropy. The computed Vp and Vp/Vs models, which bypass the main problems related to linearized LET algorithms, excellently match independent available constraints and show crustal heterogeneities at high resolution. The high resolution obtained in the fault surroundings permits the inference of lateral changes of Vp and Vp/Vs across the fault (velocity gradients). We observe that the stable and unstable sliding sections of the SAF have different velocity gradients: small and negligible in the stable-slip segment, but larger than 15% in the unstable-slip segment. Our results suggest that Vp and Vp/Vs gradients across the fault control fault rheology and the style of fault slip behavior.
Tidal Fluctuations in a Deep Fault Extending Under the Santa Barbara Channel, California
NASA Astrophysics Data System (ADS)
Garven, G.; Stone, J.; Boles, J. R.
2013-12-01
Faults are known to strongly affect deep groundwater flow, and exert a profound control on petroleum accumulation, migration, and natural seafloor seepage from coastal reservoirs within the young sedimentary basins of southern California. In this paper we focus on major fault structure permeability and compressibility in the Santa Barbara Basin, where unique submarine and subsurface instrumentation provides hydraulic characterization of faults in a structurally complex system. Subsurface geologic logs, geophysical logs, fluid P-T-X data, seafloor seep discharge patterns, fault mineralization petrology, isotopic data, fluid inclusions, and structural models help characterize the hydrogeological nature of faults in this seismically active and young geologic terrain. Unique submarine gas flow data from a natural seep area of the Santa Barbara Channel help constrain fault permeability at k ~ 30 millidarcys for large-scale upward migration of methane-bearing formation fluids along one of the major fault zones. At another offshore site near Platform Holly, pressure-transducer time-series data from a 1.5 km deep exploration well in the South Ellwood Field demonstrate a strong ocean tidal component, due to vertical fault connectivity to the seafloor. Analytical models from classic hydrologic papers by Jacob, Ferris, Bredehoeft, van der Kamp, and Wang can be used to extract large-scale fault permeability and compressibility parameters, based on tidal signal amplitude attenuation and phase shift at depth. For the South Ellwood Fault, we estimate k ~ 38 millidarcys (hydraulic conductivity K ~ 3.6E-07 m/s) and a specific storage coefficient Ss ~ 5.5E-08 m^-1. The tidally derived hydraulic properties also suggest a low effective porosity for the fault zone, n ~ 1 to 3%. Forward modeling with 2-D finite element models illustrates significant lateral propagation of the tidal signal into the highly permeable Monterey Formation. The results have important practical implications for fault characterization, petroleum migration, structural diagenesis, and carbon sequestration.
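For one-dimensional diffusion along a homogeneous conduit, the tidal method cited above reduces to a simple closed form. The sketch below is our simplification, not the authors' analysis, and uses illustrative numbers rather than the paper's data (only the 1.5 km well depth and the Ss estimate come from the abstract):

```python
import numpy as np

def diffusivity_from_attenuation(depth_m, amp_ratio, period_s):
    """Hydraulic diffusivity D = K/Ss from tidal amplitude attenuation.

    For 1-D diffusion with a periodic head boundary, amplitude decays as
    exp(-z * sqrt(pi / (D * tau))), so D = pi * z**2 / (tau * ln(1/R)**2).
    """
    return np.pi * depth_m**2 / (period_s * np.log(1.0 / amp_ratio)**2)

# Illustrative numbers: a semidiurnal tide (tau ~ 12.42 h) attenuated to
# 30% of its surface amplitude at 1.5 km depth (the amp_ratio is invented).
tau = 12.42 * 3600.0
D = diffusivity_from_attenuation(1500.0, 0.30, tau)
Ss = 5.5e-8            # specific storage from the abstract, 1/m
K = D * Ss             # hydraulic conductivity, m/s
print(f"D = {D:.3g} m^2/s, K = {K:.3g} m/s")
```

The phase lag carries the same information (lag = z * sqrt(tau * Ss / (4 * pi * K))), which is why amplitude and phase at depth jointly constrain permeability and compressibility.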
Detection of CMOS bridging faults using minimal stuck-at fault test sets
NASA Technical Reports Server (NTRS)
Ijaz, Nabeel; Frenzel, James F.
1993-01-01
The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.
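The paper's switch-level primitives are not reproduced in the abstract, but a toy two-inverter circuit (our construction, not the paper's) shows why a minimal stuck-at test set can miss a wired-AND bridge:

```python
from itertools import product

def outputs(x1, x2, bridge=False):
    # Two observable inverters; a wired-AND bridge shorts nets x1 and x2.
    if bridge:
        x1 = x2 = x1 & x2
    return (1 - x1, 1 - x2)

# (0,0) and (1,1) detect all stuck-at faults on the inverter inputs and
# outputs, yet never set x1 != x2, so the wired-AND bridge goes undetected.
stuck_at_set = [(0, 0), (1, 1)]
for v in stuck_at_set:
    assert outputs(*v) == outputs(*v, bridge=True)

# Exhaustive search shows which vectors would expose the bridge.
detecting = [v for v in product((0, 1), repeat=2)
             if outputs(*v) != outputs(*v, bridge=True)]
print(detecting)   # [(0, 1), (1, 0)]
```

This is the gap the paper's functional bridging-fault models are designed to expose under switch-level simulation.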
NASA Technical Reports Server (NTRS)
Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.
2014-01-01
This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of abort triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of abort triggers.
Strike-slip faulting in the Inner California Borderlands, offshore Southern California.
NASA Astrophysics Data System (ADS)
Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.; Sahakian, V. J.; Holmes, J. J.; Klotsko, S.; Kell, A. M.; Wesnousky, S. G.
2015-12-01
In the Inner California Borderlands (ICB), offshore of Southern California, modern dextral strike-slip faulting overprints a prominent system of basins and ridges formed during plate boundary reorganization 30-15 Ma. Geodetic data indicate faults in the ICB accommodate 6-8 mm/yr of Pacific-North American plate boundary deformation; however, the hazard posed by the ICB faults is poorly understood due to unknown fault geometry and loosely constrained slip rates. We present observations from high-resolution and reprocessed legacy 2D multichannel seismic (MCS) reflection datasets and multibeam bathymetry to constrain the modern fault architecture and tectonic evolution of the ICB. We use a sequence stratigraphy approach to identify discrete episodes of deformation in the MCS data and present the results of our mapping in a regional fault model that distinguishes active faults from relict structures. Significant differences exist between our model of modern ICB deformation and existing models. From east to west, the major active faults are the Newport-Inglewood/Rose Canyon, Palos Verdes, San Diego Trough, and San Clemente fault zones. Localized deformation on the continental slope along the San Mateo, San Onofre, and Carlsbad trends results from geometrical complexities in the dextral fault system. Undeformed early to mid-Pleistocene age sediments onlap and overlie deformation associated with the northern Coronado Bank fault (CBF) and the breakaway zone of the purported Oceanside Blind Thrust. Therefore, we interpret the northern CBF to be inactive, and slip rate estimates based on linkage with the Holocene active Palos Verdes fault are unwarranted. In the western ICB, the San Diego Trough fault (SDTF) and San Clemente fault have robust linear geomorphic expression, which suggests that these faults may accommodate a significant portion of modern ICB slip in a westward temporal migration of slip. The SDTF offsets young sediments between the US/Mexico border and the eastern margin of Avalon Knoll, where the fault is spatially coincident and potentially linked with the San Pedro Basin fault (SPBF). Kinematic linkage between the SDTF and the SPBF increases the potential rupture length for earthquakes on either fault and may allow events nucleating on the SDTF to propagate much closer to the LA Basin.
Wastewater injection and slip triggering: Results from a 3D coupled reservoir/rate-and-state model
NASA Astrophysics Data System (ADS)
Babazadeh, M.; Olson, J. E.; Schultz, R.
2017-12-01
Seismicity induced by fluid injection is controlled by parameters related to injection conditions, reservoir properties, and fault frictional behavior. We present results from a combined model that brings together injection physics, reservoir dynamics, and fault physics to better explain the primary controls on induced seismicity. We created a 3D fluid flow simulator using the embedded discrete fracture technique and coupled it with a 3D displacement discontinuity model that uses rate-and-state friction to model slip events. The model is composed of three layers: the top seal, the injection reservoir, and the basement. Permeability is anisotropic (vertical vs. horizontal) and, along with porosity, varies by layer. Injection can be controlled by either rate or pressure. Fault properties include size, 2D permeability, and frictional properties. Several suites of simulations were run to evaluate the relative importance of each of the factors from all three parameter groups. We find that the injection parameters interact with the reservoir parameters in the context of the fault physics, and that these relations change for different reservoir and fault characteristics; the injection parameters therefore need to be examined within the context of a particular faulted reservoir. For a reservoir with no-flow boundaries, low permeability (5 md), and a fault with high fault-parallel permeability at critical stress, injection rate exerts the strongest control on the magnitude and frequency of earthquakes. However, for a higher permeability reservoir (80 md), injection volume becomes the more important factor. Fault permeability structure is a key factor in inducing earthquakes in basement rocks below the injection reservoir. The initial failure state of the fault, which is challenging to assess, can have a large effect on the size and timing of events. For a fault 2 MPa below the critical state, we were able to induce a slip event, but it occurred late in the injection history and was limited to a subset of the fault extent. A case starting at critical stress resulted in a rupture that propagated throughout the entire physical extent of the fault and generated a larger magnitude earthquake. This physics-based model can contribute to assessing the risk associated with injection activities and to providing guidelines for hazard mitigation.
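The abstract does not spell out its friction law; "rate-and-state friction" conventionally denotes the Dieterich-Ruina formulation, here with the aging law for state evolution:

$$\mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},$$

where V is slip velocity, theta the state variable, and D_c the characteristic slip distance. Fault patches with a - b < 0 are velocity-weakening and can nucleate the slip events described above; pore-pressure diffusion from injection enters through the effective normal stress.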
Discrepant Perspectives on Conflict Situations Among Urban Parent-Adolescent Dyads.
Parker, Elizabeth M; Lindstrom Johnson, Sarah R; Jones, Vanya C; Haynie, Denise L; Cheng, Tina L
2016-03-01
Parents influence urban youths' violence-related behaviors. To provide effective guidance, parents should understand how youth perceive conflict, yet little empirical research has been conducted regarding parent and youth perceptions of conflict. The aims of this article are to (a) report on the nature of discrepancies in attribution of fault, (b) present qualitative data about the varying rationales for fault attribution, and (c) use quantitative data to identify correlates of discrepancy including report of attitudes toward violence, parental communication, and parents' messages about retaliatory violence. Interviews were conducted with 101 parent/adolescent dyads. The study population consisted of African American female caretakers (n = 92; that is, mothers, grandmothers, aunts) and fathers (n = 9) and their early adolescents (mean age = 13.6). A total of 53 dyads were discrepant in identifying instigators in one or both videos. When discrepancy was present, the parent was more likely to identify the actor who reacted to the situation as at fault. In the logistic regression models, parental attitudes about retaliatory violence were a significant correlate of discrepancy, such that as parent attitudes supporting retaliatory violence increased, the odds of discrepancy decreased. The results suggest that parents and adolescents do not always view conflict situations similarly, which may inhibit effective parent-child communication, parental advice, and discipline. Individuals developing and implementing family-based violence prevention interventions need to be cognizant of the complexity of fault attribution and design strategies to promote conversations around attribution of fault and effective conflict management.
The 2014 update to the National Seismic Hazard Model in California
Powers, Peter; Field, Edward H.
2015-01-01
The 2014 update to the U.S. Geological Survey National Seismic Hazard Model in California introduces a new earthquake rate model and new ground motion models (GMMs) that give rise to numerous changes to seismic hazard throughout the state. The updated earthquake rate model is the third version of the Uniform California Earthquake Rupture Forecast (UCERF3), wherein the rates of all ruptures are determined via a self-consistent inverse methodology. This approach accommodates multifault ruptures and reduces the overprediction of moderate earthquake rates exhibited by the previous model (UCERF2). UCERF3 introduces new faults, changes to slip or moment rates on existing faults, and adaptively smoothed gridded seismicity source models, all of which contribute to significant changes in hazard. New GMMs increase ground motion near large strike-slip faults and reduce hazard over dip-slip faults. The addition of very large strike-slip ruptures and decreased reverse fault rupture rates in UCERF3 further enhances these effects.
Software-implemented fault insertion: An FTMP example
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1987-01-01
This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of the time of insertion and the system workload. For fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification, and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show that software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
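FTMP's actual injection mechanism is not detailed in the abstract; the essential idea, corrupting state through software and timing how long the detection mechanisms take to notice, can be sketched as follows (a toy parity checker stands in for FTMP's voters, and all names here are ours):

```python
import time

def inject_bit_flip(memory, addr, bit):
    """Emulate a hardware stuck/transient fault by corrupting state in software."""
    memory[addr] ^= (1 << bit)

def run_with_injection(memory, checker, addr, bit):
    """Inject a fault and measure the detection latency of the checker."""
    t_inject = time.monotonic()
    inject_bit_flip(memory, addr, bit)
    while not checker(memory):          # error-detection mechanism polls state
        time.sleep(0.001)
    return time.monotonic() - t_inject

# Toy checker: a parity snapshot over the memory image.
mem = [0xAB, 0xCD, 0xEF, 0x12]
parity = [bin(w).count("1") % 2 for w in mem]
checker = lambda m: any(bin(w).count("1") % 2 != p for w, p in zip(m, parity))
print(f"detected after {run_with_injection(mem, checker, 2, 5):.4f} s")
```

Note how the injected fault exercises the detector immediately, which is exactly the asymmetry with hardware-inserted faults that the report observes.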
A new model for the initiation, crustal architecture, and extinction of pull-apart basins
NASA Astrophysics Data System (ADS)
van Wijk, J.; Axen, G. J.; Abera, R.
2015-12-01
We present a new model for the origin, crustal architecture, and evolution of pull-apart basins. The model is based on the results of three-dimensional upper-crustal numerical models of deformation, field observations, and fault theory, and answers many of the outstanding questions related to these rifts. In our model, geometric differences between pull-apart basins are inherited from the initial geometry of the strike-slip fault step, which is set by the early geometry of the strike-slip fault system. As strike-slip motion accumulates, pull-apart basins remain stationary with respect to the underlying basement, and the fault tips may propagate beyond the rift basin. Our model predicts that the sediment source areas may thus migrate over time. This implies that, although pull-apart basins lengthen over time, lengthening is accommodated by extension within the pull-apart basin rather than by the formation of new faults outside of the rift zone. In this respect pull-apart basins behave as narrow rifts: with increasing strike-slip displacement the basins deepen, but there is no significant outward younging. We explain why pull-apart basins do not go through the previously proposed geometric evolutionary stages, a progression that has not been documented in nature. Field studies predict that pull-apart basins become extinct when an active basin-crossing fault forms; this is the most likely fate of pull-apart basins, because strike-slip systems tend to straighten. The model predicts which step dimensions favor the formation of such a fault system, and which allow a pull-apart basin to develop further into a short seafloor-spreading ridge. The model also shows that rift shoulder uplift is enhanced if the strike-slip rate is larger than the fault propagation rate; crustal compression then contributes to uplift of the rift flanks.
NASA Astrophysics Data System (ADS)
Elbanna, A. E.
2015-12-01
The brittle portion of the crust contains structural features such as faults, jogs, joints, bends, and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation, and arrest. Incorporating these existing features in models, along with the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the structure of the seismogenic zone. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) grain comminution at the sub-meter scale, and (2) secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane-strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate-dependent plasticity with a physically motivated set of internal variables; (2) non-locality that alleviates mesh dependence of shear band formation; (3) spontaneous evolution of fault roughness and strike, which affects ground motion generation and the local stress fields; and (4) spontaneous evolution of grain size and fault zone fabric.
NASA Astrophysics Data System (ADS)
Dixon, Timothy H.; Xie, Surui
2018-07-01
The Eastern California shear zone in the Mojave Desert, California, accommodates nearly a quarter of Pacific-North America plate motion. In the south-central Mojave, the shear zone consists of six active faults, with the central Calico fault having the fastest slip rate. However, faults to the east of the Calico fault have larger total offsets. We explain this pattern of slip rate and total offset with a model involving a crustal block (the Mojave Block) that migrates eastward relative to a shear zone at depth whose position and orientation are fixed by the Coachella segment of the San Andreas fault (SAF), southwest of the transpressive "big bend" in the SAF. Both the shear zone and the Garlock fault are assumed to be a direct result of this restraining bend and the consequent strain redistribution. The model explains several aspects of local and regional tectonics, may apply to other transpressive continental plate boundary zones, and may improve seismic hazard estimates in these zones.
A.P. Lamb,; L.M. Liberty,; Blakely, Richard J.; Pratt, Thomas L.; Sherrod, B.L.; Van Wijk, K.
2012-01-01
We present evidence that the Seattle fault zone of Washington State extends to the west edge of the Puget Lowland and is kinematically linked to active faults that border the Olympic Massif, including the Saddle Mountain deformation zone. Newly acquired high-resolution seismic reflection and marine magnetic data suggest that the Seattle fault zone extends west beyond the Seattle Basin to form a >100-km-long active fault zone. We provide evidence for a strain transfer zone, expressed as a broad set of faults and folds connecting the Seattle and Saddle Mountain deformation zones near Hood Canal. This connection provides an explanation for the apparent synchroneity of M7 earthquakes on the two fault systems ~1100 yr ago. We redefine the boundary of the Tacoma Basin to include the previously termed Dewatto basin and show that the Tacoma fault, the southern part of which is a backthrust of the Seattle fault zone, links with a previously unidentified fault along the western margin of the Seattle uplift. We model this north-south fault, termed the Dewatto fault, along the western margin of the Seattle uplift as a low-angle thrust that initiated with exhumation of the Olympic Massif and today accommodates north-directed motion. The Tacoma and Dewatto faults likely control both the southern and western boundaries of the Seattle uplift. The inferred strain transfer zone linking the Seattle fault zone and Saddle Mountain deformation zone defines the northern margin of the Tacoma Basin, and the Saddle Mountain deformation zone forms the northwestern boundary of the Tacoma Basin. Our observations and model suggest that the western portions of the Seattle fault zone and Tacoma fault are complex, require temporal variations in principal strain directions, and cannot be modeled as a simple thrust and/or backthrust system.
Fault Management Architectures and the Challenges of Providing Software Assurance
NASA Technical Reports Server (NTRS)
Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek
2015-01-01
Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most missions is system complexity due to a need to establish a multi-dimensional structure across hardware, software and spacecraft operations. FM is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. Generally, FM architecture, implementation, and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed this issue in terms of V&V for a representative set of architectures. NASA's Software Assurance Research Program (SARP) has provided funds to NASA IV&V to extend the work performed at the Workshop session in partnership with NASA's Jet Propulsion Laboratory (JPL). NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This SARP initiative focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures and associated V&V/IV&V techniques provides a data set that can enable improved assurance that a system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the space community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.
Optimal fault-tolerant control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2017-10-01
For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraints, high efficiency, low cost, and fault diagnosis are six key issues. However, no literature studies control techniques that combine optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers, and a control loop. The fault diagnosis part identifies the current SOFC fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers for the normal and air-compressor-fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller, and an air excess ratio controller. The simulation results show that the proposed optimal fault-tolerant control method can track the power, temperatures, and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost both in the normal SOFC state and even under the air compressor fault.
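The paper's plant model and tuning are not given in the abstract; a skeletal rendering of the strategy's structure (diagnose, switch to the matching offline-optimized setpoints, track with PID) might look like the following, where the gains, thresholds, and setpoint tables are placeholders, not the paper's values:

```python
class PID:
    """Textbook discrete PID; the gains are placeholders."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.acc, self.prev = 0.0, 0.0

    def step(self, setpoint, measured):
        e = setpoint - measured
        self.acc += e * self.dt
        d = (e - self.prev) / self.dt
        self.prev = e
        return self.kp * e + self.ki * self.acc + self.kd * d

# Backup optimizers: in the paper these come from NSGA-II + TOPSIS runs for
# the normal and air-compressor-fault states; constant tables stand in here.
SETPOINTS = {
    "normal":           {"power_kW": 5.0, "air_excess_ratio": 7.0},
    "compressor_fault": {"power_kW": 4.2, "air_excess_ratio": 5.5},
}

def diagnose(measurement):
    # Stub fault-diagnosis module: flag a compressor fault on low air flow.
    return "compressor_fault" if measurement["air_flow"] < 0.8 else "normal"

pid_power = PID(2.0, 0.5, 0.0, dt=0.1)
meas = {"air_flow": 0.6, "power_kW": 4.0}
mode = diagnose(meas)                                  # 1) diagnose
sp = SETPOINTS[mode]                                   # 2) switch optimizer
u = pid_power.step(sp["power_kW"], meas["power_kW"])   # 3) track with PID
print(mode, round(u, 3))
```

In the full strategy, the same switch feeds the temperature and air excess ratio loops, so a diagnosed fault changes the operating point of all four controllers at once.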
Stress field modelling from digital geological map data
NASA Astrophysics Data System (ADS)
Albert, Gáspár; Barancsuk, Ádám; Szentpéteri, Krisztián
2016-04-01
To create a model of the lithospheric stress, a functional geodatabase containing spatial and geodynamic parameters is required. A digital structural-geological map is such a geodatabase, and usually contains enough attributes to create a stress field model. Such a model is not accurate enough for engineering-geological purposes, because simplifications are always present in a map, but in many cases maps are the only sources for a tectonic analysis. The method presented here is designed for field geologists who are interested in seeing a possible realization of the stress field over the area on which they are working. This study presents an application which can produce a map of 3D stress vectors from a kml file. The core application logic is implemented on top of a spatially aware relational database management system. This allows rapid and geographically accurate analysis of the imported geological features, taking advantage of standardized spatial algorithms and indexing. After pre-processing the map features in a GIS according to the Type-Property-Orientation naming system, which was described in a previous study (Albert et al. 2014), the first stage of the algorithm generates an irregularly spaced point cloud by emitting a pattern of points within a user-defined buffer zone around each feature. For each point generated, a component-wise approximation of the tensor field at the point's position is computed, derived from the original feature's geodynamic properties. In a second stage, a weighted moving-average method calculates the stress vectors in a regular grid. Results can be exported as geospatial data for further analysis or cartographic visualization. Computation of the tensor field's components is based on the Mohr diagram of a compressional model, which uses a Coulomb fracture criterion. Under the general assumption that the main principal stress must be greater than the stress from the overburden, the differential stress is calculated from the fracture criterion. The calculation includes the gravitational acceleration, the average density of rocks, and an experimental fracture angle of 60 degrees from the normal of the fault plane. In this way, the stress tensors are calculated as absolute pressure values per square meter on both sides of the faults. If the stress from the overburden is greater than 1 bar (i.e. the faults are buried), a confined compression would be present. Modelling this state of stress may result in a confusing pattern of vectors, because in a confined position the horizontal stress vectors may point towards structures primarily associated with extension. To overcome this, and to highlight the variability in the stress field, the model calculates the vectors directly from the differential stress (practically, subtracting the minimum principal stress from the critical stress). The result of the modelling is a vector map which theoretically represents the minimum tectonic pressure at the moment the rock body breaks from an initial state. This map, together with the original fault map, is suitable for determining areas where unrevealed tectonic, sedimentary and lithological structures may be present (e.g. faults, sub-basins and intrusions). By modelling different deformational phases in the same area, changes in the stress vectors can be detected, revealing not only the varying directions of the principal stresses but also tectonically driven sedimentation patterns.
The decrease of the necessary critical stress in the case of a possible reactivation of a fault in a subsequent deformation phase can be managed by down-ranking the structural elements concerned. Reference: Albert G., Ungvári Zs., Szentpéteri K. 2014: Modeling the present day stress field of the Pannonian Basin from neotectonic maps. In: Beqiraj A., Ionescu C., Christofides G., Uta A., Beqiraj Goga E., Marku S. (eds.) Proceedings of the XX Congress of the Carpathian-Balkan Geological Association. Tirana, p. 2.
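A minimal sketch of the differential-stress step, under our reading of the abstract (the overburden acts as the minimum principal stress, and the 60 degree angle is measured between sigma1 and the fracture-plane normal); the friction coefficient and density defaults are ours:

```python
import math

def differential_stress_at_failure(depth_m, mu=0.6, cohesion_pa=0.0,
                                   rho=2700.0, g=9.81, alpha_deg=60.0):
    """Coulomb differential stress (sigma1 - sigma3) needed to break a plane.

    Assumptions (ours): sigma3 equals the overburden rho*g*z, and alpha is
    the angle between sigma1 and the fracture-plane normal. From the Mohr
    circle, failure requires
      (dsig/2) sin(2a) = C + mu * (sigma3 + (dsig/2) * (1 + cos(2a))).
    """
    sigma3 = rho * g * depth_m
    two_a = math.radians(2.0 * alpha_deg)
    denom = math.sin(two_a) - mu * (1.0 + math.cos(two_a))
    return 2.0 * (cohesion_pa + mu * sigma3) / denom

print(f"{differential_stress_at_failure(1000.0) / 1e6:.1f} MPa")  # ~56 MPa at 1 km
```

Subtracting sigma3 from the critical stress obtained this way is exactly the "differential stress" the map vectors encode.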
NASA Astrophysics Data System (ADS)
Li, C. H.; Wu, L. C.; Chan, P. C.; Lin, M. L.
2016-12-01
The National Highway No. 3 - Tianliao III Bridge is located in the mudstone area of southwestern Taiwan and crosses the Chekualin fault. Since the bridge was opened to traffic, it has been repaired 11 times. To understand the interaction between thrust faulting and the bridge, a discrete element method-based software program, PFC, was applied to conduct a numerical analysis. A 3D model simulating the thrust faulting and the bridge was established, as shown in Fig. 1. In this conceptual model, the length and width were 50 and 10 m, respectively. Part of the box bottom was moveable, simulating the displacement of the thrust fault. The overburden stratum had a height of 5 m with a fault dip angle of 20 degrees (Fig. 2). The strata, from bottom up, were mudstone, clay, and sand. The uplift was 1 m, which was 20% of the stratum thickness. In accordance with the investigation, the position of the fault tip was set depending on the fault zone, and the bridge deformation was observed (Fig. 3). By setting "monitoring balls" in the numerical model to analyze bridge displacement, we determined that the bridge deck deflection increased as the uplift distance increased. Furthermore, the force caused by the loading of the bridge deck and the fault dislocation was determined to cause a downward deflection of the P1 and P2 bridge piers. Finally, the fault deflection trajectory of the P4 pier displayed the maximum displacement (Fig. 4). Similar behavior has been observed in both the numerical simulation and the field monitoring data. Using the discrete element model (PFC3D) to simulate the deformation behavior between thrust faulting and the bridge provided feedback for the design and improved planning of the bridge.
NASA Astrophysics Data System (ADS)
Persaud, P.; Ma, Y.; Stock, J. M.; Hole, J. A.; Fuis, G. S.; Han, L.
2016-12-01
Ongoing oblique slip at the Pacific-North America plate boundary in the Salton Trough produced the Imperial Valley. Deformation in this seismically active area is distributed across a complex network of exposed and buried faults resulting in a largely unmapped seismic hazard beneath the growing population centers of El Centro, Calexico and Mexicali. To better understand the shallow crustal structure in this region and the connectivity of faults and seismicity lineaments, we used data primarily from the Salton Seismic Imaging Project (SSIP) to construct a P-wave velocity profile to 15 km depth, and a 3-D velocity model down to 8 km depth including the Brawley Geothermal area. We obtained detailed images of a complex wedge-shaped basin at the southern end of the San Andreas Fault system. Two deep subbasins (VP <5.65 km/s) are located in the western part of the larger Imperial Valley basin, where seismicity trends and active faults play a significant role in shaping the basin edge. Our 3-D VP model reveals previously unrecognized NE-striking cross faults that are interacting with the dominant NW-striking faults to control deformation. New findings in our profile include localized regions of low VP (thickening of a 5.65-5.85 km/s layer) near faults or seismicity lineaments interpreted as possibly faulting-related. Our 3-D model and basement map reveal velocity highs associated with the geothermal areas in the eastern valley. The improved seismic velocity model from this study, and the identification of important unmapped faults or buried interfaces will help refine the seismic hazard for parts of Imperial County, California.
NASA Astrophysics Data System (ADS)
Bergh, Steffen; Sylvester, Arthur; Damte, Alula; Indrevær, Kjetil
2014-05-01
The San Andreas fault in southern California records only a few large-magnitude earthquakes in historic time, and the recent activity is confined primarily to irregular and discontinuous strike-slip and thrust fault strands at shallow depths of ~5-20 km. Despite this fact, slip along the San Andreas fault is calculated at c. 35 mm/yr, based on c. 160 km of total right-lateral displacement on the southern segment of the fault in the last c. 8 Ma. Field observations also reveal complex fault strands and multiple events of deformation. The presently diffuse high-magnitude crustal movements may be explained by the deformation being largely distributed along more gently dipping reverse faults in fold-thrust belts, in contrast to regions to the north, where deformation is less partitioned and localized to narrow strike-slip fault zones. In the Mecca Hills of the Salton trough, transpressional deformation of an uplifted segment of the San Andreas fault in the last ca. 4.0 My is expressed by very complex fault-oblique and fault-parallel (en echelon) folding, zones of uplift (fold-thrust belts), basement-involved reverse and strike-slip faults, and accompanying multiple and pervasive cataclasis and conjugate fracturing of Miocene to Pleistocene sedimentary strata. Our structural analysis of the Mecca Hills addresses the kinematic nature of the San Andreas fault and the mechanisms of uplift and strain-stress distribution along bent fault strands. The San Andreas fault and subsidiary faults define a wide spectrum of kinematic styles, from steep localized strike-slip faults, to moderately dipping faults related to oblique en echelon folds, to gently dipping faults distributed in fold-thrust belt domains. Therefore, the San Andreas fault is not a through-going, steep strike-slip crustal structure, which is commonly the basis for crustal modeling and earthquake rupture models. The fault trace was steep initially, but was later deformed and modified in multiple phases by oblique en echelon folding, renewed strike-slip movements, and contractile fold-thrust belt structures. Notably, the strike-slip movements on the San Andreas fault were transferred outward into the surrounding rocks as oblique-reverse faults to link up with the subsidiary Skeleton Canyon fault in the Mecca Hills. Instead of a classic flower-structure model for this transpressional uplift, the San Andreas fault strands were segmented into domains that record: (i) early strike-slip motion; (ii) later oblique shortening with distributed deformation (en echelon fold domains); (iii) localized fault-parallel deformation (strike-slip); and (iv) superposed out-of-sequence faulting and fault-normal, partitioned deformation (fold-thrust belt domains). These results bear on the question of whether spatial and temporal fold-fault branching and migration patterns evolving along non-vertical strike-slip fault segments can play a role in the localization of earthquakes along the San Andreas fault.
NASA Astrophysics Data System (ADS)
De Cristofaro, J. L.; Polet, J.
2017-12-01
The Hilton Creek Fault (HCF) is a range-bounding extensional fault that forms the eastern escarpment of California's Sierra Nevada mountain range, near the town of Mammoth Lakes. The fault is well mapped along its main trace to the south of the Long Valley Caldera (LVC), but the location and nature of its northern terminus are poorly constrained. The fault terminates as a series of left-stepping splays within the LVC, an area of active volcanism that most notably erupted 760 ka and currently experiences continuous geothermal activity and sporadic earthquake swarms. The timing of the most recent motion on these fault splays is debated, as is the threat posed by this section of the Hilton Creek Fault. The Third Uniform California Earthquake Rupture Forecast (UCERF3) model depicts the HCF as a single strand projecting up to 12 km into the LVC. However, Bailey (1989) and Hill and Montgomery-Brown (2015) have argued against this model, suggesting that extensional faulting within the caldera has been accommodated by the ongoing volcanic uplift, and thus that the intracaldera section of the HCF has not experienced motion since 760 ka. We intend to map the intracaldera fault splays and model their subsurface characteristics to better assess their rupture history and potential. This will be accomplished using high-resolution topography and subsurface geophysical methods, including ground-based magnetics. Preliminary work was performed using high-precision Nikon Nivo 5.C total stations to generate elevation profiles and a backpack-mounted GEM GS-19 proton precession magnetometer. The initial results reveal a correlation between magnetic anomalies and topography. East-west topographic profiles show terrace-like steps, sub-meter in height, which correlate with changes in the magnetic data. Continued study of the magnetic data using Oasis Montaj 3D modeling software is planned. Additionally, we intend to prepare a high-resolution terrain model using structure-from-motion techniques, derived from imagery acquired by an unmanned aerial vehicle and ground control points measured with real-time kinematic GPS receivers. This terrain model will be combined with subsurface geophysical data to form a comprehensive model of the subsurface.
Finite element models of earthquake cycles in mature strike-slip fault zones
NASA Astrophysics Data System (ADS)
Lynch, John Charles
The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, two main questions are posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a significant role in the variability of earthquake repeat times. Specifically, small perturbations in the model parameters can lead to results similar to such observed phenomena as earthquake clustering and disruptions of so-called "characteristic" earthquake cycles.
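The synchronization result is consistent with the short relaxation time implied by that viscosity: taking a nominal crustal shear modulus G of about 30 GPa (our assumption; the dissertation's value is not quoted in the abstract), the Maxwell time is

$$\tau_M = \frac{\eta}{G} \approx \frac{10^{18}\ \mathrm{Pa\,s}}{3\times 10^{10}\ \mathrm{Pa}} \approx 3\times 10^{7}\ \mathrm{s} \approx 1\ \mathrm{yr},$$

far shorter than typical earthquake repeat times, so postseismic stresses can be communicated between the faults well within a single cycle.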
Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang
2014-01-01
A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters: a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator, containing a PMDW generator, a correlation filtering analysis module, and a signal resampler, is devised to eliminate the Doppler effect embedded in the recorded acoustic signal of the bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified from the signal itself. The signal resampler is then applied to eliminate the Doppler effect using the identified parameters. With its ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to the diagnosis of locomotive roller bearing defects.
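The PMDW construction itself is not reproduced in the abstract; the final resampling step, which maps the wayside recording back onto the source's emission-time axis once the kinematic parameters have been identified, can be sketched as follows (the geometry and speed values are placeholders, not the paper's):

```python
import numpy as np

def remove_doppler(signal, fs, v, r, t0, c=340.0):
    """Resample a wayside recording onto the source's emission-time axis.

    v: source speed [m/s]; r: closest-approach distance [m];
    t0: closest-approach time [s]; c: speed of sound [m/s].
    These kinematic parameters are what the correlation-filtering stage
    identifies from the signal itself.
    """
    n = len(signal)
    t_rec = np.arange(n) / fs              # uniform reception times
    t_emit = np.arange(n) / fs             # desired uniform emission times
    dist = np.hypot(v * (t_emit - t0), r)  # source-microphone range at emission
    t_arrival = t_emit + dist / c          # when each emission is actually heard
    # Interpolating the recording at those arrival times undoes the time warp.
    return np.interp(t_arrival, t_rec, signal)

fs = 44100
t = np.arange(2 * fs) / fs
recorded = np.sin(2 * np.pi * 500 * t)     # stand-in for a real recording
corrected = remove_doppler(recorded, fs, v=30.0, r=2.0, t0=1.0)
```

After this step the bearing's characteristic defect frequencies are stationary again, which is what makes the transient model analysis applicable.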
NASA Astrophysics Data System (ADS)
Wilson, J.; Wetmore, P. H.; Malservisi, R.; Ferwerda, B. P.; Teran, O.
2012-12-01
We use recently collected slip vector and total offset data from the Agua Blanca fault (ABF) to constrain a pixel-translation digital elevation model (DEM) reconstruction of the slip history of this fault. The model was constructed using a Perl script that reads a DEM file (Easting, Northing, Elevation) and a configuration file with coordinates that define the boundary of each fault segment. A pixel translation vector is defined as a magnitude of lateral offset in an azimuthal direction. The program translates pixels north of the fault and prints their pre-faulting positions to a new DEM file that can be gridded and displayed. This analysis, in which multiple DEMs are created with different translation vectors, allows us to identify areas of transtension or transpression while seeing the topographic expression in those areas. The benefit of this technique, in contrast to a simple block model, is that the DEM gives us a valuable graphic which can be used to pose new research questions. We have found that many topographic features, i.e. valleys and ridges, correlate across the fault, which likely has implications for the age of the ABF and long-term landscape evolution rates, and potentially provides confirmation of total slip assessments. The ABF of northern Baja California, Mexico is an active, dextral strike-slip fault that transfers Pacific-North American plate boundary strain out of the Gulf of California and around the "Big Bend" of the San Andreas Fault. Total displacement on the ABF in the central and eastern parts of the fault is 10 +/- 2 km, based on offset Early Cretaceous features such as terrane boundaries and intrusive bodies (plutons and dike swarms). Where the fault bifurcates to the west, displacement on the northern strand (the northern Agua Blanca fault, or NABF) is constrained to 7 +/- 1 km. We have not yet identified piercing points on the southern strand, the Santo Tomas fault (STF), but displacement is inferred to be ~4 km, assuming that the sum of slip on the NABF and STF is approximately equal to that to the east. The ABF has varying kinematics along strike due to changes in the trend of the fault with respect to the nearly east-trending displacement vector of the Ensenada Block north of the fault relative to a stable Baja Microplate to the south. These kinematics include nearly pure strike slip in the central portion of the ABF, where the fault trends nearly E-W, and minor components of normal dip-slip motion on the NABF and the eastern sections of the fault, where the trends become more northerly. A pixel translation vector parallel to the trend of the central ABF segment (290 deg, 10.5 km) produces kinematics consistent with those described above. The block between the NABF and STF has a pixel translation vector parallel to the STF (291 deg, 3.5 km). We find these vectors are consistent with the kinematic variability of the fault system and realign several major drainages and ridges across the fault. This suggests these features formed prior to faulting, and they yield preferred values of offset: 10.5 km on the ABF, 7 km on the NABF, and 3.5 km on the STF. This model is consistent with the kinematic model proposed by Hamilton (1971), in which the ABF is a transform fault linking extensional regions of Valle San Felipe and the Continental Borderlands.
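The Perl script itself is not shown in the abstract; a minimal reconstruction of its pixel-translation step (ours, with a straight east-west fault trace standing in for the real per-segment boundary polygons) follows:

```python
import numpy as np

def restore_pre_fault(points, fault_y, azimuth_deg, offset_m):
    """Translate DEM points north of a (locally E-W) fault back toward their
    pre-faulting positions.

    points: (N, 3) array of Easting, Northing, Elevation.
    azimuth_deg, offset_m: the pixel translation vector, e.g. (290, 10500)
    for the central Agua Blanca segment. The sign convention (restoring by
    the reverse of the slip azimuth) is our assumption.
    """
    az = np.radians(azimuth_deg)
    d = -offset_m * np.array([np.sin(az), np.cos(az), 0.0])  # reverse the slip
    out = points.copy()
    north = points[:, 1] > fault_y
    out[north] += d
    return out

dem = np.array([[5000.0, 9000.0, 250.0],   # a point north of the fault
                [5000.0, 1000.0, 180.0]])  # a point south of it
print(restore_pre_fault(dem, fault_y=5000.0, azimuth_deg=290.0, offset_m=10500.0))
```

Gridding the returned points reproduces the "pre-faulting" DEM, against which drainages and ridges can be checked for realignment.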
NASA Astrophysics Data System (ADS)
Bistacchi, A.; Mittempergher, S.; Di Toro, G.; Smith, S. A. F.; Garofalo, P. S.
2017-12-01
The 600 m-thick, strike-slip Gole Larghe Fault Zone (GLFZ) experienced several hundred seismic slip events at c. 8 km depth, well documented by numerous pseudotachylytes; it was then exhumed and is now exposed in beautiful, very continuous outcrops. The fault zone was also characterized by hydrous fluid flow during the seismic cycle, as demonstrated by alteration halos and the precipitation of hydrothermal minerals in veins and cataclasites. We have characterized the GLFZ with > 2 km of scanlines and semi-automatic mapping of faults and fractures on several photogrammetric 3D Digital Outcrop Models (3D DOMs). This allowed us to obtain 3D Discrete Fracture Network (DFN) models, based on robust probability density functions for the parameters of the fault and fracture sets, and to simulate the fault zone's hydraulic properties. In addition, the correlation between evidence of fluid flow and the fault/fracture network parameters has been studied with a geostatistical approach, allowing the generation of more realistic time-varying permeability models of the fault zone. Based on this dataset, we have developed an FEM hydraulic model of the GLFZ spanning some tens of years, covering one seismic event and a postseismic period. The highest permeability is attained in the syn- to early post-seismic period, when fractures are (re)opened by off-fault deformation; permeability then decreases in the postseismic period due to fracture sealing. The flow model yields a flow pattern consistent with the observed alteration/mineralization pattern and a marked channelling of fluid flow in the inner part of the fault zone, due to permeability anisotropy related to the spatial arrangement of the different fracture sets. Among possible seismological applications of our study, we will discuss the possibility of evaluating the coseismic fracture intensity due to off-fault damage, and the heterogeneity and evolution of mechanical parameters due to fluid-rock interaction.
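The DFN-to-permeability step is left implicit in the abstract; one common choice (not necessarily the authors') treats each fracture as a parallel-plate conduit of hydraulic aperture b, for which the cubic law gives

$$k_{\mathrm{frac}} = \frac{b^2}{12}, \qquad T_f = \frac{\rho g\, b^3}{12\,\mu},$$

with T_f the fracture transmissivity, rho and mu the fluid density and viscosity. Under this assumption, the aperture and intensity statistics measured on the scanlines control the upscaled, anisotropic permeability of the simulated fault zone, and sealing enters simply as a reduction of b through time.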
NASA Astrophysics Data System (ADS)
Hoprich, M.; Decker, K.; Grasemann, B.; Sokoutis, D.; Willingshofer, E.
2009-04-01
Previous analog modeling of pull-apart basins has dealt with different sidestep geometries, the symmetry and ratio between the velocities of the moving blocks, the ratio between ductile base and model thickness, the ratio between fault stepover and model thickness, and their influence on basin evolution. In all these models the pull-apart basin is deformed over an even detachment. The Vienna basin, however, is considered a classical thin-skinned pull-apart basin with a rather peculiar basement structure. Deformation and basin evolution are believed to be limited to the brittle upper crust above the Alpine-Carpathian floor thrust. The latter is not a planar detachment surface, but has a ramp-shaped topography draping the underlying former passive continental margin. In order to estimate the effects of this special geometry, nine experiments were conducted and the resulting structures were compared with the Vienna basin. The key parameters for the models (fault and basin geometry, detachment depth and topography) were inferred from a 3D GoCad model of the natural Vienna basin, compiled from seismic data, wells, and geological cross sections. The experiments were scaled 1:100,000 ("Ramberg scaling" for brittle rheology) and built of quartz sand (300 µm grain size). An average depth of 6 km (6 cm) was calculated for the basal detachment, distances between the bounding strike-slip faults of 40 km (40 cm) were used, and a finite length of the natural basin of 200 km was estimated (initial model length: 100 cm). The following parameters were varied through the experimental series: (1) syntectonic sedimentation; (2) the stepover angle between the bounding strike-slip faults and the basal velocity discontinuity; (3) movement of one or both fault blocks (producing an asymmetrical or symmetrical basin); (4) inclination of the basal detachment surface by 5 degrees; (5) installation of two- and three-ramp systems at the detachment; (6) simulation of a ductile detachment by a 0.4 cm thick PDMS layer at the basin floor. The surface of the model was photographed after each deformation increment. Serial cross sections cut through the models in their final state every 4 cm were also photographed and interpreted. The formation of en-echelon normal faults with relay ramps is observed in all models. These faults are arranged at an acute angle to the basin borders, according to a Riedel geometry. In the case of an asymmetric basin, they emerge within the non-moving fault block. Substantial differences between the models are the number, spacing, and angle of these Riedel faults, the length of the bounding strike-slip faults, and the cross-basin symmetry. A flat detachment produces straight fault traces, whereas inclined detachments (or inclined ramps) lead to "bending" of the normal faults, rollover, and growth strata thickening towards the faults. The positions and sizes of depocenters also vary, with depocenters preferentially developing above ramp-flat transitions. Depocenter thicknesses increase with ramp heights. A similar relation apparently exists in the natural Vienna basin, which shows ramp-like structures in the detachment just underneath large faults like the Steinberg normal fault and the associated depocenters. The three-ramp model also reveals segmentation of the basin above the lowermost ramp. The evolving structure is comparable to the Wiener Neustadt sub-basin in the southern part of the Vienna basin, which is underlain by a topographic high of the detachment.
Cross sections through the ductile model show a strong disintegration into a horst-and-graben basin. The thin silicone putty base influences the overlying strata in such a way that the basin, unlike the "dry" sand models, becomes very flat and shallow. The top view shows an irregular basin shape and none of the rhombohedral geometry that characterises the Vienna basin. The ductile base also leads to a symmetrical distribution of deformation on both fault blocks, even though only one fault block is moved. The stepover angle, the influence of gravity in a ramped or inclined system, and the strain accommodation by a viscous silicone layer can be summarized as the factors controlling the characteristics of the models.
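As a quick restatement of the scaling arithmetic above (a worked example using only the figures already quoted): a 1:100,000 ratio gives a length scale factor of

```latex
L^{*} = \frac{L_{\mathrm{model}}}{L_{\mathrm{nature}}} = 10^{-5}:
\qquad 6\ \mathrm{km}\times L^{*} = 6\ \mathrm{cm},
\qquad 40\ \mathrm{km}\times L^{*} = 40\ \mathrm{cm}.
```

The 100 cm initial model length is half of the scaled 200 km finite basin length, presumably reflecting lengthening of the basin as the blocks move apart during the experiment.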
NASA Astrophysics Data System (ADS)
Lin, Y. K.; Ke, M. C.; Ke, S. S.
2016-12-01
A fault is commonly considered active if it has moved one or more times in the last 10,000 years and is likely to generate another earthquake in the future. The relationship between fault reactivation and surface deformation has been a concern since the 1999 Chi-Chi earthquake (M=7.2). Investigations of well-known disastrous earthquakes in recent years indicate that surface deformation is controlled by the 3D geometry of the fault. Because surface deformation can severely damage critical infrastructure (buildings, roads, power, water and gas lines, etc.), pre-disaster risk assessment based on a 3D active fault model is essential to reduce the economic losses, injuries and deaths caused by large earthquakes. The approaches used to build the 3D active fault model can be categorized as (1) field investigation, (2) digitization of profile data, and (3) 3D model construction. In this research, we first tracked the location of the fault scarp in the field, then combined balanced seismic profiles and historical earthquake data to build an underground fault plane model using the SKUA-GOCAD program. Finally, we compared results from a trishear model (written by Richard W. Allmendinger, 2012) and the PFC-3D program (Itasca) to calculate the extent of the deformation area. From analysis of the surface deformation produced by the Hsin-Chu Fault, we conclude that the damage zone approaches 68 286 m for a magnitude of 6.43 and an offset of 0.6 m; on that basis we estimate the casualties and building damage caused by an M=6.43 earthquake in the Hsin-Chu area, Taiwan. In the future, for accurate application to earthquake disaster prevention, we must further consider groundwater effects and the soil-structure interaction induced by faulting.
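For readers unfamiliar with the trishear kinematics referenced above, here is a minimal sketch of the symmetric, linear (s = 1) trishear velocity field of Zehnder & Allmendinger (2000); the apical half-angle phi and slip rate v0 are illustrative placeholders, not values from this study.

```python
import numpy as np

def trishear_velocity(x, y, v0=1.0, phi=np.radians(30)):
    """Symmetric linear (s=1) trishear velocity field (Zehnder &
    Allmendinger, 2000) in fault-tip coordinates: x along the fault,
    y perpendicular to it, fault tip at the origin (x > 0 ahead of it)."""
    m = np.tan(phi)
    if y >= x * m:          # hanging wall: rigid translation at the slip rate
        return v0, 0.0
    if y <= -x * m:         # footwall: fixed
        return 0.0, 0.0
    # inside the triangular zone velocities interpolate smoothly between the
    # two walls and satisfy incompressibility (div v = 0)
    vx = 0.5 * v0 * (y / (x * m) + 1.0)
    vy = 0.25 * v0 * m * ((y / (x * m)) ** 2 - 1.0)
    return vx, vy

# velocities across the zone, 1 km ahead of the fault tip
for yy in (-600.0, -300.0, 0.0, 300.0, 600.0):
    print(yy, trishear_velocity(1000.0, yy))
```

Integrating particle positions through this field step by step is what propagates the fold; varying phi and the propagation-to-slip ratio changes the predicted surface deformation zone.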
Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing
2012-12-14
Zaharia, Matei; Das, Tathagata; Li, Haoyuan; Hunter, Timothy; Shenker, Scott; Stoica, Ion
… However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of …
New methods for the condition monitoring of level crossings
NASA Astrophysics Data System (ADS)
García Márquez, Fausto Pedro; Pedregal, Diego J.; Roberts, Clive
2015-04-01
Level crossings represent a high risk for railway systems. This paper demonstrates the potential to improve maintenance management through the use of intelligent condition monitoring coupled with reliability centred maintenance (RCM). RCM combines advanced electronics, control, computing and communication technologies to address the multiple objectives of cost effectiveness, improved quality, reliability and service. RCM collects digital and analogue signals utilising distributed transducers connected to either point-to-point or digital bus communication links. Assets in many industries use data logging capable of providing post-failure diagnostic support, but to date little use has been made of combined qualitative and quantitative fault detection techniques. The research takes the hydraulic railway level crossing barrier (LCB) system as a case study and develops a generic strategy for failure analysis, data acquisition and incipient fault detection. For each barrier, the hydraulic characteristics, the motor's current and voltage, the hydraulic pressure and the barrier's position are acquired. In order to acquire the data at a central point efficiently and without errors, a distributed single-cable Fieldbus is utilised. This allows the connection of all sensors through the project's proprietary communication nodes to a high-speed bus. The condition monitoring system developed in this paper detects faults by comparing what can be considered a 'normal' or 'expected' shape of a signal with the actual shape observed as new data become available. ARIMA (autoregressive integrated moving average) models were employed for detecting faults. The Jarque-Bera and Ljung-Box statistical tests were used to validate the model.
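To illustrate the detection logic described above (an independent sketch, not the authors' implementation), the following fits an ARIMA model to a reference "healthy" signal, checks its residuals with the Ljung-Box and Jarque-Bera tests, and flags new data whose prediction errors depart from the expected shape. The model order, the synthetic signals and the 3-sigma threshold are placeholder assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
healthy = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

# Fit the reference model on healthy barrier data (order is a placeholder).
model = ARIMA(healthy, order=(2, 0, 1)).fit()

# Model adequacy checks: residuals should be uncorrelated (Ljung-Box)
# and approximately normal (Jarque-Bera).
lb = acorr_ljungbox(model.resid, lags=[10])
jb_stat, jb_p = stats.jarque_bera(model.resid)
print("Ljung-Box p:", float(lb["lb_pvalue"].iloc[0]), "Jarque-Bera p:", jb_p)

# Fault detection: flag new data whose prediction errors deviate strongly
# from the 'expected' signal shape learned on healthy data.
new_data = healthy + 0.5 * (np.arange(500) > 400)   # synthetic drift fault
errors = new_data - model.predict(start=0, end=499)
print("fault suspected:", np.abs(errors[-50:]).mean() > 3 * model.resid.std())
```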
NASA Astrophysics Data System (ADS)
Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio
2016-04-01
Fault-related folding kinematic models are widely used to explain how crustal shortening is accommodated. These models, however, include simplifications such as the assumption of a constant fault growth rate. This value is not always constant even in isotropic materials, and it is even more variable in naturally anisotropic geological systems; such simplifications can therefore lead to incorrect interpretations of reality. In this study, we use analogue models to evaluate how thin mechanical discontinuities, such as bedding or thin weak layers, influence the propagation of reverse faults and related folds. The experiments are performed with two different settings to simulate initially blind master faults dipping at 30° and 45°. The 30° dip represents one of the Andersonian conjugate faults, and a 45° dip is very frequent in positive reactivation of normal faults. The experimental apparatus consists of a clay layer placed above two plates: one plate, the footwall, is fixed; the other, the hanging wall, is mobile. Motor-controlled sliding of the hanging wall plate along an inclined plane reproduces the reverse fault movement. We ran thirty-six experiments: eighteen with a dip of 30° and eighteen with a dip of 45°. For each dip-angle setting, we initially ran isotropic experiments that serve as a reference. We then ran the other experiments with one or two discontinuities (horizontal precuts made in the clay layer). We monitored the experiments by collecting side photographs every 1.0 mm of displacement on the master fault. These images were analyzed with the PIVlab software, a tool based on the Digital Image Correlation method. With the "displacement field analysis" (one of the PIVlab tools) we evaluated the variation of the trishear zone shape and how the master-fault tip and newly formed faults propagate into the clay medium. With the "strain distribution analysis", we observed the amount of on-fault and off-fault deformation with respect to the faulting pattern and its evolution. In addition, using the MOVE software, we extracted the positions of fault tips and folds every 5 mm of displacement on the master fault. Analyzing these positions in all of the experiments, we found that the growth rate of the faults and the related fold shape vary depending on the number of discontinuities in the clay medium. Other results can be summarized as follows: 1) the fault growth rate is not constant, but varies especially while the new faults interact with the precuts; 2) the new faults tend to crosscut the discontinuities when the angle between them is approximately 90°; 3) the trishear zone changes its shape during the experiments, especially when the main fault interacts with the discontinuities.
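The Digital Image Correlation step used above can be illustrated with a minimal patch-tracking sketch. PIVlab itself is a MATLAB tool; this is an independent toy example of the core operation (locating the cross-correlation peak of an interrogation window between two frames), with window size, search radius and images as assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def patch_displacement(frame_a, frame_b, y0, x0, size=32, search=8):
    """Estimate the displacement of one interrogation window between two
    frames by locating the peak of the cross-correlation surface -- the
    core operation behind PIV/DIC displacement-field analysis."""
    patch = frame_a[y0:y0 + size, x0:x0 + size]
    region = frame_b[y0 - search:y0 + size + search,
                     x0 - search:x0 + size + search]
    # zero-mean both windows so the correlation peak reflects texture match
    corr = correlate2d(region - region.mean(), patch - patch.mean(),
                       mode="valid")
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy - search, dx - search

# synthetic test: shift a random-texture image by (2, 3) pixels
rng = np.random.default_rng(1)
a = rng.random((128, 128))
b = np.roll(np.roll(a, 2, axis=0), 3, axis=1)
print(patch_displacement(a, b, 48, 48))   # -> (2, 3)
```

Repeating this over a grid of windows yields the displacement field; differencing neighbouring vectors gives the strain distribution.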
Transient cnoidal waves explain the formation and geometry of fault damage zones
NASA Astrophysics Data System (ADS)
Veveakis, Manolis; Schrank, Christoph
2017-04-01
The spatial footprint of a brittle fault is usually dominated by a wide area of deformation bands and fractures surrounding a narrow, highly deformed fault core. This diffuse damage zone relates to the deformation history of a fault, including its seismicity, and has a significant impact on flow and mechanical properties of faulted rock. Here, we propose a new mechanical model for damage-zone formation. It builds on a novel mathematical theory postulating fundamental material instabilities in solids with internal mass transfer associated with volumetric deformation due to elastoviscoplastic p-waves termed cnoidal waves. We show that transient cnoidal waves triggered by fault slip events can explain the characteristic distribution and extent of deformation bands and fractures within natural fault damage zones. Our model suggests that an overpressure wave propagating away from the slipping fault and the material properties of the host rock control damage-zone geometry. Hence, cnoidal-wave theory may open a new chapter for predicting seismicity, material and geometrical properties as well as the location of brittle faults.
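For orientation only: the generic cnoidal waveform, the periodic travelling-wave solution of KdV-type equations from which these waves take their name, is built from the squared Jacobi elliptic function cn; the specific elastoviscoplastic solution of the paper differs in detail.

```latex
u(x,t) = u_0 + A\,\mathrm{cn}^2\!\left(\frac{x-ct}{\lambda}\,\middle|\,m\right),
\qquad 0 \le m \le 1,
```

with m → 0 recovering sinusoidal waves and m → 1 the solitary-wave limit, which is why a single family of solutions can span both diffuse periodic damage and localized pulses.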
Palaeostress perturbations near the El Castillo de las Guardas fault (SW Iberian Massif)
NASA Astrophysics Data System (ADS)
García-Navarro, Encarnación; Fernández, Carlos
2010-05-01
Use of stress inversion methods on faults measured at 33 sites located in the northwestern part of the South Portuguese Zone (Variscan Iberian Massif), together with analysis of basic dyke attitudes in the same region, has revealed a prominent perturbation of the stress trajectories around some large, crustal-scale faults, like the El Castillo de las Guardas fault. The results are compared with the predictions of theoretical models of palaeostress deviations near master faults. According to this comparison, the El Castillo de las Guardas fault, an old structure that probably reversed its slip sense several times, can be considered a sinistral strike-slip fault during the Moscovian. These results also point out the main shortcomings that still hinder a rigorous quantitative use of theoretical models of stress perturbations around major faults: the spatial variation in the parameters governing the brittle behaviour of the continental crust, and the possibility of oblique slip along outcrop-scale faults in regions subjected to general, non-plane strain.
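Stress inversion of fault-slip data of the kind used above rests on the Wallace-Bott hypothesis: slip on a plane parallels the resolved shear traction of the regional stress tensor. A minimal forward-model sketch follows; the tensor and fault orientation are illustrative, not the paper's data.

```python
import numpy as np

def resolved_shear_direction(sigma, n):
    """Wallace-Bott: the predicted slip direction on a plane with unit
    normal n is the direction of the shear component of the traction
    t = sigma . n acting on that plane."""
    n = n / np.linalg.norm(n)
    t = sigma @ n                      # traction vector on the plane
    t_shear = t - (t @ n) * n          # remove the normal component
    return t_shear / np.linalg.norm(t_shear)

# illustrative regional stress tensor (principal axes along x, y, z;
# compression positive) and the normal of a steeply dipping fault plane
sigma = np.diag([50.0, 30.0, 20.0])    # MPa, sigma1 > sigma2 > sigma3
n = np.array([np.sin(np.radians(70)), 0.0, np.cos(np.radians(70))])
print(resolved_shear_direction(sigma, n))
```

Inversion then searches for the reduced stress tensor that minimizes the misfit between directions predicted this way and the striae measured at each site, so systematic misfits near a master fault reveal the stress perturbation.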
NASA Astrophysics Data System (ADS)
Jiang, Zhongshan; Yuan, Linguo; Huang, Dingfa; Yang, Zhongrong; Chen, Weifeng
2017-12-01
We reconstruct two fault models associated with the 2008 Mw 7.9 Wenchuan earthquake: one is a listric fault joining a shallowing sub-horizontal detachment below ∼20 km depth (fault model one, FM1), and the other is a group of more steeply dipping planes extending down to the Moho at ∼60 km depth (fault model two, FM2). Through comparative analysis of the coseismic inversion results, we confirm that the coseismic models are insensitive to these two fault geometries. We therefore turn our attention to the postseismic deformation obtained from GPS observations, which can not only impose effective constraints on the fault geometry but also, more importantly, provide valuable insights into the postseismic afterslip. FM1 performs outstandingly in the near, mid and far field, whether or not the viscoelastic influence is considered. FM2 performs more poorly, especially in the data-model consistency in the near field, mainly owing to the trade-off across the sharp contrast of the postseismic deformation on the two sides of the Longmen Shan fault zone. Accordingly, we propose a listric fault joining a shallowing sub-horizontal detachment as the optimal fault geometry for the Wenchuan earthquake. Based on the inferred optimal fault geometry, we analyse two characteristic postseismic deformation phenomena that differ from the coseismic patterns: (1) the opposite sense of postseismic deformation between the Beichuan fault (BCF) and the Pengguan fault (PGF), and (2) the slightly left-lateral strike-slip motions in the southwestern Longmen Shan range. The former is attributed to the local left-lateral strike-slip and normal dip-slip components on the shallow BCF. The latter places constraints on the afterslip on the southwestern BCF and reproduces three afterslip concentration areas with slightly left-lateral strike-slip motions. The Coulomb Failure Stress (CFS) decrease of ∼0.322 kPa at the hypocentre of the Lushan earthquake, derived from the afterslip with the viscoelastic influence removed, indicates that the postseismic left-lateral strike-slip and normal dip-slip motions may have a mitigating effect on the fault loading in the southwestern Longmen Shan range. Nevertheless, it is much smaller than the total CFS increase (∼8.368 kPa) derived from the coseismic and viscoelastic deformations.
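The CFS changes quoted above follow the standard definition ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change in the receiver fault's slip direction, Δσn the normal stress change (tension positive), and μ′ the effective friction. A minimal sketch of resolving a stress-change tensor onto a receiver fault; all numbers are illustrative, not values from this study.

```python
import numpy as np

def delta_cfs(dsigma, n, s, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault with unit normal n
    and unit slip direction s: dCFS = d_tau + mu' * d_sigma_n
    (normal stress tension-positive, so unclamping raises dCFS)."""
    n, s = n / np.linalg.norm(n), s / np.linalg.norm(s)
    t = dsigma @ n                 # traction change on the plane
    d_tau = s @ t                  # shear stress change along slip direction
    d_sn = n @ t                   # normal stress change (tension positive)
    return d_tau + mu_eff * d_sn

# illustrative symmetric stress-change tensor (Pa) and receiver geometry;
# the slip vector s lies in the plane (s . n = 0)
dsigma = np.array([[ 1e3, -2e2, 0.0],
                   [-2e2, -5e2, 1e2],
                   [ 0.0,  1e2, 2e2]])
n = np.array([0.0, np.cos(np.radians(40)), np.sin(np.radians(40))])
s = np.array([1.0, 0.0, 0.0])      # strike-slip receiver
print(delta_cfs(dsigma, n, s), "Pa")
```

A positive ΔCFS moves the receiver fault toward failure; the small negative afterslip-derived value versus the larger coseismic-plus-viscoelastic increase is what the abstract's comparison expresses.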
A design approach for ultrareliable real-time systems
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Harper, Richard E.; Alger, Linda S.
1991-01-01
A design approach developed over the past few years to formalize redundancy management and validation is described. Redundant elements are partitioned into individual fault-containment regions (FCRs). An FCR is a collection of components that operates correctly regardless of any arbitrary logical or electrical fault outside the region. Conversely, a fault in an FCR cannot cause hardware outside the region to fail. The outputs of all channels are required to agree bit-for-bit under no-fault conditions (exact bitwise consensus). Synchronization, input agreement, and input validity conditions are discussed. The Advanced Information Processing System (AIPS), which is a fault-tolerant distributed architecture based on this approach, is described. A brief overview of recent applications of these systems and current research is presented.
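As a toy illustration of exact bitwise consensus across redundant channels (not the AIPS implementation), a bit-for-bit majority voter over three fault-containment regions masks any single faulty channel's output:

```python
def bitwise_majority(a: int, b: int, c: int) -> int:
    """Bit-for-bit majority of three redundant channel outputs: each output
    bit is 1 iff at least two channels agree on 1. A single arbitrarily
    faulty channel (one fault-containment region) cannot alter the result."""
    return (a & b) | (a & c) | (b & c)

good = 0b1011_0110
faulty = 0b0000_1111            # arbitrary corruption confined to one FCR
assert bitwise_majority(good, good, faulty) == good
print(bin(bitwise_majority(good, good, faulty)))
```

Such exact voting is only meaningful because synchronization and input agreement guarantee that fault-free channels compute on identical inputs and therefore produce bit-identical outputs.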
Fault-tolerant continuous flow systems modelling
NASA Astrophysics Data System (ADS)
Tolbi, B.; Tebbikh, H.; Alla, H.
2017-01-01
This paper presents a structural modelling of faults with hybrid Petri nets (HPNs) for the analysis of a particular class of hybrid dynamic systems, continuous flow systems. HPNs are first used for the behavioural description of continuous flow systems without faults. Fault modelling is then handled by a structural method, without having to rebuild the model from scratch. A hierarchical translation method is given that derives a hybrid automaton (HA) from an elementary HPN. This translation preserves the behavioural semantics (timed bisimilarity) and reflects the temporal behaviour by giving each model a semantics in terms of timed transition systems. The modelling power of HPNs and the analysis capability of HA are thus combined. A simple example is used to illustrate the ideas.
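To make the modelling target concrete, here is a toy hybrid automaton for a continuous flow element (a tank with nominal fill/drain modes and a leak fault mode). The structure, locations carrying continuous flow rates plus guarded discrete jumps, is generic, not the paper's translation output; all names and rates are assumptions.

```python
# A toy hybrid automaton for a continuous flow element: each location
# carries a continuous flow rate (dlevel/dt); guards trigger discrete
# jumps. A fault is simply an extra location entered by an uncontrolled
# event, mirroring the structural idea of adding faults without rebuilding
# the nominal model.
locations = {
    "nominal":  {"rate": +2.0, "jumps": [(lambda lv: lv >= 10.0, "draining")]},
    "draining": {"rate": -1.0, "jumps": [(lambda lv: lv <= 2.0, "nominal")]},
    "leak":     {"rate": -3.0, "jumps": []},   # fault location
}

def simulate(loc="nominal", level=5.0, dt=0.1, t_fault=4.0, t_end=8.0):
    t = 0.0
    while t < t_end:
        if t >= t_fault and loc != "leak":     # uncontrolled fault event
            loc = "leak"
        level += locations[loc]["rate"] * dt   # continuous evolution
        for guard, target in locations[loc]["jumps"]:
            if guard(level):                   # discrete transition
                loc = target
        t += dt
    return loc, round(level, 2)

print(simulate())
```

Analysis tools for hybrid automata can then explore all interleavings of such continuous flows and discrete (including fault) events, which is the payoff of the translation.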
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from limited accuracy and the long learning time needed to build the training database. This work inversely builds a base performance model of a turboprop engine intended for a high-altitude UAV using measured performance data, and proposes a fault diagnostic system that combines the base performance model with artificial intelligence methods such as fuzzy logic and neural networks. Each real engine's performance model, named the base performance model because it can simulate a new engine's performance, is built inversely from its performance test data. Condition monitoring of each engine can therefore be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained on a fault learning database generated from the developed base performance model. The FFBP (feed-forward back-propagation) algorithm is used to learn the measured performance data of the faulted components. For ease of use, the proposed diagnostic program is implemented as a MATLAB GUI.
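As a generic illustration of the FFBP step, here is a one-hidden-layer network trained by back-propagation in NumPy, a stand-in for the MATLAB program described above; the architecture, learning rate and synthetic data (performance deviations mapped to a fault-severity target) are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# placeholder training set: rows = performance deltas (e.g. deviations in
# shaft speed, gas temperature, fuel flow), target = fault severity in [0,1]
X = rng.random((64, 3))
y = 0.5 * X[:, :1] + 0.3 * X[:, 1:2] + 0.2 * X[:, 2:3]

W1, b1 = rng.standard_normal((3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):                    # feed-forward back-propagation
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    out = sigmoid(h @ W2 + b2)               # output layer
    err = out - y
    # backpropagate gradients of the mean squared error
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(0)
    W1 -= 0.5 * X.T @ d_h / len(X);  b1 -= 0.5 * d_h.mean(0)

print("final MSE:", float((err ** 2).mean()))
```

In the system described above, the training pairs would instead come from the base performance model run with injected component faults, so the network learns to map measured deviations back to fault magnitudes.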
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Moschas, Fanis; Stiros, Stathis
2017-04-01
Finite fault models (FFM) are presented for the two main shocks of the 2014 Cephalonia (Ionian Sea, Greece) seismic sequence (M ∼6.0), which produced extreme peak ground accelerations (∼0.7g) at the west edge of the Aegean Arc, an area in which the poor coverage by seismological and GPS/InSAR data makes FFM a real challenge. Modeling was based on co-seismic GPS data and on the recently introduced TOPological INVersion algorithm. The latter is a novel uniform grid-search technique in n-dimensional spaces; it is based on the concept of stochastic variables and can identify multiple unconstrained ("free") solutions in a specified search space. The derived FFMs for the 2014 earthquakes correspond to an essentially strike-slip fault and to part of a shallow thrust, the surface projections of both of which run roughly along the west coast of Cephalonia. Both faults correlate with pre-existing faults. The 2014 faults, in combination with the faults of the 2003 and 2015 Leucas earthquakes farther NE, form a string of oblique-slip, partly overlapping fault segments with variable geometric and kinematic characteristics along the NW edge of the Aegean Arc. This composite fault, usually regarded as the Cephalonia Transform Fault, accommodates shear along this part of the Arc. Because of the highly fragmented crust, dominated by major thrusts in this area, fault activity is associated with ∼20 km long segments and magnitude 6.0-6.5 earthquakes recurring at intervals of a few seconds to 10 years.
Modelling Fault Zone Evolution: Implications for fluid flow.
NASA Astrophysics Data System (ADS)
Moir, H.; Lunn, R. J.; Shipton, Z. K.
2009-04-01
Flow simulation models are of major interest to many industries including hydrocarbon production, nuclear waste disposal, carbon dioxide sequestration and mining. One of the major uncertainties in these models is in predicting the permeability of faults, principally in the detailed structure of the fault zone. Studying the detailed structure of a fault zone is difficult because of the inaccessible nature of sub-surface faults and also because of their highly complex nature; fault zones show a high degree of spatial and temporal heterogeneity, i.e. the properties of a fault change along its length and also with time. It is well understood that faults influence fluid flow characteristics. They may act as a conduit or a barrier, or even as both, by blocking flow across the fault while promoting flow along it. Controls on fault hydraulic properties include cementation, stress field orientation, fault zone components and fault zone geometry. Within brittle rocks, such as granite, fracture networks are limited but provide the dominant pathway for flow within this rock type. Research at the EU's Soultz-sous-Forêts Hot Dry Rock test site [Evans et al., 2005] showed that 95% of flow into the borehole was associated with a single fault zone at 3490 m depth, and that 10 open fractures account for the majority of flow within the zone. These data underline the critical role of faults in deep flow systems and the importance of achieving a predictive understanding of fault hydraulic properties. To improve estimates of fault zone permeability, it is important to understand the underlying hydro-mechanical processes of fault zone formation. In this research, we explore the spatial and temporal evolution of fault zones in brittle rock through development and application of a 2D hydro-mechanical finite element model, MOPEDZ. The authors have previously presented numerical simulations of the development of fault linkage structures from two or three pre-existing joints, the results of which compare well with features observed in mapped exposures. For these simple simulations from a small number of pre-existing joints, the fault zone evolves in a predictable way: fault linkage is governed by three key factors: the ratio of σ1 (maximum compressive stress) to σ3 (minimum compressive stress), the original geometry of the pre-existing structures (contractional vs. dilational geometries), and the orientation of the principal stress direction (σ1) with respect to the pre-existing structures. In this paper we present numerical simulations of the temporal and spatial evolution of fault linkage structures from many pre-existing joints. The initial locations, sizes and orientations of these joints are based on field observations of cooling joints in granite from the Sierra Nevada. We show that the constantly evolving geometry and local stress field perturbations contribute significantly to fault zone evolution. The locations and orientations of linkage structures previously predicted by the simple simulations are consistent with the predicted geometries in the more complex fault zones; however, the exact location at which individual structures form is not easily predicted. Markedly different fault zone geometries are predicted when the pre-existing joints are rotated with respect to the maximum compressive stress. In particular, fault surfaces range from smooth linear structures to complex 'stepped' fault zone geometries. These geometries have a significant effect on simulations of along-fault and across-fault flow.
Neotectonics of Asia: Thin-shell finite-element models with faults
NASA Technical Reports Server (NTRS)
Kong, Xianghong; Bird, Peter
1994-01-01
As India pushed into and beneath the south margin of Asia in Cenozoic time, it added a great volume of crust, which may have been (1) emplaced locally beneath Tibet, (2) distributed as regional crustal thickening of Asia, (3) converted to mantle eclogite by high-pressure metamorphism, or (4) extruded eastward to increase the area of Asia. The amount of eastward extrusion is especially controversial: plane-stress computer models of finite strain in a continuum lithosphere show minimal escape, while laboratory and theoretical plane-strain models of finite strain in a faulted lithosphere show escape as the dominant mode. We suggest computing the present (or neo-) tectonics by use of the known fault network and available data on fault activity, geodesy, and stress to select the best model. We apply a new thin-shell method which can represent a faulted lithosphere of realistic rheology on a sphere, and which provides predictions of present velocities, fault slip rates, and stresses for various trial rheologies and boundary conditions. To minimize artificial boundaries, the models include all of Asia east of 40 deg E and span 100 deg on the globe. The primary unknowns are the friction coefficient of faults within Asia and the amounts of shear traction applied to Asia in the Himalayan and oceanic subduction zones at its margins. Data on Quaternary fault activity prove to be most useful in rating the models. Best results are obtained with a very low fault friction of 0.085. This major heterogeneity shows that unfaulted continuum models cannot be expected to give accurate simulations of the orogeny. But even with such weak faults, only a fraction of the internal deformation is expressed as fault slip; this means that rigid microplate models cannot represent the kinematics either. A universal feature of the better models is that eastern China and southeast Asia flow rapidly eastward with respect to Siberia. The rate of escape is very sensitive to the level of shear traction in the Pacific subduction zones, which is below 6 MPa. Because this flow occurs across a wide range of latitudes, the net eastward escape is greater than the rate of crustal addition in the Himalaya. The crustal budget is balanced by extension and thinning, primarily within the Tibetan plateau and the Baikal rift. The low level of deviatoric stresses in the best models suggests that topographic stress plays a major role in the orogeny; thus we must expect that different topography in the past may have been linked with fundamentally different modes of continental collision.
NASA Astrophysics Data System (ADS)
Attal, M.; Tucker, G.; Whittaker, A.; Cowie, P.; Roberts, G.
2005-12-01
River systems constitute some of the most efficient agents that shape terrestrial landscapes. Fluvial incision rates govern landscape evolution but, owing to the variety of processes involved and the difficulty of quantifying them in the field, there is no "universal theory" describing the way rivers incise into bedrock. The last decades have seen the birth of numerous fluvial incision laws associated with models that assign different roles to hydrodynamic variables and to sediments. In order to discriminate between models and constrain their parameters, the transient response of natural river systems to a disturbance (tectonic or climatic) can be used. Indeed, the different models predict different kinds of transient response, whereas most models predict a similar power-law relationship between slope and drainage area at equilibrium. To this end, a coupled field and modeling study is in progress. The field area is the Central Apennines, which are subject to active faulting associated with a regional extensional regime. Fault initiation occurred 3 My ago, associated with throw rates of 0.3 +/- 0.2 mm/yr. Due to fault interaction and linkage, the throw rate on the faults located near the center of the fault system increased dramatically 0.7 My ago (up to 2 mm/yr), whereas slip rates on distal faults either decayed or remained approximately constant. The present study uses the landscape evolution model CHILD to examine the behavior of rivers draining across these active faults. Distal and central faults are considered in order to track the effects of the fault acceleration on the development of the fluvial network. River characteristics have been measured in the field (e.g. channel width, slope, sediment grain size) and extracted from a 20 m DEM (e.g. channel profile, drainage area). We use CHILD to test the ability of alternative incision laws to reproduce the observed topography under known tectonic forcing. For each of the fluvial incision models, a Monte Carlo simulation has been performed, allowing the exploration of a wide range of values for the parameters governing tectonics, climate, sediment characteristics, and channel geometry. Observed profiles are consistent with a dominantly wave-like, as opposed to diffusive, transient response to accelerated fault motion. The ability of the different models to reproduce the catchment characteristics more or less accurately, in particular the specific profiles exhibited by the rivers, is discussed in light of our first results.
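The equilibrium slope-area power law invoked above follows from the widely used detachment-limited stream-power incision law; at steady state, incision E balances rock uplift U:

```latex
E = K A^{m} S^{n}, \qquad U = E
\;\Longrightarrow\;
S = \left(\frac{U}{K}\right)^{1/n} A^{-m/n},
```

so all such models collapse onto a similar log-linear slope-area trend at equilibrium, which is why only the transient response to a perturbation such as fault acceleration can discriminate between them.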
Interseismic Strain Accumulation Across Metropolitan Los Angeles: Puente Hills Thrust
NASA Astrophysics Data System (ADS)
Argus, D.; Liu, Z.; Heflin, M. B.; Moore, A. W.; Owen, S. E.; Lundgren, P.; Drake, V. G.; Rodriguez, I. I.
2012-12-01
Twelve years of observation of the Southern California Integrated GPS Network (SCIGN) are tightly constraining the distribution of shortening across metropolitan Los Angeles, providing information on strain accumulation across blind thrust faults. Synthetic Aperture Radar Interferometry (InSAR) and water well records are allowing the effects of water and oil management to be distinguished. The Mojave segment of the San Andreas fault is at a 25° angle to Pacific-North America plate motion. GPS shows that NNE-SSW shortening due to this big restraining bend is fastest not immediately south of the San Andreas fault across the San Gabriel mountains, but rather 50 km south of the fault in northern metropolitan Los Angeles. The GPS results we quote next are for a NNE profile through downtown Los Angeles. Just 2 mm/yr of shortening is taken up across the San Gabriel mountains, 40 km wide (0.05 microstrain/yr); 4 mm/yr of shortening is taken up between the Sierra Madre fault, at the southern front of the San Gabriel mountains, and South Central Los Angeles, also 40 km wide (0.10 microstrain/yr). We find shortening to be more evenly distributed across metropolitan Los Angeles than we found before [Argus et al. 2005], though within the 95% confidence limits. An elastic model of interseismic strain accumulation is fit to the GPS observations using the Back Slip model of Savage [1983]. Rheology differences between crystalline basement and sedimentary basin rocks are incorporated using the EDGRN/EDCMP algorithm of Wang et al. [2003]. We attempt to place the Back Slip model into the context of the Elastic Subducting Plate Model of Kanda and Simons [2010]. We find, along the NNE profile through downtown, that: (1) the deep Sierra Madre Thrust cannot be slipping faster than 2 mm/yr, and (2) the Puente Hills Thrust and nearby thrust faults (such as the upper Elysian Park Thrust) are slipping at 9 ±2 mm/yr beneath a locking depth of 12 ±5 km (95% confidence limits). Incorporating sedimentary basin rock either reduces the slip rate by 10 per cent or increases the locking depth by 20 per cent. The 9 mm/yr rate for the Puente Hills Thrust and nearby faults exceeds the cumulative 3-5 mm/yr rate estimated using paleoseismology along the Puente Hills Thrust (1.2-1.6 mm/yr, Dolan et al. 2003), upper Elysian Park Thrust (0.6-2.2 mm/yr, Oskin et al. 2000), and western Compton Thrust (1.2 mm/yr, Leon et al. 2009), though all the paleoseismic estimates are minimums. We infer that M 7 earthquakes in northern metropolitan Los Angeles may occur more frequently than previously thought.
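For intuition about the back-slip style of elastic interseismic model fitted above, the classic 1-D screw-dislocation profile of Savage & Burford (1973) is sketched below. It is the strike-slip analogue, not the thrust-fault formulation of Savage [1983] used in the study; the slip rate and locking depth merely reuse the numbers quoted in the abstract as illustrative parameters.

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate=9.0, locking_depth=12.0):
    """Surface velocity across a fault locked above `locking_depth` (km)
    and slipping at `slip_rate` (mm/yr) below it:
        v(x) = (s / pi) * arctan(x / D).
    Far-field velocity tends to +/- s/2; the velocity gradient (strain
    accumulation) concentrates over a width comparable to D."""
    return (slip_rate / np.pi) * np.arctan(np.asarray(x_km) / locking_depth)

# velocity profile along a transect (km from the fault trace)
x = np.array([-50.0, -12.0, 0.0, 12.0, 50.0])
print(np.round(interseismic_velocity(x), 2))   # mm/yr
```

Fitting such a profile to GPS velocities trades off slip rate against locking depth, which is why the abstract reports joint confidence limits on the 9 ±2 mm/yr rate and 12 ±5 km depth.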