Sample records for earth fault management

  1. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    NASA Astrophysics Data System (ADS)

    Nassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses pole EPR under a substation fault, assessed with and without consideration of mutual impedance. Determination of the split factor with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under a substation fault is included, along with the introduction of the n factor, which determines the number of poles that require earthing assessment under a substation fault. A case study is presented.
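
    The simplified pole-grid-current formula itself is not reproduced in the abstract; the sketch below only illustrates the kind of current-divider calculation involved, assuming a simple two-path split. All impedance values and the equal per-pole share are invented for illustration, not taken from the paper.

    ```python
    # Illustrative current-divider sketch for earth fault current distribution.
    # The impedances and the equal per-pole share are hypothetical.

    def split_factor(z_grid: complex, z_return: complex) -> float:
        """Fraction of the fault current entering the substation earth grid,
        the remainder returning via the overhead earth wire / pole path."""
        return abs(z_return / (z_grid + z_return))

    def pole_grid_current(i_fault_a: float, sf: float, n_poles: int) -> float:
        """Crude equal share, over the n assessed poles, of the current that
        returns through the pole earthing path."""
        return i_fault_a * (1.0 - sf) / n_poles

    sf = split_factor(z_grid=0.5 + 0.8j, z_return=3.0 + 4.0j)
    print(f"split factor: {sf:.2f}")
    print(f"per-pole current: {pole_grid_current(20_000.0, sf, n_poles=8):.0f} A")
    ```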

  2. Flight elements: Fault detection and fault management

    NASA Technical Reports Server (NTRS)

    Lum, H.; Patterson-Hine, A.; Edge, J. T.; Lawler, D.

    1990-01-01

    Fault management for an intelligent computational system must be developed using a top-down, integrated engineering approach. The proposed approach integrates the overall environment, involving sensors and their associated data; design knowledge capture; operations; fault detection, identification, and reconfiguration; testability; causal models, including digraph matrix analysis; and overall performance impacts on the hardware and software architecture. A real-time intelligent fault detection and management system will be achieved through several objectives: development of fault-tolerant/FDIR requirements and specifications at the systems level, carried from conceptual design through implementation and mission operations; implementation of monitoring, diagnosis, and reconfiguration at all system levels, providing fault isolation and system integration; optimization of system operations to manage degraded system performance through system integration; and lower development and operations costs through an intelligent real-time fault detection and fault management system and an information management system.

  3. Fault Management Metrics

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. Because fault management operates as a suite of control loops, the metrics to estimate and measure its effectiveness are, like those of classical control loops, divided into two major classes: state estimation and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine how effectively these control loops preserve the system goals they are intended to protect.
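
    A minimal sketch of the probabilistic roll-up the abstract describes, assuming a hypothetical three-factor decomposition (detect, isolate, respond) per failure mode; the field names and numbers are illustrative, not from the paper.

    ```python
    # Hedged sketch: roll up per-failure FM control-loop metrics into an
    # overall effectiveness number by probability-weighting each failure mode.

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        prob: float        # relative likelihood of this failure occurring
        p_detect: float    # state estimation: detection effectiveness
        p_isolate: float   # state estimation: isolation/identification
        p_respond: float   # state control: response effectiveness

    def fm_effectiveness(modes: list[FailureMode]) -> float:
        """Probability that FM preserves the protected goal, given a failure."""
        total = sum(m.prob for m in modes)
        saved = sum(m.prob * m.p_detect * m.p_isolate * m.p_respond for m in modes)
        return saved / total

    modes = [FailureMode(0.02, 0.99, 0.95, 0.90),
             FailureMode(0.01, 0.90, 0.80, 0.85)]
    print(f"overall FM effectiveness: {fm_effectiveness(modes):.3f}")
    ```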

  4. Fault management for data systems

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann

    1993-01-01

    Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.

  5. Managing Fault Management Development

    NASA Technical Reports Server (NTRS)

    McDougal, John M.

    2010-01-01

    As the complexity of space missions grows, development of Fault Management (FM) capabilities is an increasingly common driver for significant cost overruns late in the development cycle. FM issues and the resulting cost overruns are rarely caused by a lack of technology, but rather by a lack of planning and emphasis by project management. A recent NASA FM Workshop brought together FM practitioners from a broad spectrum of institutions, mission types, and functional roles to identify the drivers underlying FM overruns and recommend solutions. They identified a number of areas in which increased program and project management focus can be used to control FM development cost growth. These include up-front planning for FM as a distinct engineering discipline; managing different, conflicting, and changing institutional goals and risk postures; ensuring the necessary resources for a disciplined, coordinated approach to end-to-end fault management engineering; and monitoring FM coordination across all mission systems.

  6. Health management and controls for Earth-to-orbit propulsion systems

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.

    1995-03-01

    Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.

  7. Managing Space System Faults: Coalescing NASA's Views

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian; Fesq, Lorraine

    2012-01-01

    Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is neither widely recognized nor evenly practiced within the NASA community. Attempts to simply name this discipline in recent years have been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.

  8. A System for Fault Management and Fault Consequences Analysis for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; Spirkovska, Liljana; Baskaran, Vijaykumar; Aaseng, Gordon; McCann, Robert S.; Ossenfort, John; Smith, Irene; Iverson, David L.; Schwabacher, Mark

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.
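
    A toy sketch of how the four ACAWS elements named above might chain together in one monitoring cycle. The class and method names are assumptions for illustration, not the actual ACAWS interfaces.

    ```python
    # Hedged sketch of the ACAWS element chain described in the abstract:
    # anomaly detection -> fault isolation -> system effects analysis -> GUI.

    class AnomalyDetection:
        def scan(self, telemetry: dict) -> list[str]:
            # toy limit check standing in for real anomaly detection
            return [k for k, v in telemetry.items() if abs(v) > 1.0]

    class FaultIsolation:
        def isolate(self, anomalies: list[str]) -> str | None:
            # stand-in for automated diagnosis / root cause identification
            return anomalies[0] if anomalies else None

    class SystemEffectsAnalysis:
        def impacts(self, fault: str) -> list[str]:
            return [f"loss of function downstream of {fault}"]

    def acaws_cycle(telemetry: dict) -> None:
        anomalies = AnomalyDetection().scan(telemetry)
        fault = FaultIsolation().isolate(anomalies)
        if fault:
            for impact in SystemEffectsAnalysis().impacts(fault):
                print(f"[GUI] root cause {fault}: {impact}")  # print stands in for the GUI

    acaws_cycle({"pump_dP": 0.2, "cabin_ppO2": -1.4})
    ```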

  9. NASA Spacecraft Fault Management Workshop Results

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn; McDougal, John; Barley, Bryan; Fesq, Lorraine; Stephens, Karen

    2010-01-01

    Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems.

  10. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    NASA Technical Reports Server (NTRS)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  11. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.

  12. Fault management and systems knowledge

    DOT National Transportation Integrated Search

    2016-12-01

    Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

  13. Automated fault-management in a simulated spaceflight micro-world

    NASA Technical Reports Server (NTRS)

    Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja

    2002-01-01

    BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examined the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related, atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) by a computerized fault-finding guide, at a medium LOA by an automated diagnosis and recovery advisory, and at a high LOA by automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification time, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources by automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.

  14. A System for Fault Management for NASA's Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano P.; Spirkovska, Liljana; Aaseng, Gordon B.; Mccann, Robert S.; Baskaran, Vijayakumar; Ossenfort, John P.; Smith, Irene Skupniewicz; Iverson, David L.; Schwabacher, Mark A.

    2013-01-01

    NASA's exploration program envisions the utilization of a Deep Space Habitat (DSH) for human exploration of the space environment in the vicinity of Mars and/or asteroids. Communication latencies with ground control of as long as 20+ minutes make it imperative that DSH operations be highly autonomous, as any telemetry-based detection of a systems problem on Earth could well occur too late to assist the crew with the problem. A DSH-based development program has been initiated to develop and test the automation technologies necessary to support highly autonomous DSH operations. One such technology is a fault management tool to support performance monitoring of vehicle systems operations and to assist with real-time decision making in connection with operational anomalies and failures. Toward that end, we are developing Advanced Caution and Warning System (ACAWS), a tool that combines dynamic and interactive graphical representations of spacecraft systems, systems modeling, automated diagnostic analysis and root cause identification, system and mission impact assessment, and mitigation procedure identification to help spacecraft operators (both flight controllers and crew) understand and respond to anomalies more effectively. In this paper, we describe four major architecture elements of ACAWS: Anomaly Detection, Fault Isolation, System Effects Analysis, and Graphic User Interface (GUI), and how these elements work in concert with each other and with other tools to provide fault management support to both the controllers and crew. We then describe recent evaluations and tests of ACAWS on the DSH testbed. The results of these tests support the feasibility and strength of our approach to failure management automation and enhanced operational autonomy.

  15. Developing a Fault Management Guidebook for Nasa's Deep Space Robotic Missions

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Jacome, Raquel Weitl

    2015-01-01

    NASA designs and builds systems that achieve incredibly ambitious goals, as evidenced by the Curiosity rover traversing Mars, the highly complex International Space Station orbiting our Earth, and the compelling plans for capturing, retrieving and redirecting an asteroid into a lunar orbit to create a nearby target to be investigated by astronauts. In order to accomplish these feats, the missions must be imbued with sufficient knowledge and capability not only to realize the goals, but also to identify and respond to off-nominal conditions. Fault Management (FM) is the discipline of establishing how a system will respond to preserve its ability to function even in the presence of faults. In 2012, NASA released a draft FM Handbook in an attempt to coalesce the field by establishing a unified terminology and a common process for designing FM mechanisms. However, FM approaches are very diverse across NASA, especially between the different mission types such as Earth orbiters, launch vehicles, deep space robotic vehicles and human spaceflight missions, and the authors were challenged to capture and represent all of these views. The authors recognized that a necessary precursor step is for each sub-community to codify its FM policies, practices and approaches in individual, focused guidebooks. Then, the sub-communities can look across NASA to better understand the different ways off-nominal conditions are addressed, and to seek commonality or at least an understanding of the multitude of FM approaches. This paper describes the development of the "Deep Space Robotic Fault Management Guidebook," which is intended to be the first of NASA's FM guidebooks. Its purpose is to be a field guide for FM practitioners working on deep space robotic missions, as well as a planning tool for project managers. Publication of this Deep Space Robotic FM Guidebook is expected in early 2015. The guidebook will be posted on NASA's Engineering Network on the FM Community of Practice.

  16. The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.

    NASA Astrophysics Data System (ADS)

    Thompson, T. B.; Meade, B. J.

    2017-12-01

    While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the MW=7.3 Landers and MW=8.0 Wenchuan earthquakes. Near the steepest topography, we find the use of flat Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.
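
    A small numerical illustration of the closing point about error amplification in inverse problems, using an ill-conditioned least-squares slip inversion with an invented Green's function matrix; the exact amplification factor depends on the conditioning and the random seed, and this is not the Tectosaur computation.

    ```python
    # Hedged sketch: a modest forward-model error (standing in for "flat Earth
    # vs. topography" Green's functions) is amplified in the inverted slip.

    import numpy as np

    rng = np.random.default_rng(2)
    # Nearly collinear columns mimic the ill-conditioning of real slip
    # inversions (nearby fault patches produce similar surface signals).
    base = rng.normal(size=(40, 1))
    G_true = base @ np.ones((1, 10)) + 0.1 * rng.normal(size=(40, 10))
    slip_true = rng.normal(size=10)
    data = G_true @ slip_true                     # noise-free observations

    G_wrong = G_true * (1 + 0.05 * rng.normal(size=G_true.shape))  # 5% model error
    slip_est = np.linalg.lstsq(G_wrong, data, rcond=None)[0]

    rel_err = np.linalg.norm(slip_est - slip_true) / np.linalg.norm(slip_true)
    print(f"5% forward-model error -> {rel_err:.0%} slip error "
          f"(amplification ~{rel_err / 0.05:.0f}x)")
    ```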

  17. Fault Management Design Strategies

    NASA Technical Reports Server (NTRS)

    Day, John C.; Johnson, Stephen B.

    2014-01-01

    Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis to develop a framework for designing and implementing FM design strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determination and implementation of SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

  18. Orion GN&C Fault Management System Verification: Scope And Methodology

    NASA Technical Reports Server (NTRS)

    Brown, Denise; Weiler, David; Flanary, Ronald

    2016-01-01

    In order to ensure long-term ability to meet mission goals and to provide for the safety of the public, ground personnel, and any crew members, nearly all spacecraft include a fault management (FM) system. For a manned vehicle such as Orion, the safety of the crew is of paramount importance. The goal of the Orion Guidance, Navigation and Control (GN&C) fault management system is to detect, isolate, and respond to faults before they can result in harm to the human crew or loss of the spacecraft. Verification of fault management/fault protection capability is challenging due to the large number of possible faults in a complex spacecraft, the inherent unpredictability of faults, the complexity of interactions among the various spacecraft components, and the inability to easily quantify human reactions to failure scenarios. The Orion GN&C Fault Detection, Isolation, and Recovery (FDIR) team has developed a methodology for bounding the scope of FM system verification while ensuring sufficient coverage of the failure space and providing high confidence that the fault management system meets all safety requirements. The methodology utilizes a swarm search algorithm to identify failure cases that can result in catastrophic loss of the crew or the vehicle and rare event sequential Monte Carlo to verify safety and FDIR performance requirements.
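
    A toy sketch in the spirit of the search strategy described, using random sampling with refinement around the worst-outcome cases. The two-parameter fault space and hazard function are invented, and this is not the Orion FDIR team's algorithm.

    ```python
    # Hedged sketch: search a fault space for catastrophic cases by keeping
    # and perturbing the worst-outcome samples each iteration.

    import random

    def hazard(fault_time: float, fault_magnitude: float) -> float:
        """Toy stand-in for a vehicle simulation: returns a hazard score,
        where > 1.0 means a catastrophic outcome."""
        return fault_magnitude * max(0.0, 1.0 - abs(fault_time - 30.0) / 30.0)

    def search_failure_space(n_particles: int = 200, n_iters: int = 50) -> list:
        random.seed(1)
        swarm = [(random.uniform(0, 60), random.uniform(0, 2))
                 for _ in range(n_particles)]
        catastrophic = []
        for _ in range(n_iters):
            swarm.sort(key=lambda p: hazard(*p), reverse=True)
            best = swarm[: n_particles // 10]       # keep the worst-outcome cases
            catastrophic += [p for p in best if hazard(*p) > 1.0]
            # perturb around the worst cases to refine the failure region
            swarm = [(t + random.gauss(0, 2), m + random.gauss(0, 0.1))
                     for t, m in best for _ in range(10)]
        return catastrophic

    print(f"found {len(search_failure_space())} catastrophic fault cases")
    ```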

  19. Absence of earthquake correlation with Earth tides: An indication of high preseismic fault stress rate

    USGS Publications Warehouse

    Vidale, J.E.; Agnew, D.C.; Johnston, M.J.S.; Oppenheimer, D.H.

    1998-01-01

    Because the rate of stress change from the Earth tides exceeds that from tectonic stress accumulation, tidal triggering of earthquakes would be expected if the final hours of loading of the fault were at the tectonic rate and if rupture began soon after the achievement of a critical stress level. We analyze the tidal stresses and stress rates on the fault planes and at the times of 13,042 earthquakes which are so close to the San Andreas and Calaveras faults in California that we may take the fault plane to be known. We find that the stresses and stress rates from Earth tides at the times of earthquakes are distributed in the same way as tidal stresses and stress rates at random times. While the rate of earthquakes when the tidal stress promotes failure is 2% higher than when the stress does not, this difference in rate is not statistically significant. This lack of tidal triggering implies that preseismic stress rates in the nucleation zones of earthquakes are at least 0.15 bar/h just preceding seismic failure, much above the long-term tectonic stress rate of 10⁻⁴ bar/h.
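
    A quick back-of-the-envelope check of the statistical claim, under the simplifying assumption that tidal stress promotes failure about half the time, so a 50/50 null hypothesis applies; the test form is my choice, not necessarily the authors'.

    ```python
    # Worked check: is a 2% rate excess significant over 13,042 events?

    from math import sqrt

    n = 13_042                       # earthquakes analyzed
    excess = 1.02                    # rate on "promoting" tides / rate otherwise
    f = excess / (excess + 1.0)      # expected fraction of events on promoting tides
    z = (f - 0.5) / (0.5 / sqrt(n))  # one-sample z-score against the 50/50 null

    print(f"observed fraction {f:.4f}, z = {z:.2f}")   # z is about 1.1 < 1.96
    print("not significant at the 95% level" if abs(z) < 1.96 else "significant")
    ```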

  20. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.
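
    A minimal sketch of the kind of general-purpose safe-mode response described above: on an unhandled anomaly, drop to a low-power, safe, predictable configuration and wait for ground diagnosis. The state fields and load names are invented, not JPL's actual flight software interfaces.

    ```python
    # Hedged sketch of a generic "Safe Mode" entry, with invented state fields.

    LOW_POWER_LOADS = {"transmitter_low_rate", "heaters_survival", "cdh_core"}

    def enter_safe_mode(spacecraft: dict) -> dict:
        spacecraft["attitude_mode"] = "sun_pointed"    # power-positive, thermally safe
        spacecraft["loads_on"] = set(LOW_POWER_LOADS)  # shed non-essential loads
        spacecraft["command_source"] = "ground_only"   # stop onboard sequences
        spacecraft["telemetry_rate"] = "low"           # predictable comm for diagnosis
        return spacecraft

    state = {"attitude_mode": "science_scan",
             "loads_on": {"instrument", "cdh_core"},
             "command_source": "sequence", "telemetry_rate": "high"}
    print(enter_safe_mode(state))
    ```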

  21. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  22. A New Kinematic Model for Polymodal Faulting: Implications for Fault Connectivity

    NASA Astrophysics Data System (ADS)

    Healy, D.; Rizzo, R. E.

    2015-12-01

    Conjugate, or bimodal, fault patterns dominate the geological literature on shear failure. Based on Anderson's (1905) application of the Mohr-Coulomb failure criterion, these patterns have been interpreted from all tectonic regimes, including normal, strike-slip and thrust (reverse) faulting. However, a fundamental limitation of the Mohr-Coulomb failure criterion - and others that assume faults form parallel to the intermediate principal stress - is that only plane strain can result from slip on the conjugate faults. Yet deformation in the Earth is widely accepted as being three-dimensional, with truly triaxial stresses and strains. Polymodal faulting, with three or more sets of faults forming and slipping simultaneously, can generate three-dimensional strains from truly triaxial stresses. Laboratory experiments and outcrop studies have verified the occurrence of polymodal fault patterns in nature. The connectivity of polymodal fault networks differs significantly from that of conjugate fault networks, and this presents challenges to our understanding of faulting and an opportunity to improve our understanding of seismic hazards and fluid flow. Polymodal fault patterns will, in general, have more connected nodes in 2D (and more branch lines in 3D) than comparable conjugate (bimodal) patterns. The anisotropy of permeability is therefore expected to be very different in rocks with polymodal fault patterns in comparison to conjugate fault patterns, and this has implications for the development of hydrocarbon reservoirs, the genesis of ore deposits and the management of aquifers. In this contribution, I assess the published evidence and models for polymodal faulting before presenting a novel kinematic model for general triaxial strain in the brittle field.

  23. Fault Management Guiding Principles

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type: deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  24. Analytical Approaches to Guide SLS Fault Management (FM) Development

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.

    2012-01-01

    Extensive analysis is needed to determine the right set of FM capabilities to provide the most coverage without significantly increasing the cost and complexity of the overall vehicle systems or degrading reliability (through false positives and false negatives). Strong collaboration with the stakeholders is required to support the determination of the best triggers and response options. The SLS Fault Management process has been documented in the Space Launch System Program (SLSP) Fault Management Plan (SLS-PLAN-085).

  25. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior and to depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
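
    A toy sketch of the FMEA-generation idea: walk a component graph and emit one row per failure mode with its propagated effects. The dictionary schema below stands in for the SysML model queries used in the actual work; component names and modes are invented.

    ```python
    # Hedged sketch: generate FMEA rows by traversing fault-propagation edges.

    import csv, sys

    # component -> (failure modes, downstream components that see the effect)
    MODEL = {
        "battery":   (["cell short", "open circuit"], ["power_bus"]),
        "power_bus": (["undervoltage"],               ["radio", "computer"]),
        "radio":     (["no downlink"],                []),
        "computer":  (["reset loop"],                 []),
    }

    def effects_of(component: str, seen=None) -> list[str]:
        """Walk the propagation edges to list everything downstream."""
        seen = seen or set()
        out = []
        for child in MODEL[component][1]:
            if child not in seen:
                seen.add(child)
                out += [child] + effects_of(child, seen)
        return out

    writer = csv.writer(sys.stdout)
    writer.writerow(["component", "failure mode", "propagated effects"])
    for comp, (modes, _) in MODEL.items():
        for mode in modes:
            writer.writerow([comp, mode, "; ".join(effects_of(comp)) or "local only"])
    ```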

  26. Implementation of Integrated System Fault Management Capability

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark

    2008-01-01

    The goal is fault management that supports rocket engine test missions with highly reliable and accurate measurements while improving availability and lifecycle costs. Core elements include: an architecture, taxonomy, and ontology (ATO) for the management of data, information, and knowledge (DIaK); and intelligent processes at the sensor, element, controller, subsystem, system, and component levels.

  27. Response of faults to climate-driven changes in ice and water volumes on Earth's surface.

    PubMed

    Hampel, Andrea; Hetzel, Ralf; Maniatis, Georgios

    2010-05-28

    Numerical models including one or more faults in a rheologically stratified lithosphere show that climate-induced variations in ice and water volumes on Earth's surface considerably affect the slip evolution of both thrust and normal faults. In general, the slip rate and hence the seismicity of a fault decreases during loading and increases during unloading. Here, we present several case studies to show that a postglacial slip rate increase occurred on faults worldwide in regions where ice caps and lakes decayed at the end of the last glaciation. Of note is that the postglacial amplification of seismicity was not restricted to the areas beneath the large Laurentide and Fennoscandian ice sheets but also occurred in regions affected by smaller ice caps or lakes, e.g. the Basin-and-Range Province. Our results do not only have important consequences for the interpretation of palaeoseismological records from faults in these regions but also for the evaluation of the future seismicity in regions currently affected by deglaciation like Greenland and Antarctica: shrinkage of the modern ice sheets owing to global warming may ultimately lead to an increase in earthquake frequency in these regions.

  28. The San Andreas Fault and a Strike-slip Fault on Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    materials, but may be filled in mostly by sedimentary and erosional material deposited from above. Comparisons between faults on Europa and Earth may generate ideas useful in the study of terrestrial faulting.

    One theory is that fault motion on Europa is induced by the pull of variable daily tides generated by Jupiter's gravitational tug on Europa. The tidal tension opens the fault; subsequent tidal stress causes it to move lengthwise in one direction. Then the tidal forces close the fault up again. This prevents the area from moving back to its original position. If it moves forward with the next daily tidal cycle, the result is a steady accumulation of these lengthwise offset motions.

    Unlike Europa, here on Earth, large strike-slip faults such as the San Andreas are set in motion not by tidal pull, but by plate tectonic forces from the planet's mantle.

    North is to the top of the picture. The Earth picture (left) shows a LandSat Thematic Mapper image acquired in the infrared (1.55 to 1.75 micrometers) by LandSat5 on Friday, October 20th 1989 at 10:21 am. The original resolution was 28.5 meters per picture element.

    The Europa picture (right) is centered at 66 degrees south latitude and 195 degrees west longitude. The highest resolution frames, obtained at 40 meters per picture element with a spacecraft range of less than 4200 kilometers (2600 miles), are set in the context of lower resolution regional frames obtained at 200 meters per picture element and a range of 22,000 kilometers (13,600 miles). The images were taken on September 26, 1998 by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images

  29. Influence of slip-surface geometry on earth-flow deformation, Montaguto earth flow, southern Italy

    USGS Publications Warehouse

    Guerriero, L.; Coe, Jeffrey A.; Revellio, P.; Grelle, G.; Pinto, F.; Guadagno, F.

    2016-01-01

    We investigated relations between slip-surface geometry and deformational structures and hydrologic features at the Montaguto earth flow in southern Italy between 1954 and 2010. We used 25 boreholes, 15 static cone-penetration tests, and 22 shallow-seismic profiles to define the geometry of basal- and lateral-slip surfaces; and 9 multitemporal maps to quantify the spatial and temporal distribution of normal faults, thrust faults, back-tilted surfaces, strike-slip faults, flank ridges, folds, ponds, and springs. We infer that the slip surface is a repeating series of steeply sloping surfaces (risers) and gently sloping surfaces (treads). Stretching of earth-flow material created normal faults at risers, and shortening of earth-flow material created thrust faults, back-tilted surfaces, and ponds at treads. Individual pairs of risers and treads formed quasi-discrete kinematic zones within the earth flow that operated in unison to transmit pulses of sediment along the length of the flow. The locations of strike-slip faults, flank ridges, and folds were not controlled by basal-slip surface topography but were instead dependent on earth-flow volume and lateral changes in the direction of the earth-flow travel path. The earth-flow travel path was strongly influenced by inactive earth-flow deposits and pre-earth-flow drainages whose positions were determined by tectonic structures. The implications of our results that may be applicable to other earth flows are that structures with strikes normal to the direction of earth-flow motion (e.g., normal faults and thrust faults) can be used as a guide to the geometry of basal-slip surfaces, but that depths to the slip surface (i.e., the thickness of an earth flow) will vary as sediment pulses are transmitted through a flow.

  30. V&V of Fault Management: Challenges and Successes

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Costello, Ken; Ohi, Don; Lu, Tiffany; Newhouse, Marilyn

    2013-01-01

    This paper describes the results of a special breakout session of the NASA Independent Verification and Validation (IV&V) Workshop held in the fall of 2012 entitled "V&V of Fault Management: Challenges and Successes." The NASA IV&V Program is in a unique position to interact with projects across all of the NASA development domains. Using this unique opportunity, the IV&V program convened a breakout session to enable IV&V teams to share their challenges and successes with respect to the V&V of Fault Management (FM) architectures and software. The presentations and discussions provided practical examples of pitfalls encountered while performing V&V of FM, including the lack of consistent designs for implementing fault monitors and the fact that FM information is not centralized but scattered among many diverse project artifacts. The discussions also solidified the need for an early commitment to developing FM in parallel with the spacecraft systems as well as clearly defining FM terminology within a project.

  31. Geodetic Network Design and Optimization on the Active Tuzla Fault (Izmir, Turkey) for Disaster Management

    PubMed Central

    Halicioglu, Kerem; Ozener, Haluk

    2008-01-01

    Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which is aligned trending NE–SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters – standard strike-slip model of dislocation theory in an elastic half-space – is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as a background data for disaster management systems. PMID:27873783
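
    The two-parameter model mentioned is the standard screw-dislocation profile for interseismic strike-slip in an elastic half-space, v(x) = (V/π)·arctan(x/D), with slip rate V and locking depth D. A short sketch for evaluating the expected signal at candidate GPS site distances; the parameter values are hypothetical, not the study's estimates.

    ```python
    # Hedged sketch: fault-parallel surface velocity from a buried screw
    # dislocation, useful for judging which sites see a measurable signal.

    import math

    def fault_parallel_velocity(x_km: float, slip_rate_mm_yr: float,
                                locking_depth_km: float) -> float:
        """Interseismic velocity at distance x from the fault trace."""
        return (slip_rate_mm_yr / math.pi) * math.atan(x_km / locking_depth_km)

    V, D = 5.0, 10.0  # mm/yr and km; purely illustrative values
    for x in (1, 5, 10, 25, 50):
        print(f"x = {x:3d} km: v = {fault_parallel_velocity(x, V, D):+.2f} mm/yr")
    ```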

  32. Geodetic Network Design and Optimization on the Active Tuzla Fault (Izmir, Turkey) for Disaster Management.

    PubMed

    Halicioglu, Kerem; Ozener, Haluk

    2008-08-19

    Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which is aligned trending NE-SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters - standard strike-slip model of dislocation theory in an elastic half-space - is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as a background data for disaster management systems.

  33. Development of a self-learning machine for creating models of microprocessor-based single-phase earth fault protection devices in networks with isolated neutral voltage above 1000 V

    NASA Astrophysics Data System (ADS)

    Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.

    2018-02-01

    The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase earth fault protection devices in networks with an isolated neutral voltage above 1000 V. Such a machine makes it possible to effectively implement mathematical models for automatically changing the settings of single-phase earth fault protection devices.

  34. Results from the NASA Spacecraft Fault Management Workshop: Cost Drivers for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Newhouse, Marilyn E.; McDougal, John; Barley, Bryan; Stephens Karen; Fesq, Lorraine M.

    2010-01-01

    Fault Management, the detection of and response to in-flight anomalies, is a critical aspect of deep-space missions. Fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for five missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that four out of the five missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, and academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and tools that have not kept pace with the increasing complexity of mission requirements and spacecraft systems. This paper summarizes the findings and recommendations of the workshop.

  35. Fault Management Techniques in Human Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    O'Hagan, Brian; Crocker, Alan

    2006-01-01

    This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in the current US human spaceflight programs, Space Shuttle and International Space Station, are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes, along with products used to anticipate, mitigate, and respond to failures, are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation, are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, the skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be understood.

  36. Concurrent development of fault management hardware and software in the SSM/PMAD. [Space Station Module/Power Management And Distribution]

    NASA Technical Reports Server (NTRS)

    Freeman, Kenneth A.; Walsh, Rick; Weeks, David J.

    1988-01-01

    Space Station issues in fault management are discussed. The system background is described with attention given to design guidelines and power hardware. A contractually developed fault management system, FRAMES, is integrated with the energy management functions, the control switchgear, and the scheduling and operations management functions. The constraints that shaped the FRAMES system and its implementation are considered.

  37. Fault recovery characteristics of the fault tolerant multi-processor

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1990-01-01

    The fault handling performance of the fault tolerant multiprocessor (FTMP) was investigated. Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. It is pointed out that these weak areas in the FTMP's design increase the probability that, for any hardware fault, a good LRU (line replaceable unit) is mistakenly disabled by the fault management software. It is concluded that fault injection can help detect and analyze the behavior of a system in the ultra-reliable regime. Although fault injection testing cannot be exhaustive, it has been demonstrated that it provides a unique capability to unmask problems and to characterize the behavior of a fault-tolerant system.

  38. Fault Management Practice: A Roadmap for Improvement

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  19. Abnormal fault-recovery characteristics of the fault-tolerant multiprocessor uncovered using a new fault-injection methodology

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1991-01-01

    An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replaceable unit (LRU) is mistakenly disabled by the fault management software.

  20. Optimal Management of Redundant Control Authority for Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and the operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies with the use of a reliability criterion for redundancy management, and therefore leads to a high overall system reliability.
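
    As a rough illustration of the reliability-criterion idea (the paper's actual formalism, normalized nonspecificity, is developed in its reference [5]), the sketch below selects, among hypothetical redundant actuator configurations that meet a control-authority requirement, the one with the highest estimated reliability. Every name and number here is invented:

    ```python
    # Hedged sketch: reliability-driven redundancy management. Among candidate
    # actuator configurations that can deliver the required control authority,
    # engage the one with the highest estimated reliability.

    def config_reliability(unit_reliabilities):
        """Series reliability of the units engaged in a configuration."""
        r = 1.0
        for ri in unit_reliabilities:
            r *= ri
        return r

    # (name, [reliability of each engaged unit], deliverable control authority)
    candidates = [
        ("A+B", [0.999, 0.95], 1.0),
        ("A+C", [0.999, 0.99], 0.8),
        ("B+C", [0.95, 0.99], 0.9),
    ]
    required_authority = 0.85
    feasible = [c for c in candidates if c[2] >= required_authority]
    best = max(feasible, key=lambda c: config_reliability(c[1]))
    print("selected configuration:", best[0])  # -> A+B
    ```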

  1. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize resources for embedding computational graphs of tasks in the system architecture and for reconfiguring these tasks after a failure has occurred. Computational structures represented by a path and by a complete binary tree were considered, and the mesh and hypercube architectures were targeted for their embeddings. The innovative Hybrid Algorithm Technique was introduced; this new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  2. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft supplies the measurements applied to the regression algorithms. The sensing techniques are applied to the F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
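
    A minimal sketch of the chi-squared residual test underlying this kind of failure detection, with assumed noise levels, threshold, and data (this is not the HI-FADS implementation):

    ```python
    # Compare measured port pressures with model predictions; if the normalized
    # residual sum exceeds a chi-squared threshold, flag a failure and point to
    # the port with the largest residual. All values are illustrative.
    import numpy as np
    from scipy.stats import chi2

    def chi2_port_check(measured, predicted, sigma, alpha=0.01):
        residuals = (measured - predicted) / sigma    # normalized residuals
        stat = float(np.sum(residuals**2))            # ~chi2(n) when healthy
        threshold = chi2.ppf(1.0 - alpha, df=len(measured))
        suspect = int(np.argmax(np.abs(residuals))) if stat > threshold else None
        return stat, threshold, suspect

    measured = np.array([101.2, 99.8, 100.5, 112.0])  # kPa; port 3 is biased
    predicted = np.full(4, 100.0)                     # model prediction, kPa
    stat, thr, bad = chi2_port_check(measured, predicted, sigma=1.0)
    print(f"chi2={stat:.1f} threshold={thr:.1f} suspect_port={bad}")
    ```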

  3. Health management and controls for earth to orbit propulsion systems

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.

    1992-01-01

    Fault detection and isolation for advanced rocket engine controllers are discussed focusing on advanced sensing systems and software which significantly improve component failure detection for engine safety and health management. Aerojet's Space Transportation Main Engine controller for the National Launch System is the state of the art in fault tolerant engine avionics. Health management systems provide high levels of automated fault coverage and significantly improve vehicle delivered reliability and lower preflight operations costs. Key technologies, including the sensor data validation algorithms and flight capable spectrometers, have been demonstrated in ground applications and are found to be suitable for bridging programs into flight applications.

  4. Operations management system advanced automation: Fault detection isolation and recovery prototyping

    NASA Technical Reports Server (NTRS)

    Hanson, Matt

    1990-01-01

    The purpose of this project is to address the global fault detection, isolation and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies in addition to traditional software methods to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.

  5. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault, lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains, is California's Central Valley. Along the foothills in the right-hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994.

  6. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IVV) Program, with Software Assurance Research Program support, extracted FM architectures across the IVV portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IVV projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management.

  7. Current Fault Management Trends in NASA's Planetary Spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.

    2009-01-01

    The key product of this three-day workshop is a NASA White Paper that documents lessons learned from previous missions, recommended best practices, and future opportunities for investments in the fault management domain. This paper summarizes the findings and recommendations that are captured in the White Paper.

  8. Breaking down barriers in cooperative fault management: Temporal and functional information displays

    NASA Technical Reports Server (NTRS)

    Potter, Scott S.; Woods, David D.

    1994-01-01

    At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process referred to as the monitored process, or MP) whose state changes over time and whose behavior must be monitored and controlled. In these types of applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP, and diagnosis must be performed in parallel with responses to maintain process integrity and to correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.

  9. Operator Performance Evaluation of Fault Management Interfaces for Next-Generation Spacecraft

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; Beutter, Brent; McCann, Robert S.; Spirkovska, Lilly; Renema, Fritz

    2008-01-01

    In the cockpit of NASA's next generation of spacecraft, most vehicle commanding will be carried out via electronic interfaces instead of hard cockpit switches. Checklists will also be displayed and completed on electronic procedure viewers rather than on paper. Transitioning to electronic cockpit interfaces opens up opportunities for more automated assistance, including automated root-cause diagnosis capability. The paper reports an empirical study evaluating two potential concepts for fault management interfaces incorporating two different levels of automation. The operator performance benefits produced by automation were assessed. Also, some design recommendations for spacecraft fault management interfaces are discussed.

  10. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    USGS Publications Warehouse

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple, with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, were one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  11. Management Approach for Earth Venture Instrument

    NASA Technical Reports Server (NTRS)

    Hope, Diane L.; Dutta, Sanghamitra

    2013-01-01

    The Earth Venture Instrument (EVI) element of the Earth Venture Program calls for developing instruments for participation on a NASA-arranged spaceflight mission of opportunity to conduct innovative, integrated, hypothesis- or scientific question-driven approaches to pressing Earth system science issues. This paper discusses the EVI element and the approach being used to manage both the instrument development activity and the host accommodations activity. In particular, the focus is on the approach being used for the first selected EVI (EVI-1) instrument, Tropospheric Emissions: Monitoring of Pollution (TEMPO), which will be hosted on a commercial geostationary (GEO) satellite, and on some of the challenges encountered to date, and the corresponding mitigations, associated with the management structure for the TEMPO mission and the architecture of EVI.

  12. Perspective View, San Andreas Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault in an image created with data from NASA's Shuttle Radar Topography Mission (SRTM), which will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, California, about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. This area is at the junction of two large mountain ranges, the San Gabriel Mountains on the left and the Tehachapi Mountains on the right. Quail Lake Reservoir sits in the topographic depression created by past movement along the fault. Interstate 5 is the prominent linear feature starting at the left edge of the image and continuing into the fault zone, passing eventually over Tejon Pass into the Central Valley, visible at the upper left.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994.

  13. Fleet-Wide Prognostic and Health Management Suite: Asset Fault Signature Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: (1) Diagnostic Advisor, (2) Asset Fault Signature (AFS) Database, (3) Remaining Useful Life Advisor, and (4) Remaining Useful Life Database. The paper focuses on the AFS Database of the FW-PHM Suite, which is used to catalog asset fault signatures. A fault signature is a structured representation of the information that an expert would use to first detect and then verify the occurrence of a specific type of fault. The fault signatures developed to assess the health status of generator step-up transformers are described in the paper. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.

  14. Evaluating Fault Management Operations Concepts for Next-Generation Spacecraft: What Eye Movements Tell Us

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Ravinder, Ujwala; McCann, Robert S.; Beutter, Brent; Spirkovska, Lily

    2009-01-01

    Performance enhancements associated with selected forms of automation were quantified in a recent human-in-the-loop evaluation of two candidate operational concepts for fault management on next-generation spacecraft. The baseline concept, called Elsie, featured a full suite of "soft" fault management interfaces. However, operators were forced to diagnose malfunctions with minimal assistance from the standalone caution and warning system. The other concept, called Besi, incorporated a more capable C&W system with an automated fault diagnosis capability. Results from analyses of participants' eye movements indicate that the greatest empirical benefit of the automation stemmed from eliminating the need for text processing on cluttered, text-rich displays.

  15. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most missions is system complexity due to a need to establish a multi-dimensional structure across hardware, software and spacecraft operations. FM is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. Generally, FM architecture, implementation, and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed this issue in terms of V&V for a representative set of architectures. NASA's Software Assurance Research Program (SARP) has provided funds to NASA IV&V to extend the work performed at the Workshop session in partnership with NASA's Jet Propulsion Laboratory (JPL). NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This SARP initiative focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures and associated V&V/IV&V techniques provides a data set that can enable improved assurance that a system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the space community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.

  16. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Satellite system Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed these issues in terms of V&V for a representative set of architectures. NASA IV&V is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the satellite community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.

  17. Reply to comments by Ahmad et al. on: Shah, A. A., 2013. Earthquake geology of Kashmir Basin and its implications for future large earthquakes International Journal of Earth Sciences DOI:10.1007/s00531-013-0874-8 and on Shah, A. A., 2015. Kashmir Basin Fault and its tectonic significance in NW Himalaya, Jammu and Kashmir, India, International Journal of Earth Sciences DOI:10.1007/s00531-015-1183-1

    NASA Astrophysics Data System (ADS)

    Shah, A. A.

    2016-03-01

    Shah (Int J Earth Sci 102:1957-1966, 2013) mapped major previously unknown faults and fault segments in the Kashmir basin using geomorphological techniques. The major trace of an out-of-sequence thrust fault was named the Kashmir Basin Fault (KBF) because it runs through the middle of the Kashmir basin, and active movement on it has backtilted and uplifted most of the basin. Ahmad et al. (Int J Earth Sci, 2015) disputed the existence of the KBF and maintained that the faults identified by Shah (Int J Earth Sci 102:1957-1966, 2013) were already mapped as inferred faults by earlier workers. The earlier works, however, show a major normal fault or a minor out-of-sequence reverse fault, and none show a major thrust fault.

  18. Fault Management Technology Maturation for NASA's Constellation Program

    NASA Technical Reports Server (NTRS)

    Waterman, Robert D.

    2010-01-01

    This slide presentation reviews the maturation of fault management technology in preparation for the Constellation Program. There is a review of the Space Shuttle Main Engine (SSME) and a discussion of a couple of incidents with the shuttle main engine and tanking that indicated the necessity for predictive maintenance. Included is a review of the planned Ares I-X Ground Diagnostic Prototype (GDP) and further information about detection and isolation of faults using the Testability Engineering and Maintenance System (TEAMS). Another system being readied for use is the Inductive Monitoring System (IMS), which detects anomalies: IMS automatically learns how the system behaves and alerts operators if the current behavior is anomalous. The comparison of STS-83 and STS-107 (i.e., the Columbia accident) is shown as an example of the anomaly detection capabilities.
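
    The inductive-monitoring idea can be caricatured in a few lines: cluster nominal telemetry, then alert when a live vector falls outside every learned cluster envelope. The data, cluster count, and margin below are assumptions for illustration, not the actual IMS algorithm:

    ```python
    # Toy anomaly detector in the spirit of inductive monitoring: learn clusters
    # from nominal data, alert when a new vector is far from all of them.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    nominal = rng.normal(loc=[10.0, 50.0], scale=[0.5, 2.0], size=(500, 2))

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(nominal)
    # Envelope: largest distance of any nominal point to its centroid, plus margin.
    dists = np.linalg.norm(nominal - km.cluster_centers_[km.labels_], axis=1)
    envelope = dists.max() * 1.1  # 10% margin is an arbitrary tuning choice

    def is_anomalous(x):
        return np.linalg.norm(km.cluster_centers_ - x, axis=1).min() > envelope

    print(is_anomalous(np.array([10.2, 51.0])))  # nominal-looking -> False
    print(is_anomalous(np.array([14.0, 70.0])))  # off-nominal -> True
    ```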

  19. Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda; Whitman, Gerek

    2016-01-01

    Research into complexities of software systems Fault Management (FM) and how architectural design decisions affect safety, preservation of assets, and maintenance of desired system functionality has coalesced into a technical reference (TR) suite that advances the provision of safety and mission assurance. The NASA Independent Verification and Validation (IV&V) Program, with Software Assurance Research Program support, extracted FM architectures across the IV&V portfolio to evaluate robustness, assess visibility for validation and test, and define software assurance methods applied to the architectures and designs. This investigation spanned IV&V projects with seven different primary developers, a wide range of sizes and complexities, and encompassed Deep Space Robotic, Human Spaceflight, and Earth Orbiter mission FM architectures. The initiative continues with an expansion of the TR suite to include Launch Vehicles, adding the benefit of investigating differences intrinsic to model-based FM architectures and insight into complexities of FM within an Agile software development environment, in order to improve awareness of how nontraditional processes affect FM architectural design and system health management. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. Additionally, the role FM has with regard to strengthened security requirements, with potential to advance overall asset protection of flight software systems, is being addressed with the development of an adverse conditions database encompassing flight software vulnerabilities. Capitalizing on the established framework, this TR suite provides assurance capability for a variety of FM architectures and varied development approaches. Research results are being disseminated across NASA, other agencies, and the space community.

  20. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    A disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns: (1) often faults aren't addressed until the nominal spacecraft design is fairly stable; (2) FM design is relegated to after-the-fact patchwork, a Band-Aid approach. Progress is being made on a number of fronts outside of the Handbook effort: (1) processes, practices and tools being developed at some Centers and Institutions; (2) management recognition, e.g., Constellation FM roles and Discovery/New Frontiers mission reviews; (3) potential technology solutions, where new approaches could avoid many current pitfalls, including (3a) new FM architectures, such as a model-based approach integrated with NASA's MBSE (Model-Based System Engineering) efforts, and (3b) NASA's Office of the Chief Technologist: FM is identified in seven of NASA's 14 Space Technology Roadmaps, an opportunity to coalesce and establish a thrust area to progressively develop new FM techniques. The FM Handbook will help ensure that future missions do not encounter the same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start: (1) a Version 2, Agency-wide FM Handbook is still needed to expand the Handbook to other areas, especially crewed missions; (2) outreach to other organizations is still needed to develop a common understanding and vocabulary. The Handbook doesn't, and can't, address all Workshop recommendations; programmatic and infrastructure issues still need to be addressed.

  1. Spacecraft fault tolerance: The Magellan experience

    NASA Technical Reports Server (NTRS)

    Kasuda, Rick; Packard, Donna Sexton

    1993-01-01

    Interplanetary and Earth-orbiting missions are now imposing unique fault tolerance requirements upon spacecraft design. Mission success is the prime motivator for building spacecraft with fault tolerant systems. The Magellan spacecraft had many such requirements imposed upon its design. Magellan met these requirements by building redundancy into all the major subsystem components and designing the onboard hardware and software with the capability to detect a fault, isolate it to a component, and issue commands to achieve a back-up configuration. This discussion is limited to fault protection, which is the autonomous capability to respond to a fault. The Magellan fault protection design is discussed, as well as the developmental and flight experiences and a summary of the lessons learned.

  2. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  3. Fault management for the Space Station Freedom control center

    NASA Technical Reports Server (NTRS)

    Clark, Colin; Jowers, Steven; Mcnenny, Robert; Culbert, Chris; Kirby, Sarah; Lauritsen, Janet

    1992-01-01

    This paper describes model-based reasoning fault isolation in complex systems using automated digraph analysis. It discusses the use of the digraph representation as the paradigm for modeling physical systems and a method for executing these failure models to provide real-time failure analysis. It also discusses the generality, ease of development and maintenance, complexity management, and susceptibility to verification and validation of digraph failure models. It specifically describes how a NASA-developed digraph evaluation tool and an automated process working with that tool can identify failures in a monitored system when supplied with one or more fault indications. This approach is well suited to commercial applications of real-time failure analysis in complex systems because it is both powerful and cost effective.
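
    The digraph execution described above can be sketched as ancestor-set intersection: each fault indication constrains the candidate sources to the nodes upstream of it, and multiple indications combine by intersection. The model content below is invented for illustration:

    ```python
    # Digraph failure model: edge (u, v) means "a failure at u can produce the
    # failure or symptom v". Candidate root causes of a set of indications are
    # the common ancestors of all observed indications.
    from collections import defaultdict

    edges = [("pump", "low_flow"), ("valve", "low_flow"),
             ("pump", "high_temp"), ("low_flow", "low_flow_alarm"),
             ("sensor", "low_flow_alarm")]

    parents = defaultdict(set)
    for u, v in edges:
        parents[v].add(u)

    def ancestors(node):
        """All nodes that can propagate a failure to `node`, plus node itself."""
        seen, stack = set(), [node]
        while stack:
            for p in parents[stack.pop()]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen | {node}

    observed = ["low_flow_alarm", "high_temp"]
    print(set.intersection(*(ancestors(o) for o in observed)))  # {'pump'}
    ```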

  4. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis.

    PubMed

    Castro, Alfonso; Sedano, Andrés A; García, Fco Javier; Villoslada, Eduardo; Villagrá, Víctor A

    2017-12-28

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking fault diagnosis system and Business Processes for Telefónica's global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other different service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam.
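
    One of the three reasoning styles mentioned, Bayesian reasoning, can be illustrated with a naive toy model that scores candidate faults against observed symptoms. The fault names, priors, and likelihoods are invented for illustration, not Telefónica's models:

    ```python
    # Naive Bayesian fault ranking: posterior proportional to the prior times
    # the product of P(symptom | fault), assuming conditionally independent
    # symptoms. All numbers are illustrative.
    import math

    priors = {"encoder_fault": 0.02, "network_fault": 0.05, "cdn_fault": 0.01}
    likelihood = {
        "encoder_fault": {"frozen_video": 0.9, "audio_ok": 0.2},
        "network_fault": {"frozen_video": 0.7, "audio_ok": 0.1},
        "cdn_fault":     {"frozen_video": 0.6, "audio_ok": 0.8},
    }
    symptoms = ["frozen_video", "audio_ok"]

    unnorm = {f: priors[f] * math.prod(likelihood[f][s] for s in symptoms)
              for f in priors}
    total = sum(unnorm.values())
    posterior = {f: p / total for f, p in unnorm.items()}
    print(max(posterior, key=posterior.get))  # highest-posterior fault
    ```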

  5. Perspective View, Garlock Fault

    NASA Technical Reports Server (NTRS)

    2000-01-01

    California's Garlock Fault, marking the northwestern boundary of the Mojave Desert, lies at the foot of the mountains, running from the lower right to the top center of this image, which was created with data from NASA's Shuttle Radar Topography Mission (SRTM), flown in February 2000. The data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. These mountains are the southern end of the Sierra Nevada, and the prominent canyon emerging at the lower right is Lone Tree canyon. In the distance, the San Gabriel Mountains cut across from the left side of the image. At their base lies the San Andreas Fault, which meets the Garlock Fault near the left edge at Tejon Pass. The dark linear feature running from lower right to upper left is State Highway 14, leading from the town of Mojave in the distance to Inyokern and the Owens Valley in the north. The lighter parallel lines are dirt roads related to power lines and the Los Angeles Aqueduct, which run along the base of the mountains.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast.

  6. NASA/Caltech Team Images Nepal Quake Fault Rupture, Surface Movements

    NASA Image and Video Library

    2015-05-04

    Using a combination of GPS-measured ground motion data, satellite radar data, and seismic observations from instruments distributed around the world, scientists have constructed preliminary estimates of how much the fault responsible for the April 25, 2015, magnitude 7.8 Gorkha earthquake in Nepal moved below Earth's surface (Figure 1). This information is useful for understanding not only what happened in the earthquake but also the potential for future events. It can also be used to infer a map of how Earth's surface moved due to the earthquake over a broader region (Figure 2). The maps created from these data can be viewed at PIA19384. In the first figure, the modeled slip on the fault is shown as viewed from above and indicated by the colors and contours within the rectangle. The peak slip in the fault exceeds 19.7 feet (6 meters). The ground motion measured with GPS is shown by the red and purple arrows and was used to develop the fault slip model. In the second figure, color represents vertical movement and the scaled arrows indicate direction and magnitude of horizontal movement. In both figures, aftershocks are indicated by red dots. Background color and shaded relief reflect regional variations in topography. The barbed lines show where the main fault reaches Earth's surface. The main fault dives northward into the Earth below the Himalaya. http://photojournal.jpl.nasa.gov/catalog/PIA19384

  7. Moving Closer to EarthScope: A Major New Initiative for the Earth Sciences*

    NASA Astrophysics Data System (ADS)

    Simpson, D.; Blewitt, G.; Ekstrom, G.; Henyey, T.; Hickman, S.; Prescott, W.; Zoback, M.

    2002-12-01

    EarthScope is a scientific research and infrastructure initiative designed to provide a suite of new observational facilities to address fundamental questions about the evolution of continents and the processes responsible for earthquakes and volcanic eruptions. The integrated observing systems that will comprise EarthScope capitalize on recent developments in sensor technology and communications to provide Earth scientists with synoptic and high-resolution data derived from a variety of geophysical sensors. An array of 400 broadband seismometers will spend more than ten years crossing the contiguous 48 states and Alaska to image features that make up the internal structure of the continent and underlying mantle. Additional seismic and electromagnetic instrumentation will be available for high resolution imaging of geological targets of special interest. A network of continuously recording Global Positioning System (GPS) receivers and sensitive borehole strainmeters will be installed along the western U.S. plate boundary. These sensors will measure how western North America is deforming, what motions occur along faults, how earthquakes start, and how magma flows beneath active volcanoes. A four-kilometer deep observatory bored directly into the San Andreas fault will provide the first opportunity to observe directly the conditions under which earthquakes occur, to collect fault rocks and fluids for laboratory study, and to monitor continuously an active fault zone at depth. All data from the EarthScope facilities will be openly available in real-time to maximize participation from the scientific community and to provide on-going educational outreach to students and the public. EarthScope's sensors will revolutionize observational Earth science in terms of the quantity, quality and spatial extent of the data they provide. Turning these data into exciting scientific discovery will require new modes of experimentation and interdisciplinary cooperation from the Earth science community.

  8. MER surface fault protection system

    NASA Technical Reports Server (NTRS)

    Neilson, Tracy

    2005-01-01

    The Mars Exploration Rovers' surface fault protection design was influenced by the fact that the solar-powered rovers must recharge their batteries during the day to survive the night. The rovers needed to autonomously maintain thermal stability and initiate safe and reliable communication with orbiting assets or directly with Earth, all while maintaining energy balance. This paper describes the system fault protection design for the surface phase of the mission.

  9. Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    2014-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features based on technical examinations that can be used to detect a specific fault type. At the most basic level, fault signatures are comprised of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
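
    The signature structure described here (an asset type, a fault type, and one or more fault features) maps naturally onto a small record type with a matching score. The field names and example content below are illustrative, not the FW-PHM schema:

    ```python
    # Sketch of an asset fault signature record and a trivial feature-overlap
    # match against observed symptoms. Content is invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class FaultSignature:
        asset_type: str
        fault_type: str
        features: set = field(default_factory=set)

        def match(self, observed):
            """Fraction of this signature's features seen in the observations."""
            return len(self.features & observed) / len(self.features)

    afs_db = [
        FaultSignature("generator_step_up_transformer", "insulation_degradation",
                       {"dissolved_gas_high", "winding_temp_rise", "partial_discharge"}),
        FaultSignature("emergency_diesel_generator", "fuel_injector_fouling",
                       {"exhaust_temp_spread", "power_droop"}),
    ]
    observed = {"dissolved_gas_high", "partial_discharge"}
    best = max(afs_db, key=lambda s: s.match(observed))
    print(best.fault_type, round(best.match(observed), 2))  # -> 0.67
    ```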

  10. Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis

    PubMed Central

    Castro, Alfonso; Sedano, Andrés A.; García, Fco. Javier; Villoslada, Eduardo

    2017-01-01

    Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking fault diagnosis system and Business Processes for Telefónica’s global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate the fault diagnosis with other different service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam. PMID:29283398

  11. Fault-Related Sanctuaries

    NASA Astrophysics Data System (ADS)

    Piccardi, L.

    2001-12-01

    Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking, coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and the content of the related myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special `sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory, and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25000 BC. The cult itself was later reconverted into various different divinities, while the `sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy).

  12. Mechanism of Earth Fissures in Beijing,China

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Gong, H.; Gu, Z.; Wang, R.; Jia, S.; Li, X.

    2013-12-01

    Earth fissures are natural hazards that can occur through several different mechanisms. Beijing, located in the north of the North China Plain, China, has undergone extensive fissuring over the last twenty years. These fissures have caused serious damage to homes, farmland and infrastructure. Previous investigations show that the distribution and direction of the major earth fissures mostly parallel active faults, such as the Huangzhuang-Gaoliying Fault. Hence, tectonic movements were thought to be the major cause of the fissuring in this region, but subsidence caused by overdraft and other geological, hydrological and mechanical factors may also play important roles in forming earth fissures. The purpose of this work was to further explore the causes of the earth fissures and their mechanisms of formation using field investigations, geophysical surveys, geotechnical tests and numerical analysis. The results indicate that over-extraction of groundwater and differential subsidence are the major causes of the fissuring. Active faulting and fault zones provide an ideal condition for stress to accumulate. The earth fissures occur when the accumulated stress exceeds the strength of the soil, or when it is coupled with other processes by which the strength of the soil material is reduced. Survey and simulation results reveal the complex pattern of earth fissuring, including tensile deformation, vertical offset and rotation. The potential locations for future damage were also evaluated. Keywords: Earth Fissure; Mechanism; Beijing; Subsidence; Tectonic Movement; Geophysical Survey

  13. Fault tolerant data management system

    NASA Technical Reports Server (NTRS)

    Gustin, W. M.; Smither, M. A.

    1972-01-01

    Described in detail are: (1) results obtained in modifying the onboard data management system software to a multiprocessor fault tolerant system; (2) a functional description of the prototype buffer I/O units; (3) description of modification to the ACADC and stimuli generating unit of the DTS; and (4) summaries and conclusions on techniques implemented in the rack and prototype buffers. Also documented is the work done in investigating techniques of high speed (5 Mbps) digital data transmission in the data bus environment. The application considered is a multiport data bus operating with the following constraints: no preferred stations; random bus access by all stations; all stations equally likely to source or sink data; no limit to the number of stations along the bus; no branching of the bus; and no restriction on station placement along the bus.

  14. The Development of NASA's Fault Management Handbook

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine

    2011-01-01

    A disciplined approach to Fault Management (FM) has not always been emphasized by projects, contributing to major schedule and cost overruns. Progress is being made on a number of fronts outside of the Handbook effort: (1) processes, practices and tools being developed at some Centers and Institutions; (2) management recognition, e.g., Constellation FM roles and Discovery/New Frontiers mission reviews; (3) potential technology solutions, where new approaches could avoid many current pitfalls, including (3a) new FM architectures, such as a model-based approach integrated with NASA's MBSE efforts, and (3b) NASA's Office of the Chief Technologist: FM is identified in seven of NASA's 14 Space Technology Roadmaps, an opportunity to coalesce and establish a thrust area to progressively develop new FM techniques. The FM Handbook will help ensure that future missions do not encounter the same FM-related problems as previous missions. Version 1 of the FM Handbook is a good start.

  15. Assurance of Fault Management: Risk-Significant Adverse Condition Awareness

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda

    2016-01-01

    Fault Management (FM) systems are ranked high in risk-based assessments of criticality within flight software, emphasizing the importance of establishing highly competent domain expertise to provide assurance for NASA projects, especially as spaceflight systems continue to increase in complexity. Insight into specific characteristics of FM architectures seen embedded within safety- and mission-critical software systems analyzed by the NASA Independent Verification and Validation (IV&V) Program has been enhanced with an FM Technical Reference (TR) suite. Benefits are aimed beyond the IV&V community to those that seek ways to efficiently and effectively provide software assurance to reduce the FM risk posture of NASA and other space missions. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. The role FM has with regard to overall asset protection of flight software systems is being addressed with the development of an adverse condition (AC) database encompassing flight software vulnerabilities. Identification of potential off-nominal conditions and analysis to determine how a system responds to these conditions are important aspects of hazard analysis and fault management. Understanding what ACs the mission may face, and ensuring they are prevented or addressed, is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs, and allowing queries based on project, mission

  16. Evolution of shuttle avionics redundancy management/fault tolerance

    NASA Technical Reports Server (NTRS)

    Boykin, J. C.; Thibodeau, J. R.; Schneider, H. E.

    1985-01-01

    The challenge of providing redundancy management (RM) and fault tolerance to meet the Shuttle Program requirements of fail operational/fail safe for the avionics systems was complicated by the critical program constraints of weight, cost, and schedule. The basic, and sometimes false, effectiveness of less-than-pure RM designs is addressed. Evolution of the multiple input selection filter (the heart of the RM function) is discussed, with emphasis on the subtle interactions of the flight control system that were found to be potentially catastrophic. Several other general RM development problems are discussed, with particular emphasis on the inertial measurement unit RM, indicative of the complexity of managing that three-string system and its critical interfaces with the guidance and control systems.

  17. Anisotropy of Earth's D'' layer and stacking faults in the MgSiO3 post-perovskite phase.

    PubMed

    Oganov, Artem R; Martonák, Roman; Laio, Alessandro; Raiteri, Paolo; Parrinello, Michele

    2005-12-22

    The post-perovskite phase of (Mg,Fe)SiO3 is believed to be the main mineral phase of the Earth's lowermost mantle (the D'' layer). Its properties explain numerous geophysical observations associated with this layer: for example, the D'' discontinuity, its topography, and seismic anisotropy within the layer. Here we use a novel simulation technique, first-principles metadynamics, to identify a family of low-energy polytypic stacking-fault structures intermediate between the perovskite and post-perovskite phases. Metadynamics trajectories identify plane sliding involving the formation of stacking faults as the most favourable pathway for the phase transition, and as a likely mechanism for plastic deformation of perovskite and post-perovskite. In particular, the predicted slip planes are {010} for perovskite (consistent with experiment) and {110} for post-perovskite (in contrast to the previously expected {010} slip planes). Dominant slip planes define the lattice preferred orientation and elastic anisotropy of the texture. The {110} slip planes in post-perovskite require a much smaller degree of lattice preferred orientation to explain geophysical observations of shear-wave anisotropy in the D'' layer.

  18. Earthquake Nucleation and Fault Slip: Possible Experiments on a Natural Fault

    NASA Astrophysics Data System (ADS)

    Germanovich, L. N.; Murdoch, L. C.; Garagash, D.; Reches, Z.; Martel, S. J.; Johnston, M. J.; Ebenhack, J.; Gwaba, D.

    2011-12-01

    High-resolution deformation and seismic observations are usually made only near the Earth's surface, kilometers away from where earthquakes nucleate on active faults, and are limited by inverse-cube-distance attenuation and ground noise. We have developed an experimental approach that aims at reactivating faults in-situ using thermal techniques and fluid injection, which modify in-situ stresses and the fault strength until the fault slips. Mines where in-situ stresses are sufficient to drive faulting present an opportunity to conduct such experiments. The former Homestake gold mine in South Dakota is a good example. During our recent field work in the Homestake mine, we found a large fault that intersects multiple mine levels. The size and distinct structure of this fault make it a promising target for in-situ reactivation, which would likely be localized on a crack-like patch. Slow patch propagation, moderated by the injection rate and the rate of change of the background stresses, may become unstable, leading to the nucleation of a dynamic earthquake rupture. Our analyses for the Homestake fault conditions indicate that this transition occurs for a patch size ~1 m. This represents a fundamental limitation for laboratory experiments and necessitates larger-scale field tests at scales of ~10-100 m. The opportunity to observe earthquake nucleation on the Homestake Fault is feasible because slip could be initiated at a pre-defined location and time with instrumentation placed as close as a few meters from the nucleation site. Designing the experiment requires a detailed assessment of the state of stress in the vicinity of the fault. This is being conducted by simulating changes in pore pressure and effective stresses accompanying dewatering of the mine, and by evaluating in-situ stress measurements in light of a regional stress field modified by local perturbations caused by the mine workings.
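
    For orientation only: the quoted ~1 m transition scale is consistent in order of magnitude with standard rate-and-state estimates of the critical nucleation dimension. One commonly used scaling, not the authors' derivation and with purely illustrative parameter values, is

    ```latex
    % Critical nucleation length on a rate-and-state fault (C is a
    % model-dependent constant of order one):
    \[
      h^{*} \sim C \, \frac{\mu \, d_c}{\sigma_{\mathrm{eff}} \, (b - a)}
    \]
    % With shear modulus $\mu \approx 30$ GPa, slip-weakening distance
    % $d_c \approx 10\,\mu$m, effective normal stress
    % $\sigma_{\mathrm{eff}} \approx 100$ MPa, and $(b-a) \approx 0.004$,
    % this gives $h^{*}$ on the order of a meter.
    ```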

  19. Complex Plate Tectonic Features on Planetary Bodies: Analogs from Earth

    NASA Astrophysics Data System (ADS)

    Stock, J. M.; Smrekar, S. E.

    2016-12-01

    We review the types and scales of observations needed on other rocky planetary bodies (e.g., Mars, Venus, exoplanets) to evaluate evidence of present or past plate motions. Earth's plate boundaries were initially simplified into three basic types (ridges, trenches, and transform faults). Previous studies examined the Moon, Mars, Venus, Mercury, and icy moons such as Europa for evidence of features including linear rifts, arcuate convergent zones, strike-slip faults, and distributed deformation (rifting or folding). Yet several aspects merit further consideration. 1) Is the feature active or fossil? Earth's active mid-ocean ridges are bathymetric highs, and seafloor depth increases on either side; whereas fossil mid-ocean ridges may be as deep as the surrounding abyssal plain with no major rift valley, although with a minor gravity low (e.g., Osbourn Trough, W. Pacific Ocean). Fossil trenches have less topographic relief than active trenches (e.g., the fossil trench along the Patton Escarpment, west of California). 2) On Earth, fault patterns of spreading centers depend on volcanism. Excess volcanism reduces faulting. Fault visibility increases as spreading rates slow, or as magmatism decreases, producing high-angle normal faults parallel to the spreading center. At magma-poor spreading centers, high-resolution bathymetry shows low-angle detachment faults with large-scale mullions and striations parallel to plate motion (e.g., Mid-Atlantic Ridge, Southwest Indian Ridge). 3) Sedimentation on Earth masks features that might be visible on a non-erosional planet. Subduction zones on Earth in areas of low sedimentation have clear trench-parallel faults causing flexural deformation of the downgoing plate; in highly sedimented subduction zones, no such faults can be seen, and there may be no bathymetric trench at all. 4) Areas of Earth with broad upwelling, such as the North Fiji Basin, have complex plate tectonic patterns with many individual but poorly linked ridge

  20. Illuminating Northern California’s Active Faults

    USGS Publications Warehouse

    Prentice, Carol S.; Crosby, Christopher J.; Whitehill, Caroline S.; Arrowsmith, J. Ramon; Furlong, Kevin P.; Philips, David A.

    2009-01-01

    Newly acquired light detection and ranging (lidar) topographic data provide a powerful community resource for the study of landforms associated with the plate boundary faults of northern California (Figure 1). In the spring of 2007, GeoEarthScope, a component of the EarthScope Facility construction project funded by the U.S. National Science Foundation, acquired approximately 2000 square kilometers of airborne lidar topographic data along major active fault zones of northern California. These data are now freely available in point cloud (x, y, z coordinate data for every laser return), digital elevation model (DEM), and KMZ (zipped Keyhole Markup Language, for use in Google Earth™ and other similar software) formats through the GEON OpenTopography Portal (http://www.OpenTopography.org/data). Importantly, vegetation can be digitally removed from lidar data, producing high-resolution images (0.5- or 1.0-meter DEMs) of the ground surface beneath forested regions that reveal landforms typically obscured by vegetation canopy (Figure 2).

  1. Airborne hunt for faults in the Portland-Vancouver area

    USGS Publications Warehouse

    Blakely, Richard J.; Wells, Ray E.; Yelin, Thomas S.; Stauffer, Peter H.; Hendley, James W.

    1996-01-01

    Geologic hazards in the Portland-Vancouver area include faults entirely hidden by river sediments, vegetation, and urban development. A recent aerial geophysical survey revealed patterns in the Earth's magnetic field that confirm the existence of a previously suspected fault running through Portland. It also indicated that this fault may pose a significant seismic threat. This discovery has enabled the residents of the populous area to better prepare for future earthquakes.

  2. Earthquake-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory

    NASA Astrophysics Data System (ADS)

    Xu, Changyi; Sun, Wenke

    2014-12-01

    In this paper, we propose an approach to computing the Earth's coseismic volume change based on a spherical-Earth elastic dislocation theory. We present a general expression of the Earth's volume change for three typical dislocations: the shear, tensile and explosion sources. We conduct a case study for the 2004 Sumatra earthquake (Mw9.3), the 2010 Chile earthquake (Mw8.8), the 2011 Tohoku-Oki earthquake (Mw9.0) and the 2013 Okhotsk Sea earthquake (Mw8.3). The results show that mega-thrust earthquakes make the Earth expand and earthquakes along a normal fault make the Earth contract. We compare the volume changes computed for finite fault models and a point source of the 2011 Tohoku-Oki earthquake (Mw9.0). The large difference between these results indicates that coseismic changes in the Earth's volume (or mean radius) depend strongly on an earthquake's focal mechanism, especially the depth and dip angle. We then estimate the cumulative volume changes from historical earthquakes (Mw ≥ 7.0) since 1960, and obtain a mean-radius expansion rate of about 0.011 mm yr⁻¹.

  3. Strong ground motions generated by earthquakes on creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Locked faults are expected to produce noticeable ground shaking when they finally do slip; what is uncertain is how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  4. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the

  5. Active faults in Africa: a review

    NASA Astrophysics Data System (ADS)

    Skobelev, S. F.; Hanon, M.; Klerkx, J.; Govorova, N. N.; Lukina, N. V.; Kazmin, V. G.

    2004-03-01

    The active fault database and Map of Active Faults in Africa, at a scale of 1:5,000,000, were compiled according to the ILP Project II-2 "World Map of Major Active Faults". The data were collected in the Royal Museum of Central Africa, Tervuren, Belgium, and in the Geological Institute, Moscow, where the final edition was carried out. Active faults of Africa form three groups. The first group is represented by thrusts and reverse faults associated with compressed folds in northwest Africa. They belong to the western part of the Alpine-Central Asian collision belt. The faults disturb only the Earth's crust, and some of them do not penetrate deeper than the sedimentary cover. The second group comprises the faults of the Great African rift system. The faults form the known Western and Eastern branches, which are rifts with anomalous mantle below. The deep-seated mantle "hot" anomaly probably relates to the eastern volcanic branch. In the north, it joins the Aden-Red Sea rift zone. Active faults in Egypt, Libya, and Tunisia may represent a link between the East African rift system and the Pantellerian rift zone in the Mediterranean. The third group includes rare faults in the west of Equatorial Africa. The data were scarce, so most of the faults of this group were identified solely by interpretation of satellite imagery and seismicity. Some longer faults of the group may continue the transverse faults of the Atlantic and thus can penetrate into the mantle. This seems evident for the Cameroon fault line.

  6. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function" (Johnson 2011, 605). Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of the relationships between SE, SHM, and FM provides hints toward a modeling approach to

  7. San Andreas Fault in the Carrizo Plain

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The 1,200-kilometer (800-mile) San Andreas is the longest fault in California and one of the longest in North America. This perspective view of a portion of the fault was generated using data from the Shuttle Radar Topography Mission (SRTM), which flew on NASA's Space Shuttle last February, and an enhanced, true-color Landsat satellite image. The view shown looks southeast along the San Andreas where it cuts along the base of the mountains in the Temblor Range near Bakersfield. The fault is the distinctively linear feature to the right of the mountains. To the left of the range is a portion of the agriculturally rich San Joaquin Valley. In the background is the snow-capped peak of Mt. Pinos at an elevation of 2,692 meters (8,831 feet). The complex topography in the area is some of the most spectacular along the course of the fault. To the right of the fault is the famous Carrizo Plain. Dry conditions on the plain have helped preserve the surface trace of the fault, which is scrutinized by both amateur and professional geologists. In 1857, one of the largest earthquakes ever recorded in the United States occurred just north of the Carrizo Plain. With an estimated magnitude of 8.0, the quake severely shook buildings in Los Angeles, caused significant surface rupture along a 350-kilometer (220-mile) segment of the fault, and was felt as far away as Las Vegas, Nev. This portion of the San Andreas is an important area of study for seismologists. For visualization purposes, topographic heights displayed in this image are exaggerated two times.

    The elevation data used in this image was acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's land surface. To collect the 3-D SRTM data, engineers added a mast 60

  8. Fault-tolerant processing system

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L. (Inventor)

    1996-01-01

    A fault-tolerant, fiber-optic interconnect, or backplane, serves as a via for data transfer between modules. Fault tolerance algorithms are embedded in the backplane by dividing it into a read bus and a write bus and placing a redundancy management unit (RMU) between them, so that all data transmitted on the write bus is subjected to the fault tolerance algorithms before being passed for distribution to the read bus. The RMU provides both backplane control and fault tolerance.
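
    As a minimal sketch of the redundancy-management idea described here (an illustration, not the patented design), the function below votes bit-exact copies of a message taken from redundant write-bus channels before a single validated copy is released to the read bus; the strict-majority policy and the three-channel example are assumptions.

    ```python
    # Illustrative RMU-style voter (not the patented implementation):
    # compare redundant copies of a write-bus message and release one
    # validated copy to the read bus only if a strict majority agree.
    from collections import Counter

    def rmu_vote(copies: list[bytes]) -> bytes:
        """Bit-exact majority vote over redundant message copies.

        Raises ValueError when no copy achieves a strict majority, which
        a real system would treat as an uncorrectable fault."""
        winner, count = Counter(copies).most_common(1)[0]
        if count <= len(copies) // 2:
            raise ValueError("no majority -- uncorrectable fault")
        return winner

    # Example: one of three redundant channels delivers a corrupted copy.
    print(rmu_vote([b"\x01\x02", b"\x01\x02", b"\xff\x02"]))  # b'\x01\x02'
    ```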

  9. LiDAR-Assisted identification of an active fault near Truckee, California

    USGS Publications Warehouse

    Hunter, L.E.; Howle, J.F.; Rose, R.S.; Bawden, G.W.

    2011-01-01

    We use high-resolution (1.5-2.4 points/m²) bare-earth airborne Light Detection and Ranging (LiDAR) imagery to identify, map, constrain, and visualize fault-related geomorphology in densely vegetated terrain surrounding Martis Creek Dam near Truckee, California. Bare-earth LiDAR imagery reveals a previously unrecognized and apparently youthful right-lateral strike-slip fault that exhibits laterally continuous tectonic geomorphic features over a 35-km-long zone. If these interpretations are correct, the fault, herein named the Polaris fault, may represent a significant seismic hazard to the greater Truckee-Lake Tahoe and Reno-Carson City regions. Three-dimensional modeling of an offset late Quaternary terrace riser indicates a minimum tectonic slip rate of 0.4 ± 0.1 mm/yr. Mapped fault patterns are fairly typical of regional patterns elsewhere in the northern Walker Lane and are in strong coherence with moderate-magnitude historical seismicity of the immediate area, as well as the current regional stress regime. Based on a range of surface-rupture lengths and depths to the base of the seismogenic zone, we estimate a maximum earthquake magnitude (M) for the Polaris fault to be between 6.4 and 6.9.
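
    One common way to turn rupture dimensions into a magnitude estimate is an empirical scaling relation such as Wells and Coppersmith (1994); the sketch below applies their strike-slip surface-rupture-length regression as an illustration, not necessarily the authors' exact procedure, and the 12-km partial-rupture scenario is a hypothetical input.

    ```python
    import math

    def mw_from_rupture_length(srl_km: float, a: float = 5.16, b: float = 1.12) -> float:
        """Moment magnitude from surface-rupture length (km) using a
        regression of the Wells & Coppersmith (1994) form M = a + b*log10(SRL);
        the defaults are their strike-slip coefficients."""
        return a + b * math.log10(srl_km)

    # Hypothetical rupture scenarios on a 35-km-long fault zone:
    for length_km in (12.0, 35.0):
        print(f"SRL = {length_km:4.1f} km -> Mw ~ {mw_from_rupture_length(length_km):.1f}")
    # ~6.4 for a 12 km rupture, ~6.9 for a full-length rupture -- consistent
    # with the 6.4-6.9 range quoted for the Polaris fault.
    ```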

  10. A wideband magnetoresistive sensor for monitoring dynamic fault slip in laboratory fault friction experiments

    USGS Publications Warehouse

    Kilgore, Brian D.

    2017-01-01

    A non-contact, wideband method of sensing dynamic fault slip in laboratory geophysical experiments employs an inexpensive magnetoresistive sensor, a small neodymium rare-earth magnet, and user-built, application-specific wideband signal conditioning. The magnetoresistive sensor generates a voltage proportional to the changing angles of magnetic flux lines, generated by differential motion or rotation of the nearby magnet, through the sensor. The performance of an array of these sensors compares favorably to other conventional position-sensing methods employed at multiple locations along a 2 m long × 0.4 m deep laboratory strike-slip fault. For these magnetoresistive sensors, the lack of resonance signals commonly encountered with cantilever-type position-sensor mounting, the wideband response (DC to ≈ 100 kHz) that exceeds the capabilities of many traditional position sensors, and the small space required on the sample make them attractive options for capturing high-speed fault slip measurements in these laboratory experiments. An unanticipated observation of this study is the apparent sensitivity of this sensor to high-frequency electromagnetic signals associated with fault rupture and (or) rupture propagation, which may offer new insights into the physics of earthquake faulting.

  11. Management Approach for NASA's Earth Venture-1 (EV-1) Airborne Science Investigations

    NASA Technical Reports Server (NTRS)

    Guillory, Anthony R.; Denkins, Todd C.; Allen, B. Danette

    2013-01-01

    The Earth System Science Pathfinder (ESSP) Program Office (PO) is responsible for programmatic management of the National Aeronautics and Space Administration's (NASA) Science Mission Directorate (SMD) Earth Venture (EV) missions. EV is composed of both orbital and suborbital Earth science missions. The first of the Earth Venture missions is EV-1, which comprises Principal Investigator-led, temporally sustained, suborbital (airborne) science investigations cost-capped at $30M each over five years. Traditional orbital procedures, processes, and standards used to manage previous ESSP missions, while effective, are disproportionately comprehensive for suborbital missions. Conversely, existing airborne practices are primarily intended for smaller, temporally shorter investigations and are traditionally managed directly by a program scientist rather than by a program office such as ESSP. In 2010, ESSP crafted a management approach for the successful implementation of the EV-1 missions within the constructs of current governance models. NASA Research and Technology Program and Project Management Requirements form the foundation of the approach for EV-1. Additionally, requirements from other existing NASA Procedural Requirements (NPRs), systems engineering guidance, and management handbooks were adapted to manage programmatic, technical, schedule, and cost elements and risk. As the EV-1 missions near the end of their successful execution and project lifecycle, and as the submission deadline for the next mission proposals approaches, the ESSP PO has taken the lessons learned and updated the programmatic management approach for all future Earth Venture Suborbital (EVS) missions, yielding an even more flexible and streamlined management approach.

  12. Alpine Fault, New Zealand, SRTM Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Alpine fault runs parallel to, and just inland of, much of the west coast of New Zealand's South Island. This view was created from the near-global digital elevation model produced by the Shuttle Radar Topography Mission (SRTM) and is almost 500 kilometers (just over 300 miles) wide. Northwest is toward the top. The fault is extremely distinct in the topographic pattern, nearly slicing this scene in half lengthwise.

    In a regional context, the Alpine fault is part of a system of faults that connects a west dipping subduction zone to the northeast with an east dipping subduction zone to the southwest, both of which occur along the juncture of the Indo-Australian and Pacific tectonic plates. Thus, the fault itself constitutes the major surface manifestation of the plate boundary here. Offsets of streams and ridges evident in the field, and in this view of SRTM data, indicate right-lateral fault motion. But convergence also occurs across the fault, and this causes the continued uplift of the Southern Alps, New Zealand's largest mountain range, along the southeast side of the fault.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast (image top to bottom) direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect

  13. Active faults newly identified in Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2012-05-01

    The Bellingham Basin, which lies north of Seattle and south of Vancouver around the border between the United States and Canada in the northern part of the Cascadia subduction zone, is important for understanding the regional tectonic setting and current high rates of crustal deformation in the Pacific Northwest. Using a variety of new data, Kelsey et al. identified several active faults in the Bellingham Basin that had not been previously known. These faults lie more than 60 kilometers farther north of the previously recognized northern limit of active faulting in the area. The authors note that the newly recognized faults could produce earthquakes with magnitudes between 6 and 6.5 and thus should be considered in hazard assessments for the region. (Journal of Geophysical Research-Solid Earth, doi:10.1029/2011JB008816, 2012)

  14. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates the locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations) for the displacements and strains caused by a fault located in an isotropic, elastic half-space. The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations. This work
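
    As a minimal sketch of this kind of geodetic inversion loop (not Simplex itself): synthetic vertical displacements are fit by nonlinear least squares against a forward model. A textbook Mogi point source stands in for the elastic fault dislocation for brevity, and all parameter values and the noise level below are hypothetical.

    ```python
    # Fit source parameters by nonlinear least squares against a forward
    # model, in the spirit of Simplex. The forward model here is a Mogi
    # point source (vertical surface displacement from a volume change
    # dV at depth d), not the fault dislocation Simplex actually uses.
    import numpy as np
    from scipy.optimize import least_squares

    NU = 0.25  # Poisson's ratio

    def mogi_uz(r, depth, d_volume):
        """Vertical surface displacement at radial distance r (meters)."""
        return (1.0 - NU) * d_volume * depth / (np.pi * (depth**2 + r**2) ** 1.5)

    # Synthetic "observations" from a hidden source, plus Gaussian noise.
    rng = np.random.default_rng(0)
    r_obs = np.linspace(500.0, 20_000.0, 40)
    uz_obs = mogi_uz(r_obs, 3_000.0, 1e6) + rng.normal(0.0, 1e-4, r_obs.size)

    def residuals(params):
        depth, d_volume = params
        return mogi_uz(r_obs, depth, d_volume) - uz_obs

    fit = least_squares(residuals, x0=[1_000.0, 1e5])  # crude starting guess
    print(f"depth ~ {fit.x[0]:.0f} m, dV ~ {fit.x[1]:.2e} m^3")
    ```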

  15. Fault orientations in extensional and conjugate strike-slip environments and their implications

    USGS Publications Warehouse

    Thatcher, W.; Hill, D.P.

    1991-01-01

    Seismically active conjugate strike-slip faults in California and Japan typically have mutually orthogonal right- and left-lateral fault planes. Normal-fault dips at earthquake nucleation depths are concentrated between 40° and 50°. The observed orientations and their strong clustering are surprising, because conventional faulting theory suggests fault initiation with conjugate planes intersecting at 60° and 120° and with 60° normal-fault dips, or fault reactivation with a broad range of permitted orientations. The observations place new constraints on the mechanics of fault initiation, rotation, and evolutionary development. We speculate that the data could be explained by fault rotation into the observed orientations and deactivation for greater rotation, or by formation of localized shear zones beneath the brittle-ductile transition in Earth's crust. Initiation as weak frictional faults seems unlikely. -Authors
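
    The "conventional faulting theory" referred to here is Coulomb failure with frictional sliding: the optimal plane lies at 45° − φ/2 to the maximum compressive stress σ1, where φ = arctan(μ) is the friction angle. A quick computation with Byerlee-range friction coefficients (an assumption, μ ≈ 0.6-0.85) reproduces the ~60° dips and ~60°/120° conjugate geometry that the observations contradict.

    ```python
    import math

    def optimal_fault_angles(mu: float):
        """Coulomb-optimal geometry for friction coefficient mu.

        Returns (angle between fault plane and sigma1, predicted
        normal-fault dip when sigma1 is vertical, acute angle between
        the conjugate planes)."""
        phi = math.degrees(math.atan(mu))   # friction angle
        theta = 45.0 - phi / 2.0            # plane-to-sigma1 angle
        return theta, 90.0 - theta, 2.0 * theta

    for mu in (0.6, 0.85):  # Byerlee-range friction
        theta, dip, conj = optimal_fault_angles(mu)
        print(f"mu={mu}: {theta:.0f} deg to sigma1, normal-fault dip "
              f"{dip:.0f} deg, conjugate angle {conj:.0f} deg")
    # mu=0.6 gives ~60 deg dips and ~60 deg between conjugates -- the
    # 'conventional' initiation geometry discussed above.
    ```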

  16. Fault diagnosis

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision-aiding concept for diagnosing faults, especially faults that are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision-aiding concept developed from those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to
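
    A minimal sketch of the model-based idea behind Draphys (a hypothetical component graph and sensor names, not the actual system): a fault candidate is any component whose downstream influence explains every abnormal reading, and candidates are eliminated as propagation makes more readings abnormal over time.

    ```python
    # Toy model-based diagnosis: trace fault candidates through a
    # directed "affects" graph of components and sensors.
    from collections import deque

    AFFECTS = {
        "fuel_pump": ["fuel_line"],
        "fuel_line": ["engine"],
        "engine": ["hyd_pump", "thrust_sensor"],
        "hyd_pump": ["hyd_pressure_sensor"],
    }

    def downstream(component: str) -> set[str]:
        """All components/sensors reachable from `component`."""
        seen, queue = set(), deque([component])
        while queue:
            for nxt in AFFECTS.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    def candidates(abnormal: set[str]) -> list[str]:
        """Components whose propagation cone covers all abnormal readings."""
        return [c for c in AFFECTS if abnormal <= downstream(c) | {c}]

    # As more readings turn abnormal over time, candidates are eliminated.
    print(candidates({"hyd_pressure_sensor"}))                    # four candidates
    print(candidates({"hyd_pressure_sensor", "thrust_sensor"}))   # hyd_pump ruled out
    ```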

  17. How Do Normal Faults Grow?

    NASA Astrophysics Data System (ADS)

    Jackson, C. A. L.; Bell, R. E.; Rotevatn, A.; Tvedt, A. B. M.

    2015-12-01

    Normal faulting accommodates stretching of the Earth's crust and is one of the fundamental controls on landscape evolution and sediment dispersal in rift basins. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate that, in the case of seismic-scale growth faults, growth strata thickness patterns and relay zone kinematics, rather than displacement backstripping, should be assessed to directly constrain
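
    The contrast between the two growth models can be made concrete with a toy displacement-length (D-L) trajectory; the final fault dimensions and the linear scaling below are hypothetical and purely illustrative.

    ```python
    # Toy D-L trajectories for the two growth models (hypothetical fault
    # reaching final length L_f = 10 km and displacement D_f = 100 m).
    import numpy as np

    steps = np.linspace(0.2, 1.0, 5)    # fraction of growth history elapsed
    L_f, D_f = 10.0, 100.0              # km, m

    iso_L, iso_D = L_f * steps, D_f * steps               # D and L grow together
    coh_L, coh_D = np.full_like(steps, L_f), D_f * steps  # L set early, D accrues

    for t, il, idp, cl, cd in zip(steps, iso_L, iso_D, coh_L, coh_D):
        print(f"t={t:.1f}  isolated: L={il:4.1f} km, D={idp:5.1f} m   "
              f"coherent: L={cl:4.1f} km, D={cd:5.1f} m")
    ```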

  18. Product quality management based on CNC machine fault prognostics and diagnosis

    NASA Astrophysics Data System (ADS)

    Kozlov, A. M.; Al-jonid, Kh M.; Kozlov, A. A.; Antar, Sh D.

    2018-03-01

    This paper presents a new fault classification model and an integrated approach to fault diagnosis that combines ideas from Neuro-Fuzzy networks (NF), Dynamic Bayesian Networks (DBN) and the Particle Filtering (PF) algorithm on a single platform. In the new model, faults are categorized in two aspects, namely first- and second-degree faults. First-degree faults are instantaneous in nature, while second-degree faults are evolutional and appear as a developing phenomenon that starts from the initial stage, goes through the development stage and finally ends at the mature stage. These categories of faults have a lifetime that is inversely proportional to a machine tool's life according to a modified version of Taylor's equation. For fault diagnosis, this framework consists of two phases: the first focuses on fault prognosis, which is done online, and the second is concerned with fault diagnosis, which depends on both off-line and on-line modules. In the first phase, a neuro-fuzzy predictor is used to decide whether to embark on Condition-Based Maintenance (CBM) or fault diagnosis, based on the severity of a fault. The second phase comes into action only when an evolving fault goes beyond a critical threshold, called the CBM limit, and a command is issued for fault diagnosis. During this phase, DBN and PF techniques are used as an intelligent fault diagnosis system to determine the severity, time and location of the fault. The feasibility of this approach was tested in a simulation environment using a CNC machine as a case study, and the results were studied and analyzed.
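
    A schematic sketch of the two-phase gating described above, with a hypothetical severity scale and threshold, and a linear-trend stub standing in for the paper's neuro-fuzzy predictor:

    ```python
    # Gate between condition-based maintenance and full diagnosis based
    # on a predicted fault severity (all numbers are hypothetical).
    def predict_severity(history: list[float]) -> float:
        """Stub prognostic model: extrapolate the last linear trend one step."""
        if len(history) < 2:
            return history[-1]
        return history[-1] + (history[-1] - history[-2])

    CBM_LIMIT = 0.6  # hypothetical critical threshold on a 0..1 scale

    def dispatch(history: list[float]) -> str:
        s = predict_severity(history)
        if s < CBM_LIMIT:
            return f"severity {s:.2f}: continue condition-based maintenance"
        return f"severity {s:.2f}: trigger DBN/PF fault diagnosis"

    print(dispatch([0.20, 0.24]))  # evolving fault, still below the CBM limit
    print(dispatch([0.48, 0.58]))  # crossing the limit -> full diagnosis
    ```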

  19. Model Meets Data: Challenges and Opportunities to Implement Land Management in Earth System Models

    NASA Astrophysics Data System (ADS)

    Pongratz, J.; Dolman, A. J.; Don, A.; Erb, K. H.; Fuchs, R.; Herold, M.; Jones, C.; Luyssaert, S.; Kuemmerle, T.; Meyfroidt, P.

    2016-12-01

    Land-based demand for food and fibre is projected to increase in the future. In light of global sustainability challenges only part of this increase will be met by expansion of land use into relatively untouched regions. Additional demand will have to be fulfilled by intensification and other adjustments in management of land that already is under agricultural and forestry use. Such land management today occurs on about half of the ice-free land surface, as compared to only about one quarter that has undergone a change in land cover. As the number of studies revealing substantial biogeophysical and biogeochemical effects of land management is increasing, moving beyond land cover change towards including land management has become a key focus for Earth system modeling. However, a basis for prioritizing land management activities for implementation in models is lacking. We lay this basis for prioritization in a collaborative project across the disciplines of Earth system modeling, land system science, and Earth observation. We first assess the status and plans of implementing land management in Earth system and dynamic global vegetation models. A clear trend towards higher complexity of land use representation is visible. We then assess five criteria for prioritizing the implementation of land management activities: (1) spatial extent, (2) evidence for substantial effects on the Earth system, (3) process understanding, (4) possibility to link the management activity to existing concepts and structures of models, (5) availability of data required as model input. While the first three criteria have been assessed by an earlier study for ten common management activities, we review strategies for implementation in models and the availability of required datasets. We can thus evaluate the management activities for their performance in terms of importance for the Earth system, possibility of technical implementation in models, and data availability. This synthesis reveals

  20. Model meets data: Challenges and opportunities to implement land management in Earth System Models

    NASA Astrophysics Data System (ADS)

    Pongratz, Julia; Dolman, Han; Don, Axel; Erb, Karl-Heinz; Fuchs, Richard; Herold, Martin; Jones, Chris; Luyssaert, Sebastiaan; Kuemmerle, Tobias; Meyfroidt, Patrick; Naudts, Kim

    2017-04-01

    Land-based demand for food and fibre is projected to increase in the future. In light of global sustainability challenges only part of this increase will be met by expansion of land use into relatively untouched regions. Additional demand will have to be fulfilled by intensification and other adjustments in management of land that already is under agricultural and forestry use. Such land management today occurs on about half of the ice-free land surface, as compared to only about one quarter that has undergone a change in land cover. As the number of studies revealing substantial biogeophysical and biogeochemical effects of land management is increasing, moving beyond land cover change towards including land management has become a key focus for Earth system modeling. However, a basis for prioritizing land management activities for implementation in models is lacking. We lay this basis for prioritization in a collaborative project across the disciplines of Earth system modeling, land system science, and Earth observation. We first assess the status and plans of implementing land management in Earth system and dynamic global vegetation models. A clear trend towards higher complexity of land use representation is visible. We then assess five criteria for prioritizing the implementation of land management activities: (1) spatial extent, (2) evidence for substantial effects on the Earth system, (3) process understanding, (4) possibility to link the management activity to existing concepts and structures of models, (5) availability of data required as model input. While the first three criteria have been assessed by an earlier study for ten common management activities, we review strategies for implementation in models and the availability of required datasets. We can thus evaluate the management activities for their performance in terms of importance for the Earth system, possibility of technical implementation in models, and data availability. This synthesis reveals

  1. Perspective view, Landsat overlay San Andreas Fault, Palmdale, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault. This segment of the fault lies near the city of Palmdale, California (the flat area in the right half of the image) about 60 kilometers (37 miles) north of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. The Lake Palmdale Reservoir, approximately 1.5 kilometers (0.9 miles) across, sits in the topographic depression created by past movement along the fault. Highway 14 is the prominent linear feature starting at the lower left edge of the image and continuing along the far side of the reservoir. The patterns of residential and agricultural development around Palmdale are seen in the Landsat imagery in the right half of the image. SRTM topographic data will be used by geologists studying fault dynamics and landforms resulting from active tectonics.

    This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture

  2. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  3. How do normal faults grow?

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher; Bell, Rebecca; Rotevatn, Atle; Tvedt, Anette

    2016-04-01

    Normal faulting accommodates stretching of the Earth's crust, and it is arguably the most fundamental tectonic process leading to continent rupture and oceanic crust emplacement. Furthermore, the incremental and finite geometries associated with normal faulting dictate landscape evolution, sediment dispersal and hydrocarbon systems development in rifts. Displacement-length scaling relationships compiled from global datasets suggest normal faults grow via a sympathetic increase in these two parameters (the 'isolated fault model'). This model has dominated the structural geology literature for >20 years and underpins the structural and tectono-stratigraphic models developed for active rifts. However, relatively recent analysis of high-quality 3D seismic reflection data suggests faults may grow by rapid establishment of their near-final length prior to significant displacement accumulation (the 'coherent fault model'). The isolated and coherent fault models make very different predictions regarding the tectono-stratigraphic evolution of rift basins, so assessing their applicability is important. To date, however, very few studies have explicitly set out to critically test the coherent fault model; thus, it may be argued, it has yet to be widely accepted in the structural geology community. Displacement backstripping is a simple graphical technique typically used to determine how faults lengthen and accumulate displacement; this technique should therefore allow us to test the competing fault models. However, in this talk we use several subsurface case studies to show that the most commonly used backstripping methods (the 'original' and 'modified' methods) are of limited value, because application of one over the other requires an a priori assumption of the model most applicable to any given fault; we argue this is illogical given that the style of growth is exactly what the analysis is attempting to determine. We then revisit our case studies and demonstrate

  4. The Talas-Fergana Fault, Kirghiz and Kazakh, USSR

    USGS Publications Warehouse

    Wallace, R.E.

    1976-01-01

    The great Talas-Fergana fault transects the Soviet republic of Kirghiz in Soviet Central Asia and extends southeastward into China and northwestward into Kazakh SSR (figs. 1 and 2). This great rupture in the Earth's crust rivals the San Andreas fault in California; it is long (approximately 900 kilometers), complex, and possibly has a lateral displacement of hundreds of kilometers similar to that on the San Andreas fault. The Soviet geologist V. S. Burtman suggested that right-lateral offset of 250 kilometers has occurred, citing a shift of Devonian rocks as evidence (fig. 3). By no means do all Soviet geologists agree. Some hold the view that there is no lateral displacement along the Talas-Fergana fault and that the anomalous distribution of Paleozoic rocks is a result of the original position of deposition. 

  5. Disease management programmes in Germany: a fundamental fault.

    PubMed

    Felder, Stefan

    2006-12-01

    In 2001 Germany introduced disease management programmes (DMPs) in order to give sick funds an incentive to improve the treatment of the chronically ill. By 1 March 2005, a total of 3275 programmes had been approved, 2760 for diabetes, 390 for breast cancer and 125 for coronary heart disease, covering roughly 1 million patients. German DMPs show a major fault regarding financial incentives. Sick funds increase their transfers from the risk adjustment scheme when their clients enroll in DMPs. Since this money is a lump sum, sick funds do not necessarily foster treatment of the chronically ill. Similarly, reimbursement of physicians is also not well targeted to the needs of DMPs. Preliminary evidence points to poor performance of German DMPs.

  6. An operational, multistate, earth observation data management system

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Hill, C. T.; Morgan, R. P.; Gohagan, J. K.; Hays, T. R.; Ballard, R. J.; Crnkovich, G. G.; Schaeffer, M. A.

    1977-01-01

    State, local, and regional agencies involved in natural resources management were investigated as potential users of satellite remotely sensed data. This group's needs are assessed and alternative data management systems serving some of those needs are outlined. It is concluded that an operational earth observation data management system will be of most use to these user agencies if it provides a full range of information services -- from raw data acquisition to interpretation and dissemination of final information products.

  7. Local precision nets for monitoring movements of faults and large engineering structures

    NASA Technical Reports Server (NTRS)

    Henneberg, H. G.

    1978-01-01

    Local high-precision geodetic nets were installed along the Bocono Fault to observe possible horizontal crustal deformations and movements. A few large structures in the fault area are also included in the investigation. In the near future, measurements shall be extended to other sites on the Bocono Fault and also to the El Pilar Fault. Similar high-precision geodetic nets are applied in Venezuela to observe the behavior of large structures, such as bridges and large dams, and earth-surface deformations due to industrial activities.

  8. Multi-temporal mapping of a large, slow-moving earth flow for kinematic interpretation

    USGS Publications Warehouse

    Guerriero, Luigi; Coe, Jeffrey A.; Revellino, Paola; Guadagno, Francesco M.

    2014-01-01

    Periodic movement of large, thick landslides on discrete basal surfaces produces modifications of the topographic surface, creates faults and folds, and influences the locations of springs, ponds, and streams (Baum et al., 1993; Coe et al., 2009). The geometry of the basal slip surface, which can be controlled by geological structures (e.g., fold axes, faults, etc.; Revellino et al., 2010; Grelle et al., 2011), and spatial variation in the rate of displacement are responsible for differential deformation and kinematic segmentation of the landslide body. Thus, large landslides are often composed of several distinct kinematic elements. Each element represents a discrete kinematic domain within the main landslide that is broadly characterized by stretching (extension) of the upper part of the landslide and shortening (compression) near the landslide toe (Baum and Fleming, 1991; Guerriero et al., in review). On the basis of this knowledge, we used photo-interpretive and GPS field mapping methods to map structures on the surface of the Montaguto earth flow in the Apennine Mountains of southern Italy at a scale of 1:6,000 (Guerriero et al., 2013a; Fig. 1). The earth flow has been periodically active since at least 1954. The most extensive and destructive period of activity began on April 26, 2006, when an estimated 6 million m³ of material mobilized, covering and closing Italian National Road SS90 and damaging residential structures (Guerriero et al., 2013b). Our maps show the distribution and evolution of normal faults, thrust faults, strike-slip faults, flank ridges, and hydrological features at nine different dates (October 1954; June 1976; June 1991; June 2003; June 2005; May 2006; October 2007; July 2009; and March 2010) between 1954 and 2010. Within the earth flow we recognized several kinematic elements and associated structures (Fig. 2a). Within each kinematic element (e.g., the earth flow neck; Fig. 2b), the flow velocity was highest in the middle, and

  9. Advanced cloud fault tolerance system

    NASA Astrophysics Data System (ADS)

    Sumangali, K.; Benny, Niketa

    2017-11-01

    Cloud computing has become a prevalent on-demand service on the internet for storing, managing and processing data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, a fault tolerance mechanism is required to abstract faults from users. We have proposed a fault-tolerant architecture that combines proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.
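
    A generic illustration of combining proactive and reactive fault tolerance (hypothetical API, node names, and health values; the abstract does not specify an implementation): work is steered toward the healthiest replicas, and a failure during a call triggers failover to the next one.

    ```python
    # Proactive: rank replicas by their most recent health probe.
    # Reactive: on failure, fall through to the next-healthiest replica.
    HEALTH = {"vm-a": 0.35, "vm-b": 0.92, "vm-c": 0.78}   # probe results, 0..1
    DOWN = {"vm-b"}                                        # faults strike anyway

    def call(node: str, payload: str) -> str:
        if node in DOWN:
            raise ConnectionError(node)
        return f"{payload} handled by {node}"

    def submit(payload: str) -> str:
        for node in sorted(HEALTH, key=HEALTH.get, reverse=True):
            try:
                return call(node, payload)
            except ConnectionError:
                continue
        raise RuntimeError("all replicas failed")

    print(submit("job-42"))  # vm-b probes healthiest but is down -> vm-c serves
    ```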

  10. Ultra-thin clay layers facilitate seismic slip in carbonate faults.

    PubMed

    Smeraglia, Luca; Billi, Andrea; Carminati, Eugenio; Cavallo, Andrea; Di Toro, Giulio; Spagnuolo, Elena; Zorzi, Federico

    2017-04-06

    Many earthquakes propagate up to the Earth's surface producing surface ruptures. Seismic slip propagation is facilitated by along-fault low dynamic frictional resistance, which is controlled by a number of physico-chemical lubrication mechanisms. In particular, rotary shear experiments conducted at seismic slip rates (1 m s⁻¹) show that phyllosilicates can facilitate co-seismic slip along faults during earthquakes. This evidence is crucial for hazard assessment along oceanic subduction zones, where pelagic clays participate in seismic slip propagation. Conversely, the reason why, in continental domains, co-seismic slip along faults can propagate up to the Earth's surface is still poorly understood. We document the occurrence of micrometer-thick phyllosilicate-bearing layers along a carbonate-hosted seismogenic extensional fault in the central Apennines, Italy. Using friction experiments, we demonstrate that, at seismic slip rates (1 m s⁻¹), similar calcite gouges with pre-existing phyllosilicate-bearing (clay content ≤3 wt.%) micro-layers weaken faster than calcite gouges or mixed calcite-phyllosilicate gouges. We thus propose that, within calcite gouge, ultra-low clay content (≤3 wt.%) localized along micrometer-thick layers can facilitate seismic slip propagation during earthquakes in continental domains, possibly enhancing surface displacement.

  11. Sorption of the Rare Earth Elements and Yttrium (REE-Y) in calcite: the mechanism of a new effective tool in identifying paleoearthquakes on carbonate faults

    NASA Astrophysics Data System (ADS)

    Moraetis, Daniel; Mouslopoulou, Vasiliki; Pratikakis, Alexandros

    2015-04-01

    A new tool for identifying paleoearthquakes on carbonate faults has been successfully tested on two carbonate faults in southern Europe (the Magnola Fault in Italy and the Spili Fault in Greece): the Rare Earth Element and Yttrium (REE-Y) method (Manighetti et al., 2010; Mouslopoulou et al., 2011). The method is based on the property of the calcite in limestone scarps to absorb REE and Y from the soil during residence beneath the ground surface (e.g., before exhumation due to earthquakes). Although the method is established, the details of the enrichment mechanism are poorly investigated. Here we use published data together with new information from pot experiments to shed light on the sorption mechanism and the time effectiveness of the REE-Y method. Data from the Magnola and Spili faults show that the average chemical enrichment in REE-Y is ~45%, while the denudation rate of the enriched zones is ~1% higher every 400 years due to exposure of the fault scarp to weathering. They also show that the chemical enrichment is significant even for short periods of residence time (e.g., ~100 years). To better understand the enrichment mechanism, we performed a series of pot experiments in which carbonate tiles extracted from the Spili Fault were buried in soil collected from the hanging wall of the same fault. We irrigated the pots with artificial rain equal to 5 years of rainfall in Crete, at temperatures of 15°C and 25°C. Subsequently, we performed sorption isotherm, kinetic and pH-edge tests for europium (Eu), cerium (Ce) and ytterbium (Yb), which occur in the calcite minerals. The processes of adsorption and precipitation in the batch experiments are simulated with the Mineql software. The pot experiments indicate incorporation of REE and Y into the surface of the carbonate tile that is in contact with the soil. The pH of the leached solution during the rain application ranged from 7.6 to 8.3. Release of nutrients such as Ca is higher in the leached

  12. Geophysical character of the intraplate Wabash Fault System from the Wabash EarthScope FlexArray

    NASA Astrophysics Data System (ADS)

    Conder, J. A.; Zhu, L.; Wood, J. D.

    2017-12-01

    The Wabash Seismic Array was an EarthScope-funded FlexArray deployment across the Wabash Fault System. The Wabash system has long been known for oil and gas production. The fault system is often characterized as an intraplate seismic zone, as it has produced several earthquakes above M4 in the last 50 years and potentially several above M7 in the Holocene. While earthquakes are far less numerous in the Wabash system than in the nearby New Madrid seismic zone, the seismic moment released is nearly twice that of New Madrid over the past 50 years. The array consisted of 45 broadband instruments deployed across the axis to study the larger structure and 3 smaller phased arrays of 9 short-period instruments each to get a better sense of the local seismic output of smaller events. First results from the northern phased array indicate that seismicity in the Wabash behaves markedly differently from that in New Madrid, with a low b-value around 0.7. Receiver functions show a 50-km-thick crust beneath the system, thickening somewhat to the west. A variable-depth, positive-amplitude conversion in the deep crust gives evidence for a rift pillow at the base of the system within a dense lowermost crustal layer. Low Vs and a moderate negative-amplitude conversion in the mid crust suggest a possible weak zone that could localize deformation. Shear-wave splitting shows fast directions consistent with absolute plate motion across the system. Split times drop to 0.5-0.7 seconds within the valley, compared with the 1-1.5 second range outside the valley. This decrease suggests a change in mantle signature beneath the fault system, possibly resulting from a small degree of local flow in the asthenosphere, either along axis (as may occur with a thinned lithosphere) or by vertical flow (e.g., from delamination or dripping). We are building a 2D tomographic model across the region, relying primarily on teleseismic body waves. The tomography will undoubtedly show variations in crustal structure
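
    The b-value quoted above parameterizes the Gutenberg-Richter relation log10 N(≥M) = a − bM. A standard way to estimate it is the Aki (1965) maximum-likelihood formula, sketched here on a synthetic catalog (hypothetical magnitudes, not the Wabash array data):

    ```python
    # Aki (1965) MLE for the Gutenberg-Richter b-value:
    #   b = log10(e) / (mean(M) - Mc), for magnitudes M >= completeness Mc.
    import math, random

    def b_value(mags: list[float], m_c: float) -> float:
        above = [m for m in mags if m >= m_c]
        return math.log10(math.e) / (sum(above) / len(above) - m_c)

    # Synthetic catalog drawn from a G-R distribution with true b = 0.7:
    # magnitudes above Mc are exponential with rate b*ln(10).
    random.seed(1)
    true_b, m_c = 0.7, 2.0
    catalog = [m_c + random.expovariate(true_b * math.log(10)) for _ in range(2000)]
    print(f"estimated b ~ {b_value(catalog, m_c):.2f}")  # close to 0.7
    ```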

  13. Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems

    NASA Technical Reports Server (NTRS)

    Fitz, Rhonda

    2017-01-01

    As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing
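
    A minimal sketch of the kind of queryable adverse-condition record store described here (hypothetical schema and entries, not the actual NASA IV&V repository):

    ```python
    # Toy AC repository with field-based queries, mirroring the query
    # axes named above (project, mission type, domain, causal fault).
    from dataclasses import dataclass

    @dataclass
    class AdverseCondition:
        project: str
        mission_type: str
        domain: str
        causal_fault: str
        description: str

    REPOSITORY = [
        AdverseCondition("DemoSat", "orbital", "GNC", "sensor dropout",
                         "attitude estimate diverges during safe-mode entry"),
        AdverseCondition("DemoLander", "planetary", "propulsion", "valve stuck open",
                         "thruster firing persists past commanded cutoff"),
    ]

    def query(**filters: str) -> list[AdverseCondition]:
        """Return ACs whose fields match every supplied filter."""
        return [ac for ac in REPOSITORY
                if all(getattr(ac, k) == v for k, v in filters.items())]

    for ac in query(mission_type="orbital"):
        print(ac.project, "->", ac.description)
    ```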

  15. Tectonic lineations and frictional faulting on a relatively simple body (Ariel)

    NASA Astrophysics Data System (ADS)

    Nyffenegger, Paul; Davis, Dan M.; Consolmagno, Guy J.

    1997-09-01

    Anderson's model of faulting and the Mohr-Coulomb failure criterion can predict the orientations of faults generated in laboratory triaxial compression experiments, but do a much poorer job of explaining the orientations of outcrop- and map-scale faults on Earth. This failure may be due to the structural complexity of the Earth's lithosphere, the failure of laboratory experiments to predict accurately the strength of natural faults, or some fundamental flaw in the model. A simpler environment, such as the lithosphere of an icy satellite, allows us to test whether this model can succeed in less complex settings. A mathematical method is developed to analyze patterns in fracture orientations that can be applied to fractures in the lithospheres of icy satellites. In a initial test of the method, more than 300 lineations on Uranus' satellite Ariel are examined. A nonrandom pattern of lineations is looked for, and the source of the stresses that caused those features and the strength of the material in which they occur are constrained. It is impossible to observe directly the slip on these fractures. However, their orientations are clearly nonrandom and appear to be consistent with Andersonian strike-slip faulting in a relatively weak frictional lithosphere during one or more episodes of tidal flexing.
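
    For concreteness, the Andersonian/Mohr-Coulomb prediction being tested is that shear fractures form at an angle beta = 45 degrees minus arctan(mu)/2 to the maximum compressive stress. A minimal sketch (the friction values are illustrative, not fits to Ariel data):

      import math

      def anderson_fault_angle(mu):
          """Angle (degrees) between a Mohr-Coulomb shear fracture and the
          sigma-1 direction for friction coefficient mu (cohesionless case).
          Conjugate strike-slip faults lie at plus/minus this angle about
          sigma-1 in Anderson's model."""
          return 45.0 - math.degrees(math.atan(mu)) / 2.0

      # Lower friction pushes the predicted conjugate faults toward 45 degrees,
      # one testable signature of a "weak" frictional lithosphere.
      for mu in (0.85, 0.6, 0.2):
          print(mu, round(anderson_fault_angle(mu), 1))  # 24.8, 29.5, 39.3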

  16. Fault tectonics and earthquake hazards in parts of southern California. [peninsular ranges, Garlock fault, Salton Trough area, and western Mojave Desert

    NASA Technical Reports Server (NTRS)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.; Gazley, C., Jr.; Lamar, J. V.; Stratton, R. H.

    1976-01-01

    The author has identified the following significant results. Four previously unknown faults were discovered in basement terrane of the Peninsular Ranges. These have been named the San Ysidro Creek fault, Thing Valley fault, Canyon City fault, and Warren Canyon fault. In addition, fault gouge and breccia were recognized along the San Diego River fault. Study of features on Skylab imagery and review of geologic and seismic data suggest that the risk of a damaging earthquake is greater along the northwestern portion of the Elsinore fault than along the southeastern portion. Physiographic indicators of active faulting along the Garlock fault identifiable in Skylab imagery include scarps, linear ridges, shutter ridges, faceted ridges, linear valleys, undrained depressions, and offset drainage. The following previously unrecognized fault segments are postulated for the Salton Trough area: (1) an extension of a previously known fault in the San Andreas fault set located southeast of the Salton Sea; (2) an extension of the active San Jacinto fault zone along a tonal change in cultivated fields across Mexicali Valley (the tonal change may represent different soil conditions on opposite sides of a fault). For the Skylab and LANDSAT images studied, pseudocolor transformations offer no advantages over the original images in the recognition of faults. Alluvial deposits of different ages, a marble unit, and iron oxide gossans of the Mojave Mining District are more readily differentiated on images prepared from ratios of individual bands of the S-192 multispectral scanner data. The San Andreas fault was also made more distinct in the 8/2 and 9/2 band ratios by enhancement of vegetation differences on opposite sides of the fault. Preliminary analysis indicates a significant earth resources potential for the discrimination of soil and rock types, including mineral alteration zones. This application should be actively pursued.

  17. Management approach recommendations. Earth Observatory Satellite system definition study (EOS)

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Management analyses and tradeoffs were performed to determine the most cost effective management approach for the Earth Observatory Satellite (EOS) Phase C/D. The basic objectives of the management approach are identified. Some of the subjects considered are as follows: (1) contract startup phase, (2) project management control system, (3) configuration management, (4) quality control and reliability engineering requirements, and (5) the parts procurement program.

  18. Transforming Water Management: an Emerging Promise of Integrated Earth Observations

    NASA Astrophysics Data System (ADS)

    Lawford, R. G.

    2011-12-01

    Throughout its history, civilization has relied on technology to facilitate many of its advances. New innovations and technologies have often provided strategic advantages that have led to transformations in institutions, economies, and ultimately societies. Observational and information technologies are leading to significant developments in the water sector. After a brief introduction tracing the role of observational technologies in the areas of hydrology and water cycle science, this talk explores the existing and potential contributions of remote sensing data in water resource management around the world. In particular, it outlines the steps being undertaken by the Group on Earth Observations (GEO) and its Water Task to facilitate capacity building efforts in water management using Earth Observations in Asia, Africa, and Latin America and the Caribbean. Success stories on the benefits of using Earth Observations and applying GEO principles are provided. While GEO and its capacity building efforts are contributing to the transformation of water management through interoperability, data sharing, and capacity building, the full potential of these contributions has not been realized because impediments and challenges still remain.

  19. A simulation of the San Andreas fault experiment

    NASA Technical Reports Server (NTRS)

    Agreen, R. W.; Smith, D. E.

    1973-01-01

    The San Andreas Fault Experiment, which employs two laser tracking systems for measuring the relative motion of two points on opposite sides of the fault, was simulated for an eight-year observation period. The two tracking stations are located near San Diego on the western side of the fault and near Quincy on the eastern side; they are roughly 900 kilometers apart. Both simultaneously track laser-reflector-equipped satellites as they pass near the stations. Tracking of the Beacon Explorer C spacecraft was simulated for these two stations during August and September for eight consecutive years. An error analysis of the recovery of the relative location of Quincy from the data was made, allowing for model errors in the mass of the earth, the gravity field, solar radiation pressure, atmospheric drag, errors in the position of the San Diego site, and laser system range biases and noise. The results of this simulation indicate that the distance of Quincy from San Diego will be determined each year with a precision of about 10 centimeters. This figure is based on the accuracy of earth models and other parameters available in 1972.

  20. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on
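
    The publish-subscribe idea behind the FTB can be illustrated with a generic in-process event bus. This is only a sketch of the concept; all class, field, and namespace names below are hypothetical and do not reproduce the actual FTB API specification:

      import time
      from collections import defaultdict
      from dataclasses import dataclass, field
      from typing import Callable

      @dataclass
      class FaultEvent:
          # Hypothetical fields; the real FTB spec defines its own event schema.
          severity: str                  # e.g., "INFO", "ERROR", "FATAL"
          namespace: str                 # e.g., "ftb.mpi", "ftb.scheduler"
          payload: dict = field(default_factory=dict)
          timestamp: float = field(default_factory=time.time)

      class FaultBackplane:
          """Minimal publish-subscribe bus: components publish fault events,
          and other system software subscribes by namespace so responses can
          be coordinated rather than handled in isolation."""
          def __init__(self):
              self._subs = defaultdict(list)

          def subscribe(self, namespace: str, handler: Callable[[FaultEvent], None]):
              self._subs[namespace].append(handler)

          def publish(self, event: FaultEvent):
              for handler in self._subs[event.namespace]:
                  handler(event)

      bus = FaultBackplane()
      bus.subscribe("ftb.mpi", lambda e: print("scheduler sees:", e.severity, e.payload))
      bus.publish(FaultEvent("ERROR", "ftb.mpi", {"node": "n042", "fault": "link-down"}))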

  1. Postglacial rebound and fault instability in Fennoscandia

    NASA Astrophysics Data System (ADS)

    Wu, Patrick; Johnston, Paul; Lambeck, Kurt

    1999-12-01

    The best available rebound model is used to investigate the role that postglacial rebound plays in triggering seismicity in Fennoscandia. The salient features of the model include tectonic stress due to spreading at the North Atlantic Ridge, overburden pressure, gravitationally self-consistent ocean loading, and the realistic deglaciation history and compressible earth model which best fits the sea-level and ice data in Fennoscandia. The model predicts the spatio-temporal evolution of the state of stress, the magnitude of fault instability, the timing of the onset of this instability, and the mode of failure of lateglacial and postglacial seismicity. The consistency of the predictions with the observations suggests that postglacial rebound is probably the cause of the large postglacial thrust faults observed in Fennoscandia. The model also predicts a uniform stress field and instability in central Fennoscandia for the present, with thrust faulting as the predicted mode of failure. However, the lack of spatial correlation of the present seismicity with the region of uplift, and the existence of strike-slip and normal modes of current seismicity are inconsistent with this model. Further unmodelled factors such as the presence of high-angle faults in the central region of uplift along the Baltic coast would be required in order to explain the pattern of seismicity today in terms of postglacial rebound stress. The sensitivity of the model predictions to the effects of compressibility, tectonic stress, viscosity and ice model is also investigated. For sites outside the ice margin, it is found that the mode of failure is sensitive to the presence of tectonic stress and that the onset timing is also dependent on compressibility. For sites within the ice margin, the effect of Earth rheology is shown to be small. However, ice load history is shown to have larger effects on the onset time of earthquakes and the magnitude of fault instability.

  2. Coordinated Fault Tolerance for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as jobs schedulers, resource managers, and monitoring tools.

  3. Design for interaction between humans and intelligent systems during real-time fault management

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.

    1992-01-01

    Initial results are reported to provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real-time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective of the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information-level requirements analysis of HCI, (2) high-level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.

  4. Nearly frictionless faulting by unclamping in long-term interaction models

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper, finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.

  5. Length-displacement scaling of thrust faults on the Moon and the formation of uphill-facing scarps

    NASA Astrophysics Data System (ADS)

    Roggon, Lars; Hetzel, Ralf; Hiesinger, Harald; Clark, Jaclyn D.; Hampel, Andrea; van der Bogert, Carolyn H.

    2017-08-01

    Fault populations on terrestrial planets exhibit a linear relationship between their length, L, and the maximum displacement, D, which implies a constant D/L ratio during fault growth. Although it is known that D/L ratios of faults are typically a few percent on Earth and 0.2-0.8% on Mars and Mercury, the D/L ratios of lunar faults are not well characterized. Quantifying the D/L ratios of faults on the Moon is, however, crucial for a better understanding of lunar tectonics, including for studies of the amount of global lunar contraction. Here, we use high-resolution digital terrain models to perform a topographic analysis of four lunar thrust faults - Simpelius-1, Morozov (S1), Fowler, and Racah X-1 - that range in length from 1.3 km to 15.4 km. First, we determine the along-strike variation of the vertical displacement from ≥ 20 topographic profiles across each fault. For measuring the vertical displacements, we use a method that is commonly applied to fault scarps on Earth and that does not require detrending of the profiles. The resulting profiles show that the displacement changes gradually along these faults' strike, with maximum vertical displacements ranging from 17 ± 2 m for Simpelius-1 to 192 ± 30 m for Racah X-1. Assuming a fault dip of 30° yields maximum total displacements (D) that are twice as large as the vertical displacements. The linear relationship between D and L supports the inference that lunar faults gradually accumulate displacement as they propagate laterally. For the faults we investigated, the D/L ratio is ∼2.3%, an order of magnitude higher than theoretical predictions for the Moon but similar to values for faults on Earth. We also employ finite-element modeling and a Mohr circle stress analysis to investigate why many lunar thrust faults, including three of those studied here, form uphill-facing scarps. Our analysis shows that fault slip is preferentially initiated on planes that dip in the same direction as the topography, because
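
    The dip correction and D/L arithmetic used above are straightforward; a minimal sketch using the displacements and lengths quoted in the abstract (a 30-degree dip doubles the vertical throw, since sin 30° = 0.5):

      import math

      def d_over_l(vertical_disp_m, length_km, dip_deg=30.0):
          """Convert maximum vertical displacement to total displacement on a
          fault plane of the given dip, then form the D/L ratio."""
          d_total = vertical_disp_m / math.sin(math.radians(dip_deg))
          return d_total / (length_km * 1000.0)

      # Per-fault ratios land close to the ~2.3% population fit reported above.
      print(f"{d_over_l(17, 1.3):.2%}")    # Simpelius-1 -> ~2.6%
      print(f"{d_over_l(192, 15.4):.2%}")  # Racah X-1   -> ~2.5%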

  6. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment, and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage, and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  7. An operational, multistate, earth observation data management system

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Hays, T. R.; Hill, C. T.; Ballard, R. J.; Morgan, R. P.; Crnkovich, G. G.; Gohagan, J. K.; Schaeffer, M. A.

    1977-01-01

    The purpose of this paper is to investigate a group of potential users of satellite remotely sensed data - state, local, and regional agencies involved in natural resources management. We assess this group's needs in five states and outline alternative data management systems to serve some of those needs. We conclude that an operational Earth Observation Data Management System (EODMS) will be of most use to these user agencies if it provides a full range of information services - from raw data acquisition to interpretation and dissemination of final information products.

  8. Quantifying Anderson's fault types

    USGS Publications Warehouse

    Simpson, R.W.

    1997-01-01

    Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters φ and ψ to new quantities named Aφ and Aψ. In their simple forms, Aφ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and Aψ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, Aφ and Aψ agree to within 2% (or 1°), a difference of little practical significance, although Aψ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an Aφ ranging from -3 to +3 and Aψ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of Anderson fault parameters Aφ and Aψ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
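
    In the notation now in common use, the scalar parameter Aφ is built from the stress shape ratio φ = (σ2 - σ3)/(σ1 - σ3) and an Andersonian regime index. A minimal sketch of the piecewise form, following Simpson (1997):

      def a_phi(phi, regime):
          """Simpson (1997) fault-type parameter A_phi from the stress shape
          ratio phi = (s2 - s3)/(s1 - s3) in [0, 1] and the Andersonian
          regime index n (0 normal, 1 strike-slip, 2 reverse):
              A_phi = (n + 0.5) + (-1)**n * (phi - 0.5),
          giving 0-1 normal, 1-2 strike-slip, and 2-3 reverse."""
          n = {"normal": 0, "strike-slip": 1, "reverse": 2}[regime]
          return (n + 0.5) + (-1) ** n * (phi - 0.5)

      print(a_phi(0.5, "normal"))       # 0.5, mid normal-faulting domain
      print(a_phi(0.5, "strike-slip"))  # 1.5
      print(a_phi(0.5, "reverse"))      # 2.5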

  9. Persistent Identifiers in Earth science data management environments

    NASA Astrophysics Data System (ADS)

    Weigel, Tobias; Stockhause, Martina; Lautenschlager, Michael

    2014-05-01

    Globally resolvable Persistent Identifiers (PIDs) that carry additional context information (which can be any form of metadata) are increasingly used by data management infrastructures for fundamental tasks. The notion of a Persistent Identifier is originally an abstract concept that aims to provide identifiers that are quality-controlled and maintained beyond the life time of the original issuer, for example through the use of redirection mechanisms. Popular implementations of the PID concept are for example the Handle System and the DOI System based on it. These systems also move beyond the simple identification concept by providing facilities that can hold additional context information. In the Earth sciences as elsewhere, data managers are increasingly attracted to PIDs because of the opportunities these facilities provide; however, long-term viable principles and mechanisms for efficient organization of PIDs and context information are not yet available or well established. In this respect, promising techniques are to type the information that is associated with PIDs and to construct actionable collections of PIDs. There are two main drivers for extended PID usage: Earth science data management middleware use cases and applications geared towards scientific end-users. Motivating scenarios from data management include hierarchical data and metadata management, consistent data tracking and improvements in the accountability of processes. If PIDs are consistently assigned to data objects, context information can be carried over to subsequent data life cycle stages much easier. This can also ease data migration from one major curation domain to another, e.g. from early dissemination within research communities to formal publication and long-term archival stages, and it can help to document processes across technical and organizational boundaries. For scientific end users, application scenarios include for example more personalized data citation and improvements in the
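
    As an illustration of what "actionable" means here, a Handle (or a DOI, which is a Handle under the "10." prefix) can be resolved programmatically through the public proxy. The URL pattern below follows the hdl.handle.net REST interface as commonly documented, but treat it as an assumption to verify before relying on it:

      import json
      import urllib.request

      def resolve_handle(handle):
          """Resolve a Handle via the public proxy's REST interface and
          return its typed values (e.g., type 'URL' -> landing page). The
          same mechanism can carry richer typed context information."""
          url = f"https://hdl.handle.net/api/handles/{handle}"
          with urllib.request.urlopen(url) as resp:
              record = json.load(resp)
          return [(v["type"], v["data"]["value"]) for v in record.get("values", [])]

      # Illustrative identifier string only (e.g., the DOI Handbook's DOI).
      print(resolve_handle("10.1000/182"))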

  10. Functional Fault Modeling Conventions and Practices for Real-Time Fault Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the conventions, best practices, and processes that were established based on the prototype development of a Functional Fault Model (FFM) for a Cryogenic System that would be used for real-time Fault Isolation in a Fault Detection, Isolation, and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FFMs were created offline but would eventually be used by a real-time reasoner to isolate faults in a Cryogenic System. Through their development and review, a set of modeling conventions and best practices were established. The prototype FFM development also provided a pathfinder for future FFM development processes. This paper documents the rationale and considerations for robust FFMs that can easily be transitioned to a real-time operating environment.
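
    The reasoning a real-time reasoner performs over such a model can be caricatured as consistency checking between observed symptoms and per-failure-mode signatures. The toy model below is invented purely for illustration (single-fault, exact-match semantics); it is not the actual cryogenic-system FFM:

      # Each failure mode maps to the set of measurements it would perturb.
      FFM = {
          "valve_stuck_closed": {"flow_low", "tank_press_high"},
          "sensor_bias":        {"flow_low"},
          "leak_upstream":      {"flow_low", "tank_press_low"},
      }

      def isolate(symptoms):
          """Return candidate failure modes whose predicted signature matches
          the observed symptom set exactly (single-fault assumption). Modes
          with identical signatures would form an ambiguity group."""
          return [mode for mode, sig in FFM.items() if sig == symptoms]

      print(isolate({"flow_low", "tank_press_high"}))  # ['valve_stuck_closed']
      print(isolate({"flow_low"}))                     # ['sensor_bias']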

  11. The stress shadow effect: a mechanical analysis of the evenly-spaced parallel strike-slip faults in the San Andreas fault system

    NASA Astrophysics Data System (ADS)

    Zuza, A. V.; Yin, A.; Lin, J. C.

    2015-12-01

    -slip fault systems, both on Earth and throughout the solar system (e.g., the Tiger Stripe Fractures on Enceladus).

  12. Deconvoluting complex structural histories archived in brittle fault zones

    NASA Astrophysics Data System (ADS)

    Viola, G.; Scheiber, T.; Fredin, O.; Zwingmann, H.; Margreth, A.; Knies, J.

    2016-11-01

    Brittle deformation can saturate the Earth's crust with faults and fractures in an apparently chaotic fashion. The details of brittle deformational histories, and their implications for, for example, seismotectonics and landscape, can thus be difficult to untangle. Fortunately, brittle faults archive subtle details of the stress and physical/chemical conditions at the time of initial strain localization and eventual subsequent slip(s). Hence, reading those archives offers the possibility to deconvolute protracted brittle deformation. Here we report K-Ar isotopic dating of synkinematic/authigenic illite coupled with structural analysis to illustrate an innovative approach to the high-resolution deconvolution of brittle faulting and fluid-driven alteration of a reactivated fault in western Norway. Permian extension preceded coaxial reactivation in the Jurassic, and Early Cretaceous fluid-related alteration caused pervasive clay authigenesis. This approach represents important progress towards time-constrained structural models, where illite characterization and K-Ar analysis are a fundamental tool to date faulting and alteration in crystalline rocks.

  13. BioEarth: Envisioning and developing a new regional earth system model to inform natural and agricultural resource management

    DOE PAGES

    Adam, Jennifer C.; Stephens, Jennie C.; Chung, Serena H.; ...

    2014-04-24

    Uncertainties in global change impacts and the complexities associated with the interconnected cycling of nitrogen, carbon, and water present daunting management challenges. Existing models provide detailed information on specific sub-systems (e.g., land, air, water, and economics). An increasing awareness of the unintended consequences of management decisions resulting from the interconnectedness of these sub-systems, however, necessitates coupled regional earth system models (EaSMs). Decision makers’ needs and priorities can be integrated into the model design and development processes to enhance decision-making relevance and “usability” of EaSMs. BioEarth is a research initiative currently under development, with a focus on the U.S. Pacific Northwest region, that explores the coupling of multiple stand-alone EaSMs to generate usable information for resource decision-making. Direct engagement between model developers and non-academic stakeholders involved in resource and environmental management decisions throughout the model development process is a critical component of this effort. BioEarth utilizes a bottom-up approach for its land surface model that preserves fine spatial-scale sensitivities and lateral hydrologic connectivity, which makes it unique among many regional EaSMs. Here, we describe the BioEarth initiative and highlight opportunities and challenges associated with coupling multiple stand-alone models to generate usable information for agricultural and natural resource decision-making.

  14. Application of NASA management approach to solve complex problems on earth

    NASA Technical Reports Server (NTRS)

    Potate, J. S.

    1972-01-01

    The application of NASA management approach to solving complex problems on earth is discussed. The management of the Apollo program is presented as an example of effective management techniques. Four key elements of effective management are analyzed. Photographs of the Cape Kennedy launch sites and supporting equipment are included to support the discussions.

  15. Shaded Relief with Height as Color, Kunlun fault, east-central Tibet

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These two images show exactly the same area, part of the Kunlun fault in northern Tibet. The image on the left was created using the best global topographic data set previously available, the U.S. Geological Survey's GTOPO30. In contrast, the much more detailed image on the right was generated with data from the Shuttle Radar Topography Mission, which collected enough measurements to map 80 percent of Earth's landmass at this level of precision.

    The area covered is the western part of the Kunlun fault, at the north edge of east-central Tibet. The sharp line marking the southern edge of the mountains, running left to right across the scene, represents a strike-slip fault, much like California's San Andreas Fault, which is more than 1,000 kilometers (621 miles) long. The most recent earthquake on the Kunlun fault occurred on November 14, 2001. At a magnitude of 8.1, it produced a surface break over 350 kilometers (217 miles) long. Preliminary reports indicate a maximum offset of 7 meters (23 feet) in the central section of the break. This five-kilometer (three-mile) high area is uninhabited by humans, so there was little damage reported, despite the large magnitude. Shuttle Radar Topography Mission maps of active faults in Tibet and other parts of the world provide geologists with a unique tool for determining how active a fault is and the probability of future large earthquakes on the fault. This is done both by measuring offsets in topographic features and by using the SRTM digital map as a baseline for processing data from orbiting satellites using the techniques of radar interferometry. Based on geologic evidence, the Kunlun fault's long-term slip rate is believed to be about 11 millimeters per year (0.4 inches per year). The Kunlun fault and the Altyn Tagh fault, 400 kilometers (249 miles) to the north, are two major faults that help accommodate the ongoing collision between the Indian and Asian tectonic plates.
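
    The slip-rate and offset figures quoted above imply a simple back-of-envelope recurrence estimate for maximum-offset ruptures (illustrative arithmetic only, and an upper bound, since mean offset along the break was smaller than the 7 m maximum):

      slip_rate_mm_per_yr = 11.0   # long-term geologic rate quoted above
      coseismic_offset_m = 7.0     # maximum offset in the 2001 M8.1 rupture

      # Time for the long-term rate to re-accumulate the maximum offset:
      repeat_years = coseismic_offset_m * 1000.0 / slip_rate_mm_per_yr
      print(round(repeat_years))   # ~636 years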

    In contrast with the wealth of detail

  16. Earth-Mars Telecommunications and Information Management System (TIMS): Antenna Visibility Determination, Network Simulation, and Management Models

    NASA Technical Reports Server (NTRS)

    Odubiyi, Jide; Kocur, David; Pino, Nino; Chu, Don

    1996-01-01

    This report presents the results of our research on Earth-Mars Telecommunications and Information Management System (TIMS) network modeling and unattended network operations. The primary focus of our research is to investigate the feasibility of the TIMS architecture, which links the Earth-based Mars Operations Control Center, Science Data Processing Facility, Mars Network Management Center, and the Deep Space Network of antennae to the relay satellites and other communication network elements based in the Mars region. The investigation was enhanced by developing Build 3 of the TIMS network modeling and simulation model. The results of several 'what-if' scenarios are reported, along with descriptions of the upgraded antenna visibility determination software and the unattended network management prototype.

  17. Intelligent fault diagnosis and failure management of flight control actuation systems

    NASA Technical Reports Server (NTRS)

    Bonnice, William F.; Baker, Walter

    1988-01-01

    The real-time fault diagnosis and failure management (FDFM) of current operational and experimental dual tandem aircraft flight control system actuators was investigated. Dual tandem actuators were studied because of the active FDFM capability required to manage the redundancy of these actuators. The FDFM methods used on current dual tandem actuators were determined by examining six specific actuators. The FDFM capability on these six actuators was also evaluated. One approach for improving the FDFM capability on dual tandem actuators may be through the application of artificial intelligence (AI) technology. Existing AI approaches and applications of FDFM were examined and evaluated. Based on the general survey of AI FDFM approaches, the potential role of AI technology for real-time actuator FDFM was determined. Finally, FDFM and maintainability improvements for dual tandem actuators were recommended.

  18. An architecture for automated fault diagnosis. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry R.

    1989-01-01

    A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues that affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.

  19. The Bear River Fault Zone, Wyoming and Utah: Complex Ruptures on a Young Normal Fault

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Hecker, S.; Haproff, P.; Beukelman, G.; Erickson, B.

    2012-12-01

    The Bear River fault zone (BRFZ), a set of normal fault scarps located in the Rocky Mountains at the eastern margin of Basin and Range extension, is a rare example of a nascent surface-rupturing fault. Paleoseismic investigations (West, 1994; this study) indicate that the entire neotectonic history of the BRFZ may consist of two large surface-faulting events in the late Holocene. We have estimated a maximum per-event vertical displacement of 6-6.5 m at the south end of the fault where it abuts the north flank of the east-west-trending Uinta Mountains. However, large hanging-wall depressions resulting from back rotation, which front scarps that locally exceed 15 m in height, are prevalent along the main trace, obscuring the net displacement and its along-strike distribution. The modest length (~35 km) of the BRFZ indicates ruptures with a large displacement-to-length ratio, which implies earthquakes with a high static stress drop. The BRFZ is one of several immature (low cumulative displacement) normal faults in the Rocky Mountain region that appear to produce high-stress-drop earthquakes. West (1992) interpreted the BRFZ as an extensionally reactivated ramp of the late Cretaceous-early Tertiary Hogsback thrust. LiDAR data on the southern section of the fault and Google Earth imagery show that these young ruptures are more extensive than currently mapped, with newly identified large (>10 m) antithetic scarps and footwall graben. The scarps of the BRFZ extend across a 2.5-5.0 km-wide zone, making this the widest and most complex Holocene surface rupture in the Intermountain West. The broad distribution of late Holocene scarps is consistent with reactivation of shallow bedrock structures, but the overall geometry of the BRFZ at depth and its extent into the seismogenic zone are uncertain.
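
    The asserted link between displacement-to-length ratio and static stress drop follows from the dimensional scaling delta_sigma ~ C * mu * D / L, with a crack-geometry factor C of order unity. A sketch with an assumed crustal rigidity and the figures quoted above:

      mu = 3.0e10   # crustal rigidity, Pa (assumed, typical value)
      D  = 6.0      # per-event displacement, m (from the abstract)
      L  = 35.0e3   # rupture length, m (from the abstract)

      # With C = 1, a large D/L maps directly to a high stress drop:
      delta_sigma_mpa = mu * D / L / 1e6
      print(round(delta_sigma_mpa, 1), "MPa")  # ~5 MPa, vs ~1-3 MPa typical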

  20. Automatic Fault Characterization via Abnormality-Enhanced Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Laguna, I; de Supinski, B R

    Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.

  1. Dream project: Applications of earth observations to disaster risk management

    NASA Astrophysics Data System (ADS)

    Dyke, G.; Gill, S.; Davies, R.; Betorz, F.; Andalsvik, Y.; Cackler, J.; Dos Santos, W.; Dunlop, K.; Ferreira, I.; Kebe, F.; Lamboglia, E.; Matsubara, Y.; Nikolaidis, V.; Ostoja-Starzewski, S.; Sakita, M.; Verstappen, N.

    2011-01-01

    The field of disaster risk management is relatively new and takes a structured approach to managing uncertainty related to the threat of natural and man-made disasters. Disaster risk management consists primarily of risk assessment and the development of strategies to mitigate disaster risk. This paper will discuss how increasing both Earth observation data and information technology capabilities can contribute to disaster risk management, particularly in Belize. The paper presents the results and recommendations of a project conducted by an international and interdisciplinary team of experts at the 2009 session of the International Space University in NASA Ames Research Center (California, USA). The aim is to explore the combination of current, planned and potential space-aided, airborne, and ground-based Earth observation tools, the emergence of powerful new web-based and mobile data management tools, and how this combination can support and improve the emerging field of disaster risk management. The starting point of the project was the World Bank's Comprehensive Approach to Probabilistic Risk Assessment (CAPRA) program, focused in Central America. This program was used as a test bed to analyze current space technologies used in risk management and develop new strategies and tools to be applied in other regions around the world.

  2. Teaching earth science

    USGS Publications Warehouse

    Alpha, Tau Rho; Diggles, Michael F.

    1998-01-01

    This CD-ROM contains 17 teaching tools: 16 interactive HyperCard 'stacks' and a printable model. They are separated into the following categories: Geologic Processes, Earthquakes and Faulting, and Map Projections and Globes. A 'navigation' stack, Earth Science, is provided as a 'launching' place from which to access all of the other stacks. You can also open the HyperCard Stacks folder and launch any of the 16 stacks yourself. In addition, a 17th tool, Earth and Tectonic Globes, is provided as a printable document. Each of the tools can be copied onto a 1.4-MB floppy disk and distributed freely.

  3. High level organizing principles for display of systems fault information for commercial flight crews

    NASA Technical Reports Server (NTRS)

    Rogers, William H.; Schutte, Paul C.

    1993-01-01

    Advanced fault management aiding concepts for commercial pilots are being developed in a research program at NASA Langley Research Center. One aim of this program is to re-evaluate current design principles for display of fault information to the flight crew: (1) from a cognitive engineering perspective and (2) in light of the availability of new types of information generated by advanced fault management aids. The study described in this paper specifically addresses principles for organizing fault information for display to pilots based on their mental models of fault management.

  4. Detection of postseismic fault-zone collapse following the Landers earthquake

    USGS Publications Warehouse

    Massonnet, D.; Thatcher, W.; Vadon, H.

    1996-01-01

    Stress changes caused by fault movement in an earthquake induce transient aseismic crustal movements in the earthquake source region that continue for months to decades following large events. These motions reflect aseismic adjustments of the fault zone and/or bulk deformation of the surroundings in response to applied stresses, and supply information regarding the inelastic behaviour of the Earth's crust. These processes are imperfectly understood because it is difficult to infer what occurs at depth using only surface measurements, which are in general poorly sampled. Here we push satellite radar interferometry to near its typical artefact level, to obtain a map of the postseismic deformation field in the three years following the 28 June 1992 Landers, California earthquake. From the map, we deduce two distinct types of deformation: afterslip at depth on the fault that ruptured in the earthquake, and shortening normal to the fault zone. The latter movement may reflect the closure of dilatant cracks and fluid expulsion from a transiently over-pressured fault zone.
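
    For scale, the sensitivity of the radar interferometry used here follows from the phase-to-displacement relation d = phi * lambda / (4 * pi). A minimal sketch for C-band (the ERS-class radars that imaged Landers):

      import math

      WAVELENGTH_CM = 5.66  # C-band radar wavelength (e.g., ERS-1/2)

      def los_displacement_cm(phase_rad, wavelength_cm=WAVELENGTH_CM):
          """Line-of-sight displacement implied by an unwrapped phase change
          in a repeat-pass interferogram."""
          return phase_rad * wavelength_cm / (4.0 * math.pi)

      # One full fringe (2*pi of phase) corresponds to half a wavelength of
      # ground motion along the radar line of sight:
      print(round(los_displacement_cm(2.0 * math.pi), 2), "cm per fringe")  # 2.83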

  5. An Information Architect's View of Earth Observations for Disaster Risk Management

    NASA Astrophysics Data System (ADS)

    Moe, K.; Evans, J. D.; Cappelaere, P. G.; Frye, S. W.; Mandl, D.; Dobbs, K. E.

    2014-12-01

    Satellite observations play a significant role in supporting disaster response and risk management; however, data complexity is a barrier to broader use, especially by the public. In December 2013 the Committee on Earth Observation Satellites Working Group on Information Systems and Services documented a high-level reference model for the use of Earth observation satellites and associated products to support disaster risk management within the Global Earth Observation System of Systems context. The enterprise architecture identified the important role of user access to all key functions supporting situational awareness and decision-making. This paper focuses on the need to develop actionable information products from these Earth observations to simplify the discovery, access, and use of tailored products. To this end, our team has developed an Open GeoSocial API proof-of-concept for GEOSS. We envision public access to mobile apps available on smart phones using common browsers, where users can set up a profile and specify a region of interest for monitoring events such as floods and landslides. Information about susceptibility and weather forecasts about flood risks can be accessed. Users can generate geo-located information and photos of local events, and these can be shared on social media. The information architecture can address usability challenges to transform sensor data into actionable information, based on the terminology of the emergency management community responsible for informing the public. This paper describes the approach to collecting relevant material from the disasters and risk management community to address the end user needs for information. The resulting information architecture addresses the structural design of the shared information in the disasters and risk management enterprise. Key challenges are organizing and labeling information to support both online user communities and machine-to-machine processing for automated product generation.

  6. Orbital debris and near-Earth environmental management: A chronology

    NASA Technical Reports Server (NTRS)

    Portree, David S. F.; Loftus, Joseph P., Jr.

    1993-01-01

    This chronology covers the 32-year history of orbital debris and near-Earth environmental concerns. It tracks near-Earth environmental hazard creation, research, observation, experimentation, management, mitigation, protection, and policy-making, with emphasis on the orbital debris problem. Included are the Project West Ford experiments; Soviet ASAT tests and U.S. Delta upper stage explosions; the Ariane V16 explosion; U.N. treaties pertinent to near-Earth environmental problems; the PARCS tests; space nuclear power issues; the SPS/orbital debris link; Space Shuttle and space station orbital debris issues; the Solwind ASAT test; milestones in theory and modeling; the Cosmos 954, Salyut 7, and Skylab reentries; the orbital debris/meteoroid research link; detection system development; orbital debris shielding development; popular culture and orbital debris; Solar Max results; LDEF results; orbital debris issues peculiar to geosynchronous orbit, including reboost policies and the stable plane; seminal papers, reports, and studies; the increasing effects of space activities on astronomy; and growing international awareness of the near-Earth environment.

  7. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within "Knowledge-based Workflow System for Grid Applications" under the 6th Framework Programme. The workflow management system is intended to semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge that is contained in the information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g., GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from ES clusters.

  8. Block rotations, fault domains and crustal deformation in the western US

    NASA Technical Reports Server (NTRS)

    Nur, Amos

    1990-01-01

    The aim of the project was to develop a 3D model of crustal deformation by distributed fault sets and to test the model results in the field. In the first part of the project, Nur's 2D model (1986) was generalized to 3D. In Nur's model, the frictional strength of the rocks and faults of a domain provides a tight constraint on the amount of rotation that a fault set can undergo during block rotation. Domains of fault sets are commonly found in regions where the deformation is distributed across a region. The interaction of each fault set causes the fault-bounded blocks to rotate. The work that has been done towards quantifying the rotation of fault sets in a 3D stress field is briefly summarized. In the second part of the project, field studies were carried out in Israel, Nevada, and China. These studies combined both the paleomagnetic and structural information necessary to test the block rotation model results. In accordance with the model, the field studies demonstrate that faults and the attending fault-bounded blocks slip and rotate away from the direction of maximum compression when deformation is distributed across fault sets. Slip and rotation of fault sets may continue as long as the earth's crustal strength is not exceeded. More optimally oriented faults must form for subsequent deformation to occur. Eventually the block rotation mechanism may create a complex pattern of intersecting generations of faults.

  9. Exploring Best Practices for Research Data Management in Earth Science through Collaborating with University Libraries

    NASA Astrophysics Data System (ADS)

    Wang, T.; Branch, B. D.

    2013-12-01

    Earth Science research data, its data management, informatics processing, and its data curation are valuable in allowing earth scientists to make new discoveries. But how to actively manage these research assets to ensure they remain safe, secure, accessible, and reusable over the long term is a big challenge. Nowadays, the data deluge makes this challenge even more difficult. To address the growing demand for managing earth science data, the Council on Library and Information Resources (CLIR) partners with the Library and Technology Services (LTS) of Lehigh University and Purdue University Libraries (PUL) on hosting postdoctoral fellows in data curation activity. This interdisciplinary fellowship program, funded by the Sloan Foundation, innovatively connects university libraries and earth science departments and provides earth science Ph.D.s with opportunities to use their research experience in earth science and the data curation training received during their fellowship to explore best practices for research data management in earth science. In the process of exploring best practices for data curation in earth science, the CLIR Data Curation Fellows have accumulated rich experience and insights on the data management behaviors and needs of earth scientists. Specifically, Ting Wang, the postdoctoral fellow at Lehigh University, has worked together with the LTS support team for the College of Arts and Sciences, the Web Specialists, and the High Performance Computing Team to assess and meet the data management needs of researchers at the Department of Earth and Environmental Sciences (EES). By interviewing the faculty members and graduate students at EES, the fellow has identified a variety of data-related challenges in different research fields of earth science, such as climate, ecology, geochemistry, and geomorphology. The investigation findings of the fellow also support the LTS in developing campus infrastructure for long-term data management in the sciences. Likewise

  10. Lessons Learned in the Livingstone 2 on Earth Observing One Flight Experiment

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra C.; Sweet, Adam J.; Shulman, Seth

    2005-01-01

    The Livingstone 2 (L2) model-based diagnosis software is a reusable diagnostic tool for monitoring complex systems. In 2004, L2 was integrated with the JPL Autonomous Sciencecraft Experiment (ASE) and deployed on board Goddard's Earth Observing One (EO-1) remote sensing satellite to monitor and diagnose the EO-1 space science instruments and imaging sequence. This paper reports on lessons learned from this flight experiment. The goals for this experiment, including validation of minimum success criteria and of a series of diagnostic scenarios, have all been successfully met. Long-term operations in space are ongoing, as a test of the maturity of the system, with L2 performance remaining flawless. L2 has demonstrated the ability to track the state of the system during nominal operations, detect simulated abnormalities in operations, and isolate failures to their root-cause fault. Specific advances demonstrated include diagnosis of ambiguity groups rather than a single fault candidate; hypothesis revision given new sensor evidence about the state of the system; and the capability to check for faults in a dynamic system without having to wait until the system is quiescent. The major benefits of this advanced health management technology are to increase mission duration and reliability through intelligent fault protection, and robust autonomous operations with reduced dependency on supervisory operations from Earth. The workload for operators will be reduced by telemetry of processed state-of-health information rather than raw data. The long-term vision is that of making diagnosis available to the onboard planner or executive, allowing autonomy software to re-plan in order to work around known component failures. For a system that is expected to evolve substantially over its lifetime, as for the International Space Station, the model-based approach has definite advantages over rule-based expert systems and limit-checking fault protection systems, as these do not

  11. Ste. Genevieve Fault Zone, Missouri and Illinois. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.J.; Lumm, D.K.

    1985-07-01

    The Ste. Genevieve Fault Zone is a major structural feature which strikes NW-SE for about 190 km on the NE flank of the Ozark Dome. There is up to 900 m of vertical displacement on high-angle normal and reverse faults in the fault zone. At both ends the Ste. Genevieve Fault Zone dies out into a monocline. Two periods of faulting occurred. The first was in late Middle Devonian time and the second from latest Mississippian through early Pennsylvanian time, with possible minor post-Pennsylvanian movement. No evidence was found to support the hypothesis that the Ste. Genevieve Fault Zone is part of a northwestward extension of the late Precambrian-early Cambrian Reelfoot Rift. The magnetic and gravity anomalies cited in support of the ''St. Louis arm'' of the Reelfoot Rift possibly reflect deep crustal features underlying and older than the volcanic terrain of the St. Francois Mountains (1.2 to 1.5 billion years old). In regard to neotectonics, no displacements of Quaternary sediments have been detected, but small earthquakes occur from time to time along the Ste. Genevieve Fault Zone. Many faults in the zone appear capable of slipping under the current stress regime of east-northeast to west-southwest horizontal compression. We conclude that the zone may continue to experience small earth movements, but catastrophic quakes similar to those at New Madrid in 1811-12 are unlikely. 32 figs., 1 tab.

  12. Recently Active Traces of the Berryessa Fault, California: A Digital Database

    USGS Publications Warehouse

    Lienkaemper, James J.

    2012-01-01

    The purpose of this map is to show the location of and evidence for recent movement on active fault traces within the Berryessa section and parts of adjacent sections of the Green Valley Fault Zone, California. The location and recency of the mapped traces are primarily based on geomorphic expression of the fault as interpreted from large-scale 2010 aerial photography and from 0.5- and 1.0-meter bare-earth LiDAR imagery (that is, high-resolution topographic data) acquired in 2007 and 2011. In a few places, evidence of fault creep and offset Holocene strata in trenches and natural exposures has confirmed the activity of some of these traces. This publication is formatted both as a digital database for use within a geographic information system (GIS) and, for broader public access, as map images that may be browsed online or downloaded as a summary map. The report text describes the types of scientific observations used to make the map, gives references pertaining to the fault and the evidence of faulting, and provides guidance on the use and limitations of the map.

  13. Advanced information processing system: The Army Fault-Tolerant Architecture detailed design overview

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Babikyan, Carol A.; Butler, Bryan P.; Clasen, Robert J.; Harris, Chris H.; Lala, Jaynarayan H.; Masotto, Thomas K.; Nagle, Gail A.; Prizant, Mark J.; Treadwell, Steven

    1994-01-01

    The Army Avionics Research and Development Activity (AVRADA) is pursuing programs that would enable effective and efficient management of the large amounts of situational data that arise during tactical rotorcraft missions. The Computer Aided Low Altitude Night Helicopter Flight Program has identified automated Terrain Following/Terrain Avoidance, Nap of the Earth (TF/TA, NOE) operation as a key enabling technology for advanced tactical rotorcraft to enhance mission survivability and mission effectiveness. The processing of critical information at low altitudes with short reaction times is life-critical and mission-critical, necessitating an ultra-reliable, high-throughput computing platform for dependable service for flight control, fusion of sensor data, route planning, near-field/far-field navigation, and obstacle avoidance operations. To address these needs the Army Fault Tolerant Architecture (AFTA) is being designed and developed. This computer system is based upon the Fault Tolerant Parallel Processor (FTPP) developed by Charles Stark Draper Labs (CSDL). AFTA is a hard real-time, Byzantine fault-tolerant parallel processor programmed in the Ada language. This document describes the results of the Detailed Design (Phases 2 and 3 of a 3-year project) of the AFTA development. It contains detailed descriptions of the program objectives, the TF/TA NOE application requirements, architecture, hardware design, operating system design, system performance measurements, and analytical models.
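
    A useful anchor for the Byzantine fault-tolerance claim is the classical requirement that masking f arbitrary (Byzantine) faults takes at least 3f+1 fault containment regions. The sketch below is a loose Python illustration of channel-output voting in a quadruplex (f = 1) configuration, not the AFTA design; real Byzantine agreement also requires multiple rounds of message exchange that a voter alone does not provide.

```python
from collections import Counter

def vote(values):
    """Majority vote across redundant channel outputs; None if no majority."""
    winner, count = Counter(values).most_common(1)[0]
    return winner if count > len(values) // 2 else None

# A quadruplex satisfies N >= 3f + 1 for f = 1, so one arbitrarily wrong
# (Byzantine) channel is masked:
print(vote([42, 42, 42, 17]))  # -> 42
```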

  14. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take the problem of surface faulting in Italy into consideration, laying the basis for future regulations about the related hazard, as in other countries (e.g., the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e., faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g., fault plane exposure) or on indirect investigations (geophysical data) are insufficient or even unreliable for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. The central Apennines are a suitable test area in which active and capable faults can first be mapped by means of such a classical but still effective methodological approach. Reference Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

  15. The permeability of fault zones in the upper continental crust: statistical analysis from 460 datasets, updated depth-trends, and permeability contrasts between fault damage zones and protoliths.

    NASA Astrophysics Data System (ADS)

    Scibek, J.; Gleeson, T. P.; Ingebritsen, S.; McKenzie, J. M.

    2017-12-01

    Fault zones are an important part of the hydraulic structure of the Earth's crust, influence a wide range of Earth processes, and a large amount of test data on them has been collected over the years. We conducted a global meta-analysis of fault zone permeabilities in the upper brittle continental crust, using about 10,000 published research items from a variety of geoscience and engineering disciplines. Using 460 datasets at 340 localities, the in-situ bulk permeabilities (scales of tens of meters and above, including macro-fractures) and matrix permeabilities (drilled core samples or outcrop spot tests) are separated, analyzed, and compared. The values have log-normal distributions, so we analyze the log-permeability values. In the fault damage zones of plutonic and metamorphic rocks the mean bulk permeability was 1×10^-14 m^2, compared to a matrix mean of 1×10^-16 m^2. In sedimentary siliciclastic rocks the mean value was the same for bulk and matrix permeability (4×10^-14 m^2). More useful insights were obtained from the regression analysis of paired permeability data at all sites (fault damage zone vs. protolith). Much of the variation in fault permeability is explained by the permeability of the protolith: in relatively weak volcaniclastic and clay-rich rocks up to 70 to 88% of the variation is explained, but only 20-30% in plutonic and metamorphic rocks. We propose a revision at shallow depths of previously published upper-bound curves for the "fault-damaged crust" and the geothermal-metamorphic rock assemblage outside of major fault zones. Although the bounding curves describe the "fault-damaged crust" permeability parameter space adequately, the only statistically significant permeability-depth trend is for plutonic and metamorphic rocks (50% of variation explained). We find a depth-dependent systematic variation of the permeability ratio (fault damage zone / protolith) in the in-situ bulk permeability global data. A moving average of the log-permeability ratio value is 2 to 2
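
    A minimal Python sketch of the kind of log-space statistics described here (synthetic, hypothetical values; not the study's data): permeabilities are treated as log-normal, so means are geometric and the damage-zone/protolith contrast is a difference of log10 values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired samples (m^2): protolith vs. fault damage zone.
# Permeability is log-normal, so work with log10 k throughout.
log_k_protolith = rng.normal(-16.0, 1.0, 200)
log_k_damage = log_k_protolith + rng.normal(2.0, 0.8, 200)  # assumed offset

geo_mean = 10 ** log_k_damage.mean()            # geometric mean, m^2
log_ratio = log_k_damage - log_k_protolith      # log10(k_dz / k_protolith)
r2 = np.corrcoef(log_k_protolith, log_k_damage)[0, 1] ** 2

print(f"geometric mean k_dz        = {geo_mean:.2e} m^2")
print(f"mean log10 ratio (dz/prot) = {log_ratio.mean():.2f}")
print(f"R^2, protolith vs. dz      = {r2:.2f}")
```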

  16. Detection of postseismic fault-zone collapse following the Landers earthquake

    NASA Astrophysics Data System (ADS)

    Massonnet, Didier; Thatcher, Wayne; Vadon, Hélèna

    1996-08-01

    Stress changes caused by fault movement in an earthquake induce transient aseismic crustal movements in the earthquake source region that continue for months to decades following large events [1-4]. These motions reflect aseismic adjustments of the fault zone and/or bulk deformation of the surroundings in response to applied stresses [2,5-7], and supply information regarding the inelastic behaviour of the Earth's crust. These processes are imperfectly understood because it is difficult to infer what occurs at depth using only surface measurements [2], which are in general poorly sampled. Here we push satellite radar interferometry to near its typical artefact level, to obtain a map of the postseismic deformation field in the three years following the 28 June 1992 Landers, California earthquake. From the map, we deduce two distinct types of deformation: afterslip at depth on the fault that ruptured in the earthquake, and shortening normal to the fault zone. The latter movement may reflect the closure of dilatant cracks and fluid expulsion from a transiently over-pressured fault zone [6-8].

  17. Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Ren, C. X.; Bolton, D. C.; Marone, C.; Johnson, P. A.

    2017-12-01

    Fault friction controls nearly all aspects of fault rupture, yet it can only be measured in the laboratory. Here we describe laboratory experiments in which acoustic emissions are recorded from the fault. We find that by applying a machine learning approach known as "extreme gradient boosting trees" to the continuous acoustical signal, the fault friction can be directly inferred, showing that instantaneous characteristics of the acoustic signal are a fingerprint of the frictional state. This machine learning-based inference leads to a simple law that links the acoustic signal to the friction state and holds for every stress cycle the laboratory fault goes through. The approach uses no measured parameter other than instantaneous statistics of the acoustic signal. This finding may have importance for inferring frictional characteristics from seismic waves in the Earth, where fault friction cannot be measured.
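
    The pipeline described (instantaneous signal statistics in, friction out, trained on some stress cycles and tested on others) can be sketched in a few lines. The sketch below uses synthetic data and scikit-learn's GradientBoostingRegressor as a stand-in for the gradient-boosted-tree model named above; all signal and feature choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in for the experiment: windows of an "acoustic" signal
# whose local variance tracks a slowly varying "friction" state.
n_win, win = 2000, 100
friction = 0.5 + 0.1 * np.sin(np.linspace(0, 20, n_win))
windows = [rng.normal(0.0, f, win) for f in friction]

# Instantaneous statistics of each window are the only features used.
X = np.array([[w.var(), np.abs(w).mean(), w.max() - w.min()] for w in windows])
y = friction

# Train on the first half of the stress cycles, test on unseen windows.
split = n_win // 2
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print(f"R^2 on held-out windows: {model.score(X[split:], y[split:]):.2f}")
```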

  18. Geophysical Characterization of the Hilton Creek Fault System

    NASA Astrophysics Data System (ADS)

    Lacy, A. K.; Macy, K. P.; De Cristofaro, J. L.; Polet, J.

    2016-12-01

    The Long Valley Caldera straddles the eastern edge of the Sierra Nevada Batholith and the western edge of the Basin and Range Province, and represents one of the largest caldera complexes on Earth. The caldera is intersected by numerous fault systems, including the Hartley Springs Fault System, the Round Valley Fault System, the Long Valley Ring Fault System, and the Hilton Creek Fault System, which is our main region of interest. The Hilton Creek Fault System appears as a single NW-striking fault, dipping to the NE, from Davis Lake in the south to the southern rim of the Long Valley Caldera. Inside the caldera, it splays into numerous parallel faults that extend toward the resurgent dome. Seismicity in the area increased significantly in May 1980, following a series of large earthquakes in the vicinity of the caldera and a subsequent large earthquake swarm which has been suggested to be the result of magma migration. A large portion of the earthquake swarms in the Long Valley Caldera occurs on or around the Hilton Creek Fault splays. We are conducting an interdisciplinary geophysical study of the Hilton Creek Fault System from just south of the onset of splay faulting to its extension into the dome of the caldera. Our investigation includes ground-based magnetic field measurements, high-resolution total station elevation profiles, Structure-from-Motion derived topography, and an analysis of earthquake focal mechanisms and statistics. Preliminary analysis of topographic profiles approximately 1 km in length reveals the presence of at least three distinct fault splays within the caldera, with vertical offsets of 0.5 to 1.0 meters. More detailed topographic mapping is expected to highlight smaller structures. We are also generating maps of the variation in b-value along different portions of the Hilton Creek system to determine whether we can detect any transition to more swarm-like behavior toward the north. We will show maps of magnetic anomalies, topography
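
    The b-value mapping mentioned here usually rests on the Gutenberg-Richter relation log10 N = a - bM. A minimal Python sketch of the standard Aki (1965) maximum-likelihood estimator follows; the synthetic catalog, magnitude of completeness, and bin width are illustrative assumptions.

```python
import numpy as np

def b_value(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c; dm is the
    catalog bin width (use dm=0 for continuous magnitudes)."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalog: continuous magnitudes above Mc = 1.5
# with true b = 1.0 (exponential with scale log10(e)/b).
rng = np.random.default_rng(1)
mags = 1.5 + rng.exponential(scale=np.log10(np.e), size=5000)
print(f"estimated b = {b_value(mags, m_c=1.5, dm=0.0):.2f}")  # ~1.00
```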

  19. Management of space networks

    NASA Technical Reports Server (NTRS)

    Markley, R. W.; Williams, B. F.

    1993-01-01

    NASA has proposed missions to the Moon and Mars that reflect three areas of emphasis: human presence, exploration, and space resource development for the benefit of Earth. A major requirement for such missions is a robust and reliable communications architecture. Network management--the ability to maintain some degree of human and automatic control over the span of the network from the space elements to the end users on Earth--is required to realize such robust and reliable communications. This article addresses several of the architectural issues associated with space network management. Round-trip delays, such as the 5- to 40-min delays in the Mars case, introduce a host of problems that must be solved by delegating significant control authority to remote nodes; management hierarchy is therefore one of the important architectural issues. The article addresses these concerns and proposes a network management approach, based on emerging standards, that covers the needs for fault, configuration, and performance management; delegated control authority; and hierarchical reporting of events. A relatively simple approach based on standards was demonstrated in the DSN 2000 Information Systems Laboratory, and the results are described.

  20. On the management and processing of earth resources information

    NASA Technical Reports Server (NTRS)

    Skinner, C. W.; Gonzalez, R. C.

    1973-01-01

    The basic concepts of a recently completed large-scale earth resources information system plan are reported. Attention is focused throughout the paper on the information management and processing requirements. After the development of the principal system concepts, a model system for implementation at the state level is discussed.

  1. Earth Observations

    NASA Image and Video Library

    2011-05-28

    ISS028-E-006059 (28 May 2011) --- One of the Expedition 28 crew members, photographing Earth images onboard the International Space Station while docked with the space shuttle Endeavour and flying at an altitude of just under 220 miles, captured this frame of the Salton Sea. The body of water, easily identifiable from low orbit spacecraft, is a saline, endorheic rift lake located directly on the San Andreas Fault. The agricultural area is within the Coachella Valley.

  2. Fault tree analysis: NiH2 aerospace cells for LEO mission

    NASA Technical Reports Server (NTRS)

    Klein, Glenn C.; Rash, Donald E., Jr.

    1992-01-01

    Fault Tree Analysis (FTA) is one of several reliability analyses or assessments applied to battery cells to be utilized in typical Electric Power Subsystems for spacecraft in low Earth orbit missions. FTA is generally the process of reviewing and analytically examining a system or equipment in such a way as to emphasize the lower-level fault occurrences which directly or indirectly contribute to the major fault or top-level event. This qualitative FTA addresses the potential of occurrence for five specific top-level events: hydrogen leakage, through either discrete leakage paths or pressure vessel rupture; and four distinct modes of performance degradation - high charge voltage, suppressed discharge voltage, loss of capacity, and high pressure.
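
    Once a tree like this is quantified, the top-event probability follows mechanically from the gate logic. A minimal Python sketch, assuming independent basic events; the event names echo the abstract but the probabilities are made up.

```python
# Minimal fault-tree evaluation assuming independent basic events.

def AND(*p):
    """All inputs must fail for the gate to fire."""
    out = 1.0
    for x in p:
        out *= x
    return out

def OR(*p):
    """Any one input failing fires the gate."""
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

p_overpressure = 1e-3       # illustrative basic-event probabilities
p_weld_defect = 1e-3
p_seal_leak = 1e-4
p_rupture = AND(p_overpressure, p_weld_defect)   # vessel rupture
p_h2_leak = OR(p_seal_leak, p_rupture)           # top event: hydrogen leakage
print(f"P(hydrogen leakage) ~ {p_h2_leak:.2e}")  # ~1.01e-04
```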

  3. Development of the Global Earthquake Model’s neotectonic fault database

    USGS Publications Warehouse

    Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert

    2015-01-01

    The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets, and tools for worldwide seismic risk assessment through global collaboration, transparent communication, and adaptation of state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM’s global hazard module projects. This paper describes GFE’s development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing the observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other for calculating potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world’s approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose that the database structure be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.

  4. The Australian Computational Earth Systems Simulator

    NASA Astrophysics Data System (ADS)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology, but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government, together with a consortium of universities and research institutions, has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, or computational virtual earth, will provide the research infrastructure required by the Australian earth systems science community for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five-year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete element/lattice solid models, particle-in-cell large-deformation finite-element methods, stress reconstruction models, multi-scale continuum models, etc.) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic

  5. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high-order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first-order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th-order accurate in the interior and 3rd-order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates, we prove numerical stability. Time stepping is performed with a 4th-order accurate explicit low-storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
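
    For reference, the slip-weakening friction law mentioned here is commonly written in its linear form, in which shear strength decays from a static to a dynamic level over a critical slip distance (a standard formulation, not necessarily the exact variant used by these authors):

```latex
\tau(D) = \sigma_n \left[ \mu_d + (\mu_s - \mu_d)\,\max\!\left(0,\, 1 - \frac{D}{D_c}\right) \right]
```

    Here τ is fault shear strength, σ_n the effective normal stress, μ_s and μ_d the static and dynamic friction coefficients, D the accumulated slip, and D_c the critical slip-weakening distance.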

  6. Low-Power Fault Tolerance for Spacecraft FPGA-Based Numerical Computing

    DTIC Science & Technology

    2006-09-01

    ...undesirable, are not necessarily harmful. Our intent is to prevent errors by properly managing faults. This research focuses on developing fault-tolerant

  7. Integrating emerging earth science technologies into disaster risk management: an enterprise architecture approach

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S. R.

    2014-12-01

    Disaster risk management has grown to rely on earth observations, multi-source data analysis, numerical modeling, and interagency information sharing. The practice and outcomes of disaster risk management will likely undergo further change as several emerging earth science technologies come of age: mobile devices; location-based services; ubiquitous sensors; drones; small satellites; satellite direct readout; Big Data analytics; cloud computing; Web services for predictive modeling, semantic reconciliation, and collaboration; and many others. Integrating these new technologies well requires not only developing and adapting them to meet current needs, but also rethinking current practice to draw on new capabilities and reach additional objectives. This requires a holistic view of the disaster risk management enterprise and of the analytical or operational capabilities afforded by these technologies. One helpful tool for this assessment, the GEOSS Architecture for the Use of Remote Sensing Products in Disaster Management and Risk Assessment (Evans & Moe, 2013), considers all phases of the disaster risk management lifecycle for a comprehensive set of natural hazard types, and outlines common clusters of activities and their use of information and computation resources. We are using these architectural views, together with insights from current practice, to highlight effective, interrelated roles for emerging earth science technologies in disaster risk management. These roles may be helpful in creating roadmaps for research and development investment at national and international levels.

  8. Anatomy of landslides along the Dead Sea Transform Fault System in NW Jordan

    NASA Astrophysics Data System (ADS)

    Dill, H. G.; Hahne, K.; Shaqour, F.

    2012-03-01

    In the mountainous region north of Amman, Jordan, Cenomanian calcareous rocks are monitored constantly for mass wasting processes, which occasionally cause severe damage to the Amman-Irbid Highway. Satellite remote sensing data (Landsat TM, ASTER, and SRTM) and ground measurements are applied to investigate the anatomy of landslides along the Dead Sea Transform Fault System (DSTFS), a prominent strike-slip fault. The joints and faults pertinent to the DSTFS match the architectural elements identified in landslides of different sizes. This similarity attests to a close genetic relation between the tectonic setting of one of the most prominent fault zones on Earth and modern geomorphologic processes. Six indicators stand out in particular: 1) The fractures developing in N-S and splay faults represent the N-S lateral movement of the DSTFS. They governed the position of the landslides. 2) Cracks and faults aligned NE-SW to NNW-SSW were caused by compressional stress. They were subsequently reactivated during extensional processes and used in some cases as slip planes during mass wasting. 3) Minor landslides with NE-SW straight scarps were derived from compressional features which were turned into slip planes during the incipient stages of mass wasting. They occur mainly along the slopes of small wadis or where a wide wadi narrows upstream. 4) Major landslides with curved instead of straight scarps and rotational slides are representative of a more advanced level of mass wasting. These areas have to be marked as high-risk areas in maps and during land management projects, and are encountered mainly in large wadis with steep slopes or longitudinal slopes undercut by road construction works. 5) The spatial relation between minor faults and slope angle is crucial to the vulnerability of these areas to mass wasting. 6) Springs lined up along faults cause serious problems for engineering geology in that they step up the behavior of marly

  9. Interacting faults

    NASA Astrophysics Data System (ADS)

    Peacock, D. C. P.; Nixon, C. W.; Rotevatn, A.; Sanderson, D. J.; Zuluaga, L. F.

    2017-04-01

    The way that faults interact with each other controls fault geometries, displacements and strains. Faults rarely occur individually but as sets or networks, with the arrangement of these faults producing a variety of different fault interactions. Fault interactions are characterised in terms of the following: 1) Geometry - the spatial arrangement of the faults. Interacting faults may or may not be geometrically linked (i.e. physically connected), when fault planes share an intersection line. 2) Kinematics - the displacement distributions of the interacting faults and whether the displacement directions are parallel, perpendicular or oblique to the intersection line. Interacting faults may or may not be kinematically linked, where the displacements, stresses and strains of one fault influences those of the other. 3) Displacement and strain in the interaction zone - whether the faults have the same or opposite displacement directions, and if extension or contraction dominates in the acute bisector between the faults. 4) Chronology - the relative ages of the faults. This characterisation scheme is used to suggest a classification for interacting faults. Different types of interaction are illustrated using metre-scale faults from the Mesozoic rocks of Somerset and examples from the literature.

  10. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with decisions in redundancy management due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault-tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of superior robustness, with an estimator of higher resolution, and with a control performance requirement of lesser stringency.
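
    The leverage of coverage is easy to see in the textbook duplex case (our illustrative example, not the paper's model): for two units with constant failure rate λ and probability c of correctly detecting and isolating the first failure, a standard Markov analysis gives

```latex
R(t) = 2c\,e^{-\lambda t} + (1 - 2c)\,e^{-2\lambda t}
```

    At c = 1 this is the full parallel-system reliability, while at c = 1/2 it degenerates to that of a single unit: imperfect coverage, not component quality, caps the benefit of the redundancy.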

  11. Application of Fault Management Theory to the Quantitative Selection of a Launch Vehicle Abort Trigger Suite

    NASA Technical Reports Server (NTRS)

    Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.

    2014-01-01

    This paper describes the quantitative application of the theory of System Health Management and its operational subset, Fault Management, to the selection of abort triggers for a human-rated launch vehicle, the United States' National Aeronautics and Space Administration's (NASA) Space Launch System (SLS). The results demonstrate the efficacy of the theory to assess the effectiveness of candidate failure detection and response mechanisms to protect humans from time-critical and severe hazards. The quantitative method was successfully used on the SLS to aid selection of its suite of abort triggers.
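
    One way to picture the quantitative selection problem (our illustrative inequality, not the paper's actual model): a candidate abort trigger earns its place when the crew risk it averts outweighs the risk its false alarms add,

```latex
P_{\mathrm{fail}} \cdot P_{\mathrm{detect} \mid \mathrm{fail}} \cdot P_{\mathrm{save} \mid \mathrm{abort}} \;>\; P_{\mathrm{false\,alarm}} \cdot P_{\mathrm{loss} \mid \mathrm{abort}}
```

    with each probability estimated per failure mode and summed across the suite, in the spirit of the probabilistic summation the theory prescribes.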

  12. Porosity variations in and around normal fault zones: implications for fault seal and geomechanics

    NASA Astrophysics Data System (ADS)

    Healy, David; Neilson, Joyce; Farrell, Natalie; Timms, Nick; Wilson, Moyra

    2015-04-01

    clear lithofacies control on the Vp-porosity and the Vs-Vp relationships for faulted limestones. Using porosity patterns quantified in naturally deformed rocks we have modelled their effect on the mechanical stability of fluid-saturated fault zones in the subsurface. Poroelasticity theory predicts that variations in fluid pressure could influence fault stability. Anisotropic patterns of porosity in and around fault zones can - depending on their orientation and intensity - lead to an increase in fault stability in response to a rise in fluid pressure, and a decrease in fault stability for a drop in fluid pressure. These predictions are the exact opposite of the accepted role of effective stress in fault stability. Our work has provided new data on the spatial and statistical variation of porosity in fault zones. Traditionally considered as an isotropic and scalar value, porosity and pore networks are better considered as anisotropic and as scale-dependent statistical distributions. The geological processes controlling the evolution of porosity are complex. Quantifying patterns of porosity variation is an essential first step in a wider quest to better understand deformation processes in and around normal fault zones. Understanding porosity patterns will help us to make more useful predictive tools for all agencies involved in the study and management of fluids in the subsurface.

  13. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
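
    As a flavor of what "invocation statements and effect statements with time delays" can look like operationally, here is a minimal discrete-event loop in Python (our own toy, far simpler than CONFIG's modeling language):

```python
import heapq

# Toy mode transitions driven by delayed effects; all names are ours.
events = []  # priority queue of (time, component, new_mode)

def invoke(t_now, component, new_mode, delay):
    """Queue an effect statement: the mode change lands after a time delay."""
    heapq.heappush(events, (t_now + delay, component, new_mode))

modes = {"pump": "off", "line": "empty"}
invoke(0.0, "pump", "on", delay=1.0)           # command effect at t = 1.0
invoke(1.0, "line", "pressurized", delay=2.0)  # downstream effect at t = 3.0

while events:
    t, component, new_mode = heapq.heappop(events)
    modes[component] = new_mode
    print(f"t={t:.1f}: {component} -> {new_mode}; state = {modes}")
```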

  14. Machine Learning of Fault Friction

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus composed of fault blocks surrounding fault gouge made of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025

  15. Mechanisms, Monitoring and Modeling Earth Fissure generation and Fault activation due to subsurface Fluid exploitation (M3EF3): A UNESCO-IGCP project in partnership with the UNESCO-IHP Working Group on Land Subsidence

    NASA Astrophysics Data System (ADS)

    Teatini, P.; Carreon-Freyre, D.; Galloway, D. L.; Ye, S.

    2015-12-01

    Land subsidence due to groundwater extraction was recently mentioned as one of the most urgent threats to sustainable development in the latest UNESCO IHP-VIII (2014-2020) strategic plan. Although advances have been made in understanding, monitoring, and predicting subsidence, the influence of differential vertical compaction, horizontal displacements, and hydrostratigraphic and structural features in groundwater systems on localized near-surface ground ruptures is still poorly understood. The nature of ground failure may range from fissuring, i.e., formation of an open crack, to faulting, i.e., differential offset of the opposite sides of the failure plane. Ground ruptures associated with differential subsidence have been reported from many alluvial basins in semiarid and arid regions, e.g. China, India, Iran, Mexico, Saudi Arabia, Spain, and the United States. These ground ruptures strongly impact urban, industrial, and agricultural infrastructures, and affect socio-economic and cultural development. Leveraging previous collaborations, this year the UNESCO Working Group on Land Subsidence began the scientific cooperative project M3EF3 in collaboration with the UNESCO International Geosciences Programme (IGCP n.641; www.igcp641.org) to improve understanding of the processes involved in ground rupturing associated with the exploitation of subsurface fluids, and to facilitate the transfer of knowledge regarding sustainable groundwater management practices in vulnerable aquifer systems. The project is developing effective tools to help manage geologic risks associated with these types of hazards, and formulating recommendations pertaining to the sustainable use of subsurface fluid resources for urban and agricultural development in susceptible areas. The partnership between the UNESCO IHP and IGCP is ensuring that multiple scientific competencies required to optimally investigate earth fissuring and faulting caused by groundwater withdrawals are being employed.

  16. Fault-tolerant onboard digital information switching and routing for communications satellites

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Quintana, Jorge A.; Soni, Nitin J.; Kim, Heechul

    1993-01-01

    The NASA Lewis Research Center is developing an information-switching processor for future meshed very-small-aperture terminal (VSAT) communications satellites. The information-switching processor will switch and route baseband user data onboard the VSAT satellite to connect thousands of Earth terminals. Fault tolerance is a critical issue in developing information-switching processor circuitry that will provide and maintain reliable communications services. In parallel with the conceptual development of the meshed VSAT satellite network architecture, NASA designed and built a simple test bed for developing and demonstrating baseband switch architectures and fault-tolerance techniques. The meshed VSAT architecture and the switching demonstration test bed are described, and the initial switching architecture and the fault-tolerance techniques that were developed and tested are discussed.

  17. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions within a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and to stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume, while the corresponding strike and dip angles that yield the maximum filtering responses are recorded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, and a 2D smoothing filter will yield a maximum response if the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, with each sample oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
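
    A 2D toy version of the scan-over-orientations idea conveys the mechanics (our simplification: one angle instead of a strike/dip pair, a boxcar instead of an exponential filter):

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter1d

def enhance(attr, angles, length=9):
    """Smooth a fault-attribute image along each candidate orientation and
    keep, per pixel, the maximum response and the angle producing it."""
    best = np.full(attr.shape, -np.inf)
    best_angle = np.zeros(attr.shape)
    for a in angles:
        r = rotate(attr, a, reshape=False, order=1)   # align angle a with rows
        r = uniform_filter1d(r, size=length, axis=1)  # smooth along the rows
        r = rotate(r, -a, reshape=False, order=1)     # rotate back
        better = r > best
        best[better] = r[better]
        best_angle[better] = a
    return best, best_angle

# Noisy image with one dipping linear "fault" of high attribute values.
rng = np.random.default_rng(3)
img = 0.3 * rng.random((64, 64))
rows = np.arange(64)
img[rows, np.clip(rows // 2 + 10, 0, 63)] = 1.0

enhanced, angle = enhance(img, angles=range(0, 180, 10))
i, j = np.unravel_index(np.argmax(enhanced), enhanced.shape)
print(f"peak response {enhanced[i, j]:.2f} at orientation {angle[i, j]:.0f} deg")
```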

  18. Lacustrine Paleoseismology Reveals Earthquake Segmentation of the Alpine Fault, New Zealand

    NASA Astrophysics Data System (ADS)

    Howarth, J. D.; Fitzsimons, S.; Norris, R.; Langridge, R. M.

    2013-12-01

    Transform plate boundary faults accommodate high rates of strain and are capable of producing large (Mw>7.0) to great (Mw>8.0) earthquakes that pose significant seismic hazard. The Alpine Fault in New Zealand is one of the longest, straightest and fastest slipping plate boundary transform faults on Earth and produces earthquakes at quasi-periodic intervals. Theoretically, the fault's linearity, isolation from other faults and quasi-periodicity should promote the generation of earthquakes that have similar magnitudes over multiple seismic cycles. We test the hypothesis that the Alpine Fault produces quasi-regular earthquakes that contiguously rupture the southern and central fault segments, using a novel lacustrine paleoseismic proxy to reconstruct spatial and temporal patterns of fault rupture over the last 2000 years. In three lakes located close to the Alpine Fault the last nine earthquakes are recorded as megaturbidites formed by co-seismic subaqueous slope failures, which occur when shaking exceeds Modified Mercalli (MM) VII. When the fault ruptures adjacent to a lake the co-seismic megaturbidites are overlain by stacks of turbidites produced by enhanced fluvial sediment fluxes from earthquake-induced landslides. The turbidite stacks record shaking intensities of MM>IX in the lake catchments and can be used to map the spatial location of fault rupture. The lake records can be dated precisely, facilitating meaningful along strike correlations, and the continuous records allow earthquakes closely spaced in time on adjacent fault segments to be distinguished. The results show that while multi-segment ruptures of the Alpine Fault occurred during most seismic cycles, sequential earthquakes on adjacent segments and single segment ruptures have also occurred. The complexity of the fault rupture pattern suggests that the subtle variations in fault geometry, sense of motion and slip rate that have been used to distinguish the central and southern segments of the Alpine

  19. Satellite and earth science data management activities at the U.S. geological survey's EROS data center

    USGS Publications Warehouse

    Carneggie, David M.; Metz, Gary G.; Draeger, William C.; Thompson, Ralph J.

    1991-01-01

    The U.S. Geological Survey's Earth Resources Observation Systems (EROS) Data Center, the national archive for Landsat data, has 20 years of experience in acquiring, archiving, processing, and distributing Landsat and earth science data. The Center is expanding its satellite and earth science data management activities to support the U.S. Global Change Research Program and the National Aeronautics and Space Administration (NASA) Earth Observing System Program. The Center's current and future data management activities focus on land data and include: satellite and earth science data set acquisition, development and archiving; data set preservation, maintenance and conversion to more durable and accessible archive medium; development of an advanced Land Data Information System; development of enhanced data packaging and distribution mechanisms; and data processing, reprocessing, and product generation systems.

  1. San Andreas fault geometry in the Parkfield, California, region

    USGS Publications Warehouse

    Simpson, R.W.; Barall, M.; Langbein, J.; Murray, J.R.; Rymer, M.J.

    2006-01-01

    In map view, aftershocks of the 2004 Parkfield earthquake lie along a line that forms a straighter connection between San Andreas fault segments north and south of the Parkfield reach than does the mapped trace of the fault itself. A straightedge laid on a geologic map of Central California reveals a ~50-km-long asymmetric northeastward warp in the Parkfield reach of the fault. The warp tapers gradually as it joins the straight, creeping segment of the San Andreas to the northwest, but bends abruptly across Cholame Valley at its southeast end to join the straight, locked segment that last ruptured in 1857. We speculate that the San Andreas fault surface near Parkfield has been deflected in its upper ~6 km by nonelastic behavior of upper crustal rock units. These units and the fault surface itself are warped during periods between large 1857-type earthquakes by the presence of the 1857-locked segment to the south, which buttresses intermittent coseismic and continuous aseismic slip on the Parkfield reach. Because of nonelastic behavior, the warping is not completely undone when an 1857-type event occurs, and the upper portion of the three-dimensional fault surface is slowly ratcheted into an increasingly prominent bulge. Ultimately, the fault surface probably becomes too deformed for strike-slip motion, and a new, more vertical connection to the Earth's surface takes over, perhaps along the Southwest Fracture Zone. When this happens a wedge of material currently west of the main trace will be stranded on the east side of the new main trace.

  2. Energy budget and propagation of faults via shearing and opening using work optimization

    NASA Astrophysics Data System (ADS)

    Madden, Elizabeth H.; Cooke, Michele L.; McBeck, Jessica

    2017-08-01

    We present numerical models of faults propagating by work optimization in a homogeneous medium. These simulations allow quantification and comparison of the energy budgets of fault growth by shear versus tensile failure. The energy consumed by growth of a fault, Wgrow, propagating by in-line shearing is 76% of the total energy associated with that growth, while 24% is spent on frictional work during propagation. Wgrow for a fault propagating into intact rock by tensile failure, at an angle to the parent fault, consumes 60% of the work budget, while only 6% is consumed by frictional work associated with propagation. Following the conservation of energy, this leaves 34% of the energy budget available for other activities and suggests that out-of-plane propagation of faults in Earth's crust may release energy for other processes, such as permanent damage zone formation or rupture acceleration. Comparison of these estimates of Wgrow with estimates of the critical energy release rate and earthquake fracture energy at several scales underscores their theoretical similarities and their dependence on stress drop.
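
    Writing the budget partition explicitly (our notation for the quantities named above):

```latex
W_{\mathrm{total}} = W_{\mathrm{grow}} + W_{\mathrm{fric}} + W_{\mathrm{other}}, \qquad \text{shear: } 0.76 + 0.24 + 0.00 = 1, \qquad \text{tensile: } 0.60 + 0.06 + 0.34 = 1
```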

  3. Modeling, Detection, and Disambiguation of Sensor Faults for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai F.; Curran, Simon

    2009-01-01

    Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults exist. It is equally difficult to seed particular sensor faults. Therefore, research is underway to better understand the different fault modes seen in sensors and to model the faults. The fault models can then be used in simulated sensor fault scenarios to ensure that algorithms can distinguish between sensor faults and system faults. The paper illustrates the work with data collected from an electro-mechanical actuator in an aerospace setting, equipped with temperature, vibration, current, and position sensors. The most common sensor faults, such as bias, drift, scaling, and dropout were simulated and injected into the experimental data, with the goal of making these simulations as realistic as feasible. A neural network based classifier was then created and tested on both experimental data and the more challenging randomized data sequences. Additional studies were also conducted to determine sensitivity of detection and disambiguation efficacy to severity of fault conditions.
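
    The four fault modes named here map naturally onto simple signal transforms. A minimal Python sketch of this kind of fault injection (parameter choices and the stuck-at-zero reading of dropout are our assumptions):

```python
import numpy as np

def inject_fault(x, mode, t0, magnitude=0.5):
    """Inject one of four common sensor-fault modes into signal x at index t0."""
    y = x.copy()
    n = len(x) - t0
    if mode == "bias":
        y[t0:] += magnitude                      # constant offset
    elif mode == "drift":
        y[t0:] += magnitude * np.arange(n) / n   # slowly growing offset
    elif mode == "scaling":
        y[t0:] *= magnitude                      # gain error
    elif mode == "dropout":
        y[t0:] = 0.0                             # stuck-at-zero dropout
    return y

rng = np.random.default_rng(4)
clean = np.sin(np.linspace(0, 10, 500)) + rng.normal(0, 0.05, 500)
faulty = {m: inject_fault(clean, m, t0=250)
          for m in ("bias", "drift", "scaling", "dropout")}
```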

  4. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.; Johnson, Stephen B.

    2013-01-01

    The draft NASA Fault Management (FM) Handbook (2012) states that Fault Management (FM) is a "part of systems engineering", and that it "demands a system-level perspective" (NASA-HDBK-1002, 7). What, exactly, is the relationship between systems engineering and FM? To NASA, systems engineering (SE) is "the art and science of developing an operable system capable of meeting requirements within often opposed constraints" (NASA/SP-2007-6105, 3). Systems engineering starts with the elucidation and development of requirements, which set the goals that the system is to achieve. To achieve these goals, the systems engineer typically defines functions, and the functions in turn are the basis for design trades to determine the best means to perform the functions. System Health Management (SHM), by contrast, defines "the capabilities of a system that preserve the system's ability to function as intended" (Johnson et al., 2011, 3). Fault Management, in turn, is the operational subset of SHM, which detects current or future failures, and takes operational measures to prevent or respond to these failures. Failure, in turn, is the "unacceptable performance of intended function." (Johnson 2011, 605) Thus the relationship of SE to FM is that SE defines the functions and the design to perform those functions to meet system goals and requirements, while FM detects the inability to perform those functions and takes action. SHM and FM are in essence "the dark side" of SE. For every function to be performed (SE), there is the possibility that it is not successfully performed (SHM); FM defines the means to operationally detect and respond to this lack of success. We can also describe this in terms of goals: for every goal to be achieved, there is the possibility that it is not achieved; FM defines the means to operationally detect and respond to this inability to achieve the goal. This brief description of relationships between SE, SHM, and FM provides hints to a modeling approach to

  5. Influence of fault trend, fault bends, and fault convergence on shallow structure, geomorphology, and hazards, Hosgri strike-slip fault, offshore central California

    NASA Astrophysics Data System (ADS)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.

    2012-12-01

    We mapped a ~94-km-long portion of the right-lateral Hosgri Fault Zone from Point Sal to Piedras Blancas in offshore central California using high-resolution seismic reflection profiles, marine magnetic data, and multibeam bathymetry. The database includes 121 seismic profiles across the fault zone and is perhaps the most comprehensive reported survey of the shallow structure of an active strike-slip fault. These data document the location, length, and near-surface continuity of multiple fault strands, highlight fault-zone heterogeneity, and demonstrate the importance of fault trend, fault bends, and fault convergences in the development of shallow structure and tectonic geomorphology. The Hosgri Fault Zone is continuous through the study area passing through a broad arc in which fault trend changes from about 338° to 328° from south to north. The southern ~40 km of the fault zone in this area is more extensional, resulting in accommodation space that is filled by deltaic sediments of the Santa Maria River. The central ~24 km of the fault zone is characterized by oblique convergence of the Hosgri Fault Zone with the more northwest-trending Los Osos and Shoreline Faults. Convergence between these faults has resulted in the formation of local restraining and releasing fault bends, transpressive uplifts, and transtensional basins of varying size and morphology. We present a hypothesis that links development of a paired fault bend to indenting and bulging of the Hosgri Fault by a strong crustal block translated to the northwest along the Shoreline Fault. Two diverging Hosgri Fault strands bounding a central uplifted block characterize the northern ~30 km of the Hosgri Fault in this area. The eastern Hosgri strand passes through releasing and restraining bends; the releasing bend is the primary control on development of an elongate, asymmetric, "Lazy Z" sedimentary basin. The western strand of the Hosgri Fault Zone passes through a significant restraining bend and

  6. FTAPE: A fault injection tool to measure fault tolerance

    NASA Technical Reports Server (NTRS)

    Tsai, Timothy K.; Iyer, Ravishankar K.

    1995-01-01

    The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.

  7. Fault compaction and overpressured faults: results from a 3-D model of a ductile fault zone

    NASA Astrophysics Data System (ADS)

    Fitzenz, D. D.; Miller, S. A.

    2003-10-01

    A model of a ductile fault zone is incorporated into a forward 3-D earthquake model to better constrain fault-zone hydraulics. The conceptual framework of the model fault zone was chosen such that two distinct parts are recognized. The fault core, characterized by a relatively low permeability, is composed of a coseismic fault surface embedded in a visco-elastic volume that can creep and compact. The fault core is surrounded by, and mostly sealed from, a high-permeability damaged zone. The model fault properties correspond explicitly to those of the coseismic fault core. Porosity and pore pressure evolve to account for the viscous compaction of the fault core, while stresses evolve in response to the applied tectonic loading and to shear creep of the fault itself. A small diffusive leakage is allowed in and out of the fault zone. Coseismically, porosity is created to account for frictional dilatancy. We show that, in the case of a 3-D fault model with no in-plane flow and constant fluid compressibility, pore pressures do not drop to hydrostatic levels after a seismic rupture, leading to an overpressured weak fault. Since pore pressure plays a key role in the fault behaviour, we investigate coseismic hydraulic property changes. In the full 3-D model, pore pressures vary instantaneously by the poroelastic effect during the propagation of the rupture. Once the stress state stabilizes, pore pressures are incrementally redistributed in the failed patch. We show that the significant effect of pressure-dependent fluid compressibility in the no-in-plane-flow case becomes a secondary effect when the other spatial dimensions are considered, because in-plane flow with a near-lithostatically pressured neighbourhood equilibrates at a pressure much higher than hydrostatic levels, forming persistent high-pressure fluid compartments. If the observed faults are not all overpressured and weak, other mechanisms, not included in this model, must be at work in nature, which need to be

  8. Fault tolerant operation of switched reluctance machine

    NASA Astrophysics Data System (ADS)

    Wang, Wei

    The energy crisis and environmental challenges have driven industry towards more energy efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. Adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace, automotive, and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, SRM is not free of faults. Certain faults, such as converter faults, winding shorts, eccentricity, and sensor faults (including position sensor faults), are common to all ASDS. In this dissertation, a thorough understanding of various faults and their influence on transient and steady state performance of SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

  9. Constructing constitutive relationships for seismic and aseismic fault slip

    USGS Publications Warehouse

    Beeler, N.M.

    2009-01-01

    For the purpose of modeling natural fault slip, a useful result from an experimental fault mechanics study would be a physically-based constitutive relation that well characterizes all the relevant observations. This report describes an approach for constructing such equations. Where possible the construction intends to identify or, at least, attribute physical processes and contact scale physics to the observations such that the resulting relations can be extrapolated in conditions and scale between the laboratory and the Earth. The approach is developed as an alternative but is based on Ruina (1983) and is illustrated initially by constructing a couple of relations from that study. In addition, two example constitutive relationships are constructed; these describe laboratory observations not well-modeled by Ruina's equations: the unexpected shear-induced weakening of silica-rich rocks at high slip speed (Goldsby and Tullis, 2002) and fault strength in the brittle ductile transition zone (Shimamoto, 1986). The examples, provided as illustration, may also be useful for quantitative modeling.
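
    For orientation, the Ruina (1983)-type rate- and state-dependent friction law that the report takes as its starting point is commonly written as follows (standard form from the rate-and-state literature, not quoted from the report):

        \mu = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c},
        \qquad
        \frac{d\theta}{dt} = -\frac{V\theta}{D_c}\,\ln\frac{V\theta}{D_c},

    where V is slip speed, theta is a state variable, D_c is a characteristic slip distance, and a and b are laboratory-derived constants.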

  10. Orogen-scale uplift in the central Italian Apennines drives episodic behaviour of earthquake faults.

    PubMed

    Cowie, P A; Phillips, R J; Roberts, G P; McCaffrey, K; Zijerveld, L J J; Gregory, L C; Faure Walker, J; Wedmore, L N J; Dunai, T J; Binnie, S A; Freeman, S P H T; Wilcken, K; Shanks, R P; Huismans, R S; Papanikolaou, I; Michetti, A M; Wilkinson, M

    2017-03-21

    Many areas of the Earth's crust deform by distributed extensional faulting and complex fault interactions are often observed. Geodetic data generally indicate a simpler picture of continuum deformation over decades but relating this behaviour to earthquake occurrence over centuries, given numerous potentially active faults, remains a global problem in hazard assessment. We address this challenge for an array of seismogenic faults in the central Italian Apennines, where crustal extension and devastating earthquakes occur in response to regional surface uplift. We constrain fault slip-rates since ~18 ka using variations in cosmogenic ³⁶Cl measured on bedrock scarps, mapped using LiDAR and ground penetrating radar, and compare these rates to those inferred from geodesy. The ³⁶Cl data reveal that individual faults typically accumulate meters of displacement relatively rapidly over several thousand years, separated by similar length time intervals when slip-rates are much lower, and activity shifts between faults across strike. Our rates agree with continuum deformation rates when averaged over long spatial or temporal scales (10⁴ yr; 10² km) but over shorter timescales most of the deformation may be accommodated by <30% of the across-strike fault array. We attribute the shifts in activity to temporal variations in the mechanical work of faulting.

  11. The Ural-Herirud transcontinental postcollisional strike-slip fault and its role in the formation of the Earth's crust

    NASA Astrophysics Data System (ADS)

    Leonov, Yu. G.; Volozh, Yu. A.; Antipov, M. P.; Kheraskova, T. N.

    2015-11-01

    The paper considers the morphology, deep structure, and geodynamic features of the Ural-Herirud postorogenic strike-slip fault (UH fault), along which the Moho (the "M") shifts along the entire axial zone of the Ural Orogen, then further to the south across the Scythian-Turan Plate to the Herirud sublatitudinal fault in Afghanistan. The postcollisional character of dextral displacements along the Ural-Herirud fault and its Triassic-Jurassic age are proven. We have estimated the scale of displacements and attempted a paleoreconstruction illustrating the relationship between the Variscides of the Urals and the Tien Shan before tectonic displacements. The analysis of new data includes the latest generation of 1:200,000 geological maps and the regional seismic profiling data obtained in the most elevated part of the Urals (from the seismic profile of the Middle Urals in the north to the Uralseis seismic profile in the south), as well as within the sedimentary cover of the Turan Plate, from Mugodzhary to the southern boundaries of the former water area of the Aral Sea. General typomorphic signs of transcontinental strike-slip fault systems are considered and the structural model of the Ural-Herirud postcollisional strike-slip fault is presented.

  12. An expert systems approach to automated fault management in a regenerative life support subsystem

    NASA Technical Reports Server (NTRS)

    Malin, J. T.; Lance, N., Jr.

    1986-01-01

    This paper describes FIXER, a prototype expert system for automated fault management in a regenerative life support subsystem typical of Space Station applications. The development project provided an evaluation of the use of expert systems technology to enhance controller functions in space subsystems. The software development approach permitted evaluation of the effectiveness of direct involvement of the expert in design and development. The approach also permitted intensive observation of the knowledge and methods of the expert. This paper describes the development of the prototype expert system and presents results of the evaluation.

  13. Goal-Function Tree Modeling for Systems Engineering and Fault Management

    NASA Technical Reports Server (NTRS)

    Johnson, Stephen B.; Breckenridge, Jonathan T.

    2013-01-01

    This paper describes a new representation that enables rigorous definition and decomposition of both nominal and off-nominal system goals and functions: the Goal-Function Tree (GFT). GFTs extend the concept and process of functional decomposition, utilizing state variables as a key mechanism to ensure physical and logical consistency and completeness of the decomposition of goals (requirements) and functions, and enabling full and complete traceability to the design. The GFT also provides a means to define and represent off-nominal goals and functions that are activated when the system's nominal goals are not met. The physical accuracy of the GFT, and its ability to represent both nominal and off-nominal goals, enable the GFT to be used for various analyses of the system, including assessments of the completeness and traceability of system goals and functions, the coverage of fault management failure detections, and definition of system failure scenarios.
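
    A schematic sketch in Python of how a GFT node might be represented, with a state variable linking a goal to its function and an off-nominal goal activated on failure; the class and field names are illustrative assumptions, not the paper's:

        from dataclasses import dataclass, field
        from typing import Callable, List, Optional

        @dataclass
        class GFTNode:
            """Illustrative Goal-Function Tree node (names are assumptions)."""
            goal: str                         # constraint on a state variable
            state_variable: str               # the state variable constrained
            is_met: Callable[[float], bool]   # predicate over that variable
            children: List["GFTNode"] = field(default_factory=list)
            off_nominal: Optional["GFTNode"] = None  # activated if goal fails

            def evaluate(self, value: float) -> Optional["GFTNode"]:
                """Return the off-nominal goal to activate, or None if met."""
                return None if self.is_met(value) else self.off_nominal

        vent = GFTNode("vent tank to safe pressure", "tank_pressure",
                       lambda p: p < 250.0)
        hold = GFTNode("hold tank pressure at 180-220 kPa", "tank_pressure",
                       lambda p: 180.0 <= p <= 220.0, off_nominal=vent)
        print(hold.evaluate(235.0).goal)      # -> "vent tank to safe pressure"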

  14. Quantifying Coseismic Normal Fault Rupture at the Seafloor: The 2004 Les Saintes Earthquake Along the Roseau Fault (French Antilles)

    NASA Astrophysics Data System (ADS)

    Olive, J. A. L.; Escartin, J.; Leclerc, F.; Garcia, R.; Gracias, N.; Odemar Science Party, T.

    2016-12-01

    While >70% of Earth's seismicity is submarine, almost all observations of earthquake-related ruptures and surface deformation are restricted to subaerial environments. Such observations are critical for understanding fault behavior and associated hazards (including tsunamis), but are not routinely conducted at the seafloor due to obvious constraints. During the 2013 ODEMAR cruise we used autonomous and remotely operated vehicles to map the Roseau normal Fault (Lesser Antilles), source of the 2004 Mw6.3 earthquake and associated tsunami (<3.5m run-up). These vehicles acquired acoustic (multibeam bathymetry) and optical data (video and electronic images) spanning from regional (>1 km) to outcrop (<1 m) scales. These high-resolution submarine observations, analogous to those routinely conducted subaerially, rely on advanced image and video processing techniques, such as mosaicking and structure-from-motion (SFM). We identify sub-vertical fault slip planes along the Roseau scarp, displaying coseismic deformation structures undoubtedly due to the 2004 event. First, video mosaicking allows us to identify the freshly exposed fault plane at the base of one of these scarps. A maximum vertical coseismic displacement of 0.9 m can be measured from the video-derived terrain models and the texture-mapped imagery, which have better resolution than any available acoustic systems (<10 cm). Second, seafloor photomosaics allow us to identify and map both additional sub-vertical fault scarps, and cracks and fissures at their base, recording hangingwall damage from the same event. These observations provide critical parameters to understand the seismic cycle and long-term seismic behavior of this submarine fault. Our work demonstrates the feasibility of extensive, high-resolution underwater surveys using underwater vehicles and novel imaging techniques, thereby opening new possibilities to study recent seafloor changes associated with tectonic, volcanic, or hydrothermal activity.

  15. On the design of fault-tolerant robotic manipulator systems

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert

    1993-01-01

    Robotic systems are finding increasing use in space applications. Many of these devices are going to be operational on board the Space Station Freedom. Fault tolerance has been deemed necessary because of the criticality of the tasks and the inaccessibility of the systems to maintenance and repair. Design for fault tolerance in manipulator systems is an area within robotics that is without precedent in the literature. In this paper, we will attempt to lay down the foundations for such a technology. Design for fault tolerance demands new and special approaches to design, often at considerable variance from established design practices. These design aspects, together with reliability evaluation and modeling tools, are presented. Mechanical architectures that employ protective redundancies at many levels and have a modular architecture are then studied in detail. Once a mechanical architecture for fault tolerance has been derived, the chronological stages of operational fault tolerance are investigated. Failure detection, isolation, and estimation methods are surveyed, and such methods for robot sensors and actuators are derived. Failure recovery methods are also presented for each of the protective layers of redundancy. Failure recovery tactics often span all of the layers of a control hierarchy. Thus, a unified framework for decision-making and control, which orchestrates both the nominal redundancy management tasks and the failure management tasks, has been derived. The well-developed field of fault-tolerant computers is studied next, and some design principles relevant to the design of fault-tolerant robot controllers are abstracted. Conclusions are drawn, and a road map for the design of fault-tolerant manipulator systems is laid out with recommendations for a 10 DOF arm with dual actuators at each joint.

  16. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, Lian; Li, Haibing; Brodsky, Emily

    2013-04-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake, the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide M2 in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10⁻⁷ strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the average storage coefficient S is 2×10⁻⁴ and the average transmissivity T is 6×10⁻⁷ m²/s from the measured phase and amplitude response. Calculating the hydraulic diffusivity D as D = T/S yields D = 3×10⁻³ m²/s, which is two orders of magnitude larger than pump test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, then hydrological processes should have an effect on the earthquake rupture process. This measurement is made through continuous monitoring, so we can track the evolution of hydraulic properties
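
    The diffusivity quoted above is simply the ratio of the two reported estimates; a one-line check in Python, with values as reported in the abstract:

        S = 2e-4   # storage coefficient (dimensionless), as reported
        T = 6e-7   # transmissivity in m^2/s, as reported
        D = T / S  # hydraulic diffusivity, D = T/S
        print(D)   # 0.003 m^2/s, i.e. the reported 3x10^-3 m^2/s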

  17. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    NASA Astrophysics Data System (ADS)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

    Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator), the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was implemented to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: firstly the quasi-static loading phase which gradually increases stress in the system (~100 years), and secondly the dynamic rupture process which rapidly redistributes stress in the system (~100 seconds). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
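
    The time-dependent equation referred to above is the standard elastodynamic wave equation, written here in its usual strong form for orientation (not quoted from the authors):

        \rho\,\frac{\partial^2 u_i}{\partial t^2}
          = \frac{\partial \sigma_{ij}}{\partial x_j} + f_i,
        \qquad
        \sigma_{ij} = \lambda\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\,\varepsilon_{ij},

    with displacement u, stress sigma, body force f, and Lame parameters lambda and mu for an isotropic elastic medium.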

  18. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults have typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.

  19. Northern California LIDAR Data: A Tool for Mapping the San Andreas Fault and Pleistocene Marine Terraces in Heavily Vegetated Terrain

    NASA Astrophysics Data System (ADS)

    Prentice, C. S.; Crosby, C. J.; Harding, D. J.; Haugerud, R. A.; Merritts, D. J.; Gardner, T. W.; Koehler, R. D.; Baldwin, J. N.

    2003-12-01

    Recent acquisition of airborne LIDAR (also known as ALSM) data covering approximately 418 square kilometers of coastal northern California provides a powerful new tool for mapping geomorphic features related to the San Andreas Fault and coastal uplift. LIDAR data has been previously used in the Puget Lowland region of Washington to identify and map Holocene faults and uplifted shorelines concealed under dense vegetation (Haugerud et al., 2003; see http://pugetsoundlidar.org). Our effort represents the first use of LIDAR data for this purpose along the San Andreas Fault. This data set is the result of a collaborative effort between NASA Solid Earth and Natural Hazards Program, Goddard Space Flight Center, Stennis Space Center, USGS, and TerraPoint, LLC. The coverage extends from near Fort Ross, California, in Sonoma County, along the coast northward to the town of Mendocino, in Mendocino County, and as far inland as about 1-3 km east of the San Andreas Fault. The survey area includes about 70 km of the northern San Andreas Fault under dense redwood forest, and Pleistocene coastal marine terraces both north and south of the fault. The average data density is two laser pulses per square meter, with up to four LIDAR returns per pulse. Returns are classified as ground or vegetation, allowing construction of both canopy-top and bare-earth DEMs with 1.8m grid spacing. Vertical accuracy is better than 20 cm RMSE, confirmed by a network of ground-control points established using high-precision GPS surveying. We are using hillshade images generated from the bare-earth DEMs to begin detailed mapping of geomorphic features associated with San Andreas Fault traces, such as scarps, offset streams, linear valleys, shutter ridges, and sag ponds. In addition, we are using these data in conjunction with field mapping and interpretation of conventional 1:12,000 and 1:6000 scale aerial photographs to map and correlate marine terraces to better understand rates of coastal uplift, and

  20. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    USGS Publications Warehouse

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults, and therefore understanding the origin of clays in fault rocks and their distribution is of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from adjacent protolith, suggesting that formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, potentially influencing both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  1. Earth Science Keyword Stewardship: Access and Management through NASA's Global Change Master Directory (GCMD) Keyword Management System (KMS)

    NASA Astrophysics Data System (ADS)

    Stevens, T.; Olsen, L. M.; Ritz, S.; Morahan, M.; Aleman, A.; Cepero, L.; Gokey, C.; Holland, M.; Cordova, R.; Areu, S.; Cherry, T.; Tran-Ho, H.

    2012-12-01

    Discovering Earth science data can be complex if the catalog holding the data lacks structure. Controlled keyword vocabularies within metadata catalogs can improve data discovery. NASA's Global Change Master Directory's (GCMD) Keyword Management System (KMS) is a recently released RESTful web service for managing and providing access to controlled keywords (science keywords, service keywords, platforms, instruments, providers, locations, projects, data resolution, etc.). The KMS introduces a completely new paradigm for the use and management of the keywords and allows access to these keywords as SKOS Concepts (RDF), OWL, standard XML, and CSV. A universally unique identifier (UUID) is automatically assigned to each keyword, which uniquely identifies each concept and its associated information. A component of the KMS is the keyword manager, an internal tool that allows GCMD science coordinators to manage concepts. This includes adding, modifying, and deleting broader, narrower, or related concepts and associated definitions. The controlled keyword vocabulary represents over 20 years of effort and collaboration with the Earth science community. The maintenance, stability, and ongoing vigilance in maintaining mutually exclusive and parallel keyword lists are important for a "normalized" search and discovery, and provide a unique advantage for the science community. Modifications and additions are made based on community suggestions and internal review. To help maintain keyword integrity, science keyword rules and procedures for modification of keywords were developed. This poster will highlight the use of the KMS as a beneficial service for the stewardship and access of the GCMD keywords. Users will learn how to access the KMS and utilize the keywords. Best practices for managing an extensive keyword hierarchy will also be discussed. Participants will learn the process for making keyword suggestions, which subsequently help in building a controlled keyword
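
    As an illustration of what RESTful access to such a service looks like, a minimal Python sketch; the base URL, path, and query parameter below are assumptions for illustration only and should be checked against current KMS documentation:

        import requests

        # Assumed request shape; verify the path against KMS documentation.
        BASE = "https://gcmd.earthdata.nasa.gov/kms"   # assumed service root
        url = f"{BASE}/concepts/concept_scheme/sciencekeywords"
        resp = requests.get(url, params={"format": "csv"}, timeout=30)
        resp.raise_for_status()
        print(resp.text[:300])   # first rows of the keyword listing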

  2. Fault Tolerance Middleware for a Multi-Core System

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.

    2012-01-01

    Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the
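
    A schematic Python sketch of the introspection flow described above (error notification, error factory, severity assignment, recommended response); the names and the severity-to-response mapping are illustrative assumptions, not the flight code:

        from dataclasses import dataclass
        from enum import Enum

        class Severity(Enum):        # assigned by the error mapper
            IGNORE = 1
            LOG = 2
            ROLLBACK = 3
            SWAP_NODE = 4
            RESTART = 5

        @dataclass
        class ErrorReport:           # produced by the "error factory"
            source_core: int
            description: str
            severity: Severity

        def recommend(err: ErrorReport) -> str:
            """Introspection module: map severity to a recommended response."""
            return {
                Severity.IGNORE: "ignore",
                Severity.LOG: "log only",
                Severity.ROLLBACK: "roll back to last checkpoint",
                Severity.SWAP_NODE: "swap in a spare core",
                Severity.RESTART: "restart the application",
            }[err.severity]

        err = ErrorReport(3, "SEU detected in application task",
                          Severity.ROLLBACK)
        print(recommend(err))        # fault manager would invoke this response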

  3. The Heritage of Earth Science Applications in Policy, Business, and Management of Natural Resources

    NASA Astrophysics Data System (ADS)

    Macauley, M.

    2012-12-01

    From the first hand-held cameras on the Gemini space missions to present day satellite instruments, Earth observations have enhanced the management of natural resources including water, land, and air. Applications include the development of new methodology (for example, developing and testing algorithms or demonstrating how data can be used) and the direct use of data in decisionmaking and policy implementation. Using well-defined bibliographic search indices to systematically survey a broad social science literature, this project enables identification of a host of well-documented, practical and direct applications of Earth science data in resource management. This literature has not previously been well surveyed, aggregated, or analyzed for the heritage of lessons learned in practical application of Earth science data. In the absence of such a survey, the usefulness of Earth science data is underestimated and the factors that make people want to use -- and able to use -- the data are poorly understood. The project extends and updates previous analysis of social science applications of Landsat data to show their contemporary, direct use in new policy, business, and management activities and decisionmaking. The previous surveys (for example, Blumberg and Jacobson 1997; National Research Council 1998) find that the earliest attempts to use data are almost exclusively testing of methodology rather than direct use in resource management. Examples of methodology prototyping include Green et al. (1997) who demonstrate use of remote sensing to detect and monitor changes in land cover and use, Cowen et al. (1995) who demonstrate design and integration of GIS for environmental applications, Hutchinson (1991) who shows uses of data for famine early warning, and Brondizio et al. (1996) who show the link of thematic mapper data with botanical data. Blumberg and Jacobson (in Acevedo et al. 1996) show use of data in a study of urban development in the San Francisco Bay and the

  4. Program on Earth Observation Data Management Systems (EODMS), appendixes

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Gohagan, J. K.; Hill, C. T.; Morgan, R. P.; Bay, S. M.; Foutch, T. K.; Hays, T. R.; Ballard, R. J.; Makin, K. P.; Power, M. A.

    1976-01-01

    The needs of state, regional, and local agencies involved in natural resources management in Illinois, Iowa, Minnesota, Missouri, and Wisconsin are investigated to determine the design of satellite remotely sensed derivable information products. It is concluded that an operational Earth Observation Data Management System (EODMS) will be most beneficial if it provides a full range of services - from raw data acquisition to interpretation and dissemination of final information products. Included is a cost and performance analysis of alternative processing centers, and an assessment of the impacts of policy, regulation, and government structure on implementing large scale use of remote sensing technology in this community of users.

  5. Comparative study of two active faults in different stages of the earthquake cycle in central Japan -The Atera fault (with 1586 Tensho earthquake) and the Nojima fault (with 1995 Kobe earthquake)-

    NASA Astrophysics Data System (ADS)

    Matsuda, T.; Omura, K.; Ikeda, R.

    2003-12-01

    The National Research Institute for Earth Science and Disaster Prevention (NIED) has been conducting "fault zone drilling". Fault zone drilling is especially important in understanding the structure, composition, and physical properties of an active fault. In the Chubu district of central Japan, large active faults such as the Atotsugawa (with the 1858 Hietsu earthquake) and the Atera (with the 1586 Tensho earthquake) faults exist. After the occurrence of the 1995 Kobe earthquake, the importance of direct measurements in fault zones by drilling has been widely recognized. Here we describe the Atera fault and the Nojima fault; because these two faults are similar in geological setting (mostly composed of granitic rocks), a comparative study of the drilling investigations is straightforward. The features of the Atera fault, which was displaced by the 1586 Tensho earthquake, are as follows. Its total length is about 70 km; its general trend is N45°W, with left-lateral strike slip; and its slip rate is estimated as 3-5 m per 1000 years. Seismicity is very low at present, and lithologies around the fault are basically granitic rocks and rhyolite. Six boreholes have been drilled, ranging in depth from 400 m to 630 m. Four of these boreholes (Hatajiri, Fukuoka, Ueno and Kawaue) are located on a line crossing the Atera fault perpendicular to its trend. In the Kawaue well, mostly fractured and altered granitic rock continued from the surface to the bottom at 630 m. X-ray fluorescence (XRF) analysis of core samples, using the glass bead method, was conducted to estimate major element abundances. H2O+ contents are about 0.5 to 2.5 weight percent. This fractured zone is also characterized by logging data such as low resistivity, low P-wave velocity, low density and high neutron porosity. The 1995 Kobe (Hyogo-ken Nanbu) earthquake occurred along the NE-SW-trending Rokko-Awaji fault system, and the Nojima fault appeared on the surface on Awaji Island when this

  6. Internal Structure of Taiwan Chelungpu Fault Zone Gouges

    NASA Astrophysics Data System (ADS)

    Song, Y.; Song, S.; Tang, M.; Chen, F.; Chen, Y.

    2005-12-01

    Gouge formation is found to exist in brittle faults at all scales (1). This fine-grained gouge is thought to control earthquake instability, and thus investigating gouge textures and compositions is very important to an understanding of the earthquake process. Employing the transmission electron microscope (TEM) and a new transmission X-ray microscope (TXM), we study the internal structure of fault zone gouges from the cores of the Taiwan Chelungpu-fault Drilling Project (TCDP), which drilled into the fault zone of the 1999 Chi-Chi earthquake. This X-ray microscope is installed at beamline BL01B of the Taiwan Light Source, National Synchrotron Radiation Research Center (NSRRC). It provides 2D imaging and 3D tomography at energies of 8-11 keV with a spatial resolution of 25-60 nm, and is equipped with Zernike phase contrast capability for imaging light materials. In this work, we show measurements of gouge texture, particle size distribution and 3D structure of the ultracataclasite in fault gouges within a 12 cm interval at about 1111.29 m depth. These characterizations of the transition from fault core to damage zone are related to comminution and fracture energy in earthquake faulting. The TXM data show that the particle size distribution of the ultracataclasite lies between 150 nm and 900 nm in diameter. We will continue analyzing the particle size distribution, porosity and 3D structure of the fault zone gouges in the transition from fault core to damage zone to constrain the comminution and fracture surface energy in earthquake faulting (2-5). The results may ascertain the implications for the nucleation, growth, transition, structure and permeability of fault zones (6-8). Furthermore, it may be possible to infer the mechanism of faulting, the physical and chemical properties of the fault, and the nucleation of the earthquake. References: 1) B. Wilson, T. Dewers, Z. Reches and J. Brune, Nature, 434 (2005) 749. 2) S. E. Schulz and J. P. Evans

  7. Statistical mechanics and scaling of fault populations with increasing strain in the Corinth Rift

    NASA Astrophysics Data System (ADS)

    Michas, Georgios; Vallianatos, Filippos; Sammonds, Peter

    2015-12-01

    Scaling properties of fracture/fault systems are studied in order to characterize the mechanical properties of rocks and to provide insight into the mechanisms that govern fault growth. A comprehensive image of the fault network in the Corinth Rift, Greece, obtained through numerous field studies and marine geophysical surveys, allows for the first time such a study over the entire area of the Rift. We compile a detailed fault map of the area and analyze the scaling properties of fault trace-lengths by using a statistical mechanics model, derived in the framework of generalized statistical mechanics and associated maximum entropy principle. By using this framework, a range of asymptotic power-law to exponential-like distributions are derived that can well describe the observed scaling patterns of fault trace-lengths in the Rift. Systematic variations and in particular a transition from asymptotic power-law to exponential-like scaling are observed to be a function of increasing strain in distinct strain regimes in the Rift, providing quantitative evidence for such crustal processes in a single tectonic setting. These results indicate the organization of the fault system as a function of brittle strain in the Earth's crust and suggest there are different mechanisms for fault growth in the distinct parts of the Rift. In addition, other factors such as fault interactions and the thickness of the brittle layer affect how the fault system evolves in time. The results suggest that regional strain, fault interactions and the boundary condition of the brittle layer may control fault growth and the fault network evolution in the Corinth Rift.
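
    The scaling family referred to above is conventionally expressed, in generalized (Tsallis-type) statistical mechanics, through the q-exponential function; shown here in its standard form for orientation, not quoted from the paper:

        P(>L) = \exp_q\!\left(-\frac{L}{L_0}\right),
        \qquad
        \exp_q(x) = \left[1 + (1-q)\,x\right]^{\frac{1}{1-q}},

    which reduces to an ordinary exponential as q approaches 1 and has an asymptotic power-law tail for q > 1, spanning the transition described above.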

  8. Spatial Patterns of Geomorphic Surface Features and Fault Morphology Based on Diffusion Equation Modeling of the Kumroch Fault Kamchatka Peninsula, Russia

    NASA Astrophysics Data System (ADS)

    Heinlein, S. N.

    2013-12-01

    Remote sensing data sets are widely used for evaluation of surface manifestations of active tectonics. This study utilizes ASTER GDEM and Landsat ETM+ data sets with Google Earth images draped over terrain models. This study evaluates 1) the surface geomorphology surrounding the study area with these data sets and 2) the morphology of the Kumroch Fault, using diffusion modeling to estimate the diffusivity constant (κ) and to estimate slip rates from ground data measured across fault scarps by Kozhurin et al. (2006). Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a fault surface and may therefore provide more accurate estimates of slip rate than the rate calculated by dividing scarp offset by the age of the ruptured surface. Profiles of scarps collected by Kozhurin et al. (2006), formed by several events distributed through time, were evaluated using a constant slip rate (CSR) solution, which yields the value A/κ (half the slip rate divided by diffusivity). The time elapsed since slip initiated on the fault is determined by establishing a value for κ and measuring total scarp offset. CSR nonlinear modeling estimates of κ range from 8 m²/ka to 14 m²/ka on the Kumroch Fault, which indicates slip rates of 0.6-1.0 mm/yr since 3.4-3.7 ka. This method provides a quick and inexpensive way to gather data for a regional tectonic study and to establish estimated rates of tectonic activity. Analyses of the remote sensing data are providing new insight into the role of active tectonics within the region. Fault scarp diffusion rates were estimated from the models of Mattson and Bruhn (2001) and DuRoss and Bruhn (2004), calibrated against the trench profiles of the Kumroch Fault from Kozhurin et al. (2006), Kozhurin (2007), Kozhurin et al. (2008) and Pinegina et al. (2012). (-) means that no data could be determined.
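
    A back-of-envelope Python sketch of how the CSR output described above is used; the numbers are placeholders, not the study's fitted values:

        # CSR profile fitting returns A/kappa, where A is half the slip rate.
        def slip_rate(a_over_kappa: float, kappa: float) -> float:
            return 2.0 * a_over_kappa * kappa   # kappa in m^2/ka gives m/ka

        def time_since_slip_began(total_offset_m: float, rate_m_per_ka: float) -> float:
            return total_offset_m / rate_m_per_ka    # elapsed time in ka

        rate = slip_rate(a_over_kappa=0.04, kappa=10.0)  # 0.8 m/ka = 0.8 mm/yr
        print(rate, time_since_slip_began(2.8, rate))    # ~3.5 ka elapsed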

  9. CEOS Contributions to Informing Energy Management and Policy Decision Making Using Space-Based Earth Observations

    NASA Technical Reports Server (NTRS)

    Eckman, Richard S.

    2009-01-01

    Earth observations are playing an increasingly significant role in informing decision making in the energy sector. In renewable energy applications, space-based observations now routinely augment sparse ground-based observations used as input for renewable energy resource assessment applications. As one of the nine Group on Earth Observations (GEO) societal benefit areas, the enhancement of management and policy decision making in the energy sector is receiving attention in activities conducted by the Committee on Earth Observation Satellites (CEOS). CEOS has become the "space arm" for the implementation of the Global Earth Observation System of Systems (GEOSS) vision. It is directly supporting the space-based, near-term tasks articulated in the GEO three-year work plan. This paper describes a coordinated program of demonstration projects conducted by CEOS member agencies and partners to utilize Earth observations to enhance energy management end-user decision support systems. I discuss the importance of engagement with stakeholders and understanding their decision support needs in successfully increasing the uptake of Earth observation products for societal benefit. Several case studies are presented, demonstrating the importance of providing data sets in formats and units familiar and immediately usable by decision makers. These projects show the utility of Earth observations to enhance renewable energy resource assessment in the developing world, forecast space-weather impacts on the power grid, and improve energy efficiency in the built environment.

  10. Mid-crustal detachment and ramp faulting in the Markham Valley, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Stevens, C.; McCaffrey, R.; Silver, E. A.; Sombo, Z.; English, P.; van der Kevie, J.

    1998-09-01

    Earthquakes and geodetic evidence reveal the presence of a low-angle, mid-crustal detachment fault beneath the Finisterre Range that connects to a steep ramp surfacing near the Ramu-Markham Valley of Papua New Guinea. Waveforms of three large (Mw 6.3 to 6.9) thrust earthquakes that occurred in October 1993 beneath the Finisterre Range 10 to 30 km north of the valley reveal 15° north-dipping thrusts at about 20 km depth. Global Positioning System measurements show up to 20 cm of coseismic slip occurred across the valley, requiring that the active fault extend to within a few hundred meters of the Earth's surface beneath the Markham Valley. Together, these data imply that a gently north-dipping thrust fault in the middle or lower crust beneath the Finisterre Range steepens and shallows southward, forming a ramp fault beneath the north side of the Markham Valley. Waveforms indicate that both the ramp and detachment fault were active during at least one of the earthquakes. While the seismic potential of mid-crustal detachments elsewhere is debated, in Papua New Guinea the detachment fault shows the capability of producing large earthquakes.

  11. Integrating Near Fault Observatories (NFO) for EPOS Implementation Phase

    NASA Astrophysics Data System (ADS)

    Chiaraluce, Lauro

    2015-04-01

    Following the European Plate Observing System (EPOS) project vision aimed at creating a pan-European infrastructure for Earth sciences to support science for a more sustainable society, we are working on the integration of Near-Fault Observatories (NFOs). NFOs are state-of-the-art research infrastructures consisting of advanced networks of multi-parametric sensors continuously monitoring the chemical and physical processes related to the underlying earth instabilities governing active fault evolution and the genesis of earthquakes. Such a methodological approach, currently applicable only at the local scale (areas of tens to a few hundreds of kilometres), is based on extremely dense networks of less common instruments, requiring extraordinary work on data quality control and multi-parameter data description. These networks in fact usually complement regional seismic and geodetic networks (typically with station spacing of 50-100 km) with high-density distributions of seismic, geodetic, geochemical and geophysical sensors located typically within 10-20 km of active faults where large earthquakes are expected in the future. In the initial phase of EPOS-IP, seven NFO nodes will be linked: the Alto Tiberina and Irpinia Observatories in Italy, the Corinth Observatory in Greece, the South-Iceland Seismic Zone, the Valais Observatory in Switzerland, the Marmara Sea GEO Supersite in Turkey (EU MARSite) and the Vrancea Observatory in Romania. Our work is aimed at establishing standards and integration within this first core group of NFOs, while other NFOs are expected to be installed in the coming years, adopting the standards established and developed within the EPOS Thematic Core Services (TCS). The goal of our group is to build, upon the initial development supported by these few key national observatories coordinated under previous EU projects (NERA and REAKT), inclusive and harmonised TCS supporting the installation over the next decade of tens of near-fault

  12. Using EarthScope Construction of the Plate Boundary Observatory to Provide Locally Based Experiential Education and Outreach

    NASA Astrophysics Data System (ADS)

    Jackson, M.; Eriksson, S.; Barbour, K.; Venator, S.; Mencin, D.; Prescott, W.

    2006-12-01

    region to study the area between the San Andreas Fault and the San Jacinto Fault. The event provided an opportunity for the Pathfinder Ranch to unveil the instruments and describe the important science behind the project to the school's students, staff, and board members. The two strainmeters will be used as a teaching tool for several years as hundreds of students filter through Pathfinder school. UNAVCO sponsors a summer PBO Student Field Assistant Program designed to give students from a variety of educational backgrounds the opportunity get involved in the construction of the EarthScope PBO project. The goal of the program is to excite students about the geodetic sciences through direct work experience. Over the summers of 2005 and 2006, PBO sponsored a total of 11 student assistants who helped to install GPS and strainmeter stations and to perform operations and maintenance tasks. PBO plans to expand this program in 2007 by including student assistants in our data management and strainmeter data processing activities. In August, 2006, UNAVCO led a group of scientists, teachers, and curriculum developers to identify key scientific concepts of EarthScope research and how they can be translated into the Earth Science classroom at the middle and high school levels. The focus was on the Cascadia region. A feature of the workshop was to use PBO and USArray data in the classroom.

  13. Failure mode effect analysis and fault tree analysis as a combined methodology in risk management

    NASA Astrophysics Data System (ADS)

    Wessiani, N. A.; Yoshio, F.

    2018-04-01

    Many studies have reported the implementation of Failure Mode Effect Analysis (FMEA) and Fault Tree Analysis (FTA) as methods in risk management. However, most of these studies choose only one of the two methods in their risk management methodology. Combining the two methods, on the other hand, reduces the drawbacks each method has when implemented separately. This paper aims to combine the methodologies of FMEA and FTA in assessing risk. A case study in a metal company illustrates how this methodology can be implemented. In the case study, the combined methodology is used to assess the internal risks that occur in the production process. Those internal risks should then be mitigated based on their risk levels.
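
    To make the combination concrete, a minimal Python sketch pairing an FMEA risk priority number with a two-gate fault tree; the failure modes, scores, and probabilities are invented for illustration:

        # FMEA side: risk priority number for one failure mode.
        def rpn(severity: int, occurrence: int, detection: int) -> int:
            return severity * occurrence * detection   # each scored 1-10

        # FTA side: top-event probability for independent basic events.
        def or_gate(probs):
            p = 1.0
            for q in probs:
                p *= (1.0 - q)
            return 1.0 - p

        def and_gate(probs):
            p = 1.0
            for q in probs:
                p *= q
            return p

        print(rpn(severity=8, occurrence=4, detection=6))   # 192 -> high priority
        print(or_gate([and_gate([0.02, 0.10]), 0.005]))     # ~0.007 top event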

  14. Earth Observatory Satellite system definition study. Report no. 4: Management approach recommendations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A management approach for the Earth Observatory Satellite (EOS) which will meet the challenge of a constrained cost environment is presented. Areas of consideration are contracting techniques, test philosophy, reliability and quality assurance requirements, commonality options, and documentation and control requirements. The various functional areas which were examined for cost reduction possibilities are identified. The recommended management approach is developed to show the primary and alternative methods.

  15. ARGES: an Expert System for Fault Diagnosis Within Space-Based ECLS Systems

    NASA Technical Reports Server (NTRS)

    Pachura, David W.; Suleiman, Salem A.; Mendler, Andrew P.

    1988-01-01

    ARGES (Atmospheric Revitalization Group Expert System) is a demonstration prototype expert system for fault management for the Solid Amine, Water Desorbed (SAWD) CO2 removal assembly, associated with the Environmental Control and Life Support (ECLS) System. ARGES monitors and reduces data in real time from either the SAWD controller or a simulation of the SAWD assembly. It can detect gradual degradations or predict failures. This allows graceful shutdown and scheduled maintenance, which reduces crew maintenance overhead. Status and fault information is presented in a user interface that simulates what would be seen by a crewperson. The user interface employs animated color graphics and an object oriented approach to provide detailed status information, fault identification, and explanation of reasoning in a rapidly assimilated manner. In addition, ARGES recommends possible courses of action for predicted and actual faults. ARGES is seen as a forerunner of AI-based fault management systems for manned space systems.

  16. Active Fault Topography and Fault Outcrops in the Central Part of the Nukumi fault, the 1891 Nobi Earthquake Fault System, Central Japan

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Ueta, K.; Inoue, D.; Aoyagi, Y.; Yanagida, M.; Ichikawa, K.; Goto, N.

    2010-12-01

    It is important to evaluate the magnitude of earthquakes caused by multiple active faults, taking their simultaneous rupture into account. The simultaneity of adjacent active faults is often judged on the basis of geometric distance, except where paleoseismic records are known. We have been studying, since 2009, the step area between the Nukumi fault and the Neodani fault, which ruptured consecutively in the 1891 Nobi earthquake. The purpose of this study is to establish improved techniques for evaluating the simultaneity of adjacent active faults, in addition to the paleoseismic record and the geometric distance. Geomorphological, geological and reconnaissance microearthquake surveys are conducted. The present work is intended to clarify the distribution of tectonic geomorphology along the Nukumi fault and the Neodani fault by high-resolution interpretation of airborne LiDAR DEMs and aerial photographs, together with field surveys of outcrops and locations. The study area of this work is the southeastern Nukumi fault and the northwestern Neodani fault. We interpret the DEM using shaded relief maps and stereoscopic bird's-eye views made from 2 m mesh DEM data obtained by the airborne laser scanner of Kokusai Kogyo Co., Ltd. The aerial photographic survey, using 1/16,000 scale photographs, serves to confirm the DEM interpretation. The topographic survey reveals continuous tectonic topography, namely left-lateral displacement of ridge and valley lines and reverse scarplets, along the Nukumi fault and the Neodani fault. From Ogotani, 2 km southeast of Nukumi Pass (identified in previous studies as the southeastern end of the surface rupture along the Nukumi fault), to Neooppa, 9 km southeast of Nukumi Pass, detailed DEM investigation allows us to interpret left-lateral topographies and small uphill-facing fault scarps on the terrace surface. These topographies were not recognized by the aerial photographic survey because of heavy vegetation. We have found several new

  17. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within these methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. A common system interface eases use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
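
    The memory-seeding style of software-implemented fault insertion described above can be emulated in a few lines of Python (a toy bit-flip on a byte array, not FIAT itself):

        import random

        def seed_bit_flip(memory: bytearray, rng: random.Random) -> int:
            """Flip one random bit, emulating an injected memory fault."""
            addr = rng.randrange(len(memory))
            memory[addr] ^= 1 << rng.randrange(8)
            return addr

        mem = bytearray(b"baseline workload state")
        addr = seed_bit_flip(mem, random.Random(7))
        print(addr, mem)   # detectors should now observe an error here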

  18. Comprehensive Fault Tolerance and Science-Optimal Attitude Planning for Spacecraft Applications

    NASA Astrophysics Data System (ADS)

    Nasir, Ali

    Spacecraft operate in a harsh environment, are costly to launch, and experience unavoidable communication delay and bandwidth constraints. These factors motivate the need for effective onboard mission and fault management. This dissertation presents an integrated framework to optimize science goal achievement while identifying and managing encountered faults. Goal-related tasks are defined by pointing the spacecraft instrumentation toward distant targets of scientific interest. The relative value of science data collection is traded with risk of failures to determine an optimal policy for mission execution. Our major innovation in fault detection and reconfiguration is to incorporate fault information obtained from two types of spacecraft models: one based on the dynamics of the spacecraft and the second based on the internal composition of the spacecraft. For fault reconfiguration, we consider possible changes in both dynamics-based control law configuration and the composition-based switching configuration. We formulate our problem as a stochastic sequential decision problem or Markov Decision Process (MDP). To avoid the computational complexity involved in a fully-integrated MDP, we decompose our problem into multiple MDPs. These MDPs include planning MDPs for different fault scenarios; a fault detection MDP based on a logic-based model of spacecraft component and system functionality; an MDP for resolving conflicts between fault information from the logic-based model and the dynamics-based spacecraft models; and the reconfiguration MDP that generates a policy optimized over the relative importance of the mission objectives versus spacecraft safety. Approximate Dynamic Programming (ADP) methods for the decomposition of the planning and fault detection MDPs are applied. To show the performance of the MDP-based frameworks and ADP methods, a suite of spacecraft attitude planning case studies are described. These case studies are used to analyze the content and
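
    A toy value-iteration sketch in Python for a reconfiguration MDP of the kind described (two states, two actions; rewards trade science value against safety; all numbers are invented):

        # States: 0 = healthy, 1 = faulted. Actions: "observe" or "safe".
        # P[s][a] -> list of (next_state, prob); R[s][a] -> immediate reward.
        P = {0: {"observe": [(0, 0.9), (1, 0.1)], "safe": [(0, 1.0)]},
             1: {"observe": [(1, 1.0)],           "safe": [(0, 0.7), (1, 0.3)]}}
        R = {0: {"observe": 1.0, "safe": 0.0},
             1: {"observe": -2.0, "safe": -0.5}}
        gamma, V = 0.95, {0: 0.0, 1: 0.0}

        for _ in range(500):   # value iteration to (near) convergence
            V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                        for a in P[s]) for s in V}

        policy = {s: max(P[s], key=lambda a: R[s][a] +
                         gamma * sum(p * V[t] for t, p in P[s][a])) for s in V}
        print(V, policy)   # e.g. observe while healthy, go safe when faulted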

  19. Increasing the Use of Earth Science Data and Models in Air Quality Management.

    PubMed

    Milford, Jana B; Knight, Daniel

    2017-04-01

    In 2010, the U.S. National Aeronautics and Space Administration (NASA) initiated the Air Quality Applied Science Team (AQAST) as a 5-year, $17.5-million award with 19 principal investigators. AQAST aims to increase the use of Earth science products in air quality-related research and to help meet air quality managers' information needs. We conducted a Web-based survey and a limited number of follow-up interviews to investigate federal, state, tribal, and local air quality managers' perspectives on usefulness of Earth science data and models, and on the impact AQAST has had. The air quality managers we surveyed identified meeting the National Ambient Air Quality Standards for ozone and particulate matter, emissions from mobile sources, and interstate air pollution transport as top challenges in need of improved information. Most survey respondents viewed inadequate coverage or frequency of satellite observations, data uncertainty, and lack of staff time or resources as barriers to increased use of satellite data by their organizations. Managers who have been involved with AQAST indicated that the program has helped build awareness of NASA Earth science products, and assisted their organizations with retrieval and interpretation of satellite data and with application of global chemistry and climate models. AQAST has also helped build a network between researchers and air quality managers with potential for further collaborations. NASA's Air Quality Applied Science Team (AQAST) aims to increase the use of satellite data and global chemistry and climate models for air quality management purposes, by supporting research and tool development projects of interest to both groups. Our survey and interviews of air quality managers indicate they found value in many AQAST projects and particularly appreciated the connections to the research community that the program facilitated. Managers expressed interest in receiving continued support for their organizations' use of

  20. Comparison of fault-related folding algorithms to restore a fold-and-thrust belt

    NASA Astrophysics Data System (ADS)

    Brandes, Christian; Tanner, David

    2017-04-01

    Fault-related folding means the contemporaneous evolution of folds as a consequence of fault movement. It is a common deformation process in the upper crust that occurs worldwide in accretionary wedges, fold-and-thrust belts, and intra-plate settings, in either strike-slip, compressional, or extensional regimes. Over the last 30 years, different algorithms have been developed to simulate the kinematic evolution of fault-related folds. All these models of fault-related folding include similar simplifications and limitations and use the same kinematic behaviour throughout the model (Brandes & Tanner, 2014). We used a natural example of fault-related folding from the Limón fold-and-thrust belt in eastern Costa Rica to test two different algorithms and to compare the resulting geometries. A thrust fault and its hanging-wall anticline were restored using both the trishear method (Allmendinger, 1998; Zehnder & Allmendinger, 2000) and the fault-parallel flow approach (Ziesch et al., 2014); both methods are widely used in academia and industry. The two algorithms restore the hanging-wall fold above the thrust fault in substantially different ways. This is largely a function of the propagation-to-slip ratio of the thrust, which controls the geometry of the related anticline. Understanding the controlling factors for anticline evolution is important for the evaluation of potential hydrocarbon reservoirs and the characterization of fault processes. References: Allmendinger, R.W., 1998. Inverse and forward numerical modeling of trishear fault propagation folds. Tectonics, 17, 640-656. Brandes, C., Tanner, D.C. 2014. Fault-related folding: a review of kinematic models and their application. Earth Science Reviews, 138, 352-370. Zehnder, A.T., Allmendinger, R.W., 2000. Velocity field for the trishear model. Journal of Structural Geology, 22, 1009-1014. Ziesch, J., Tanner, D.C., Krawczyk, C.M. 2014. Strain associated with the fault-parallel flow algorithm during kinematic fault
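
    Both approaches are kinematic velocity fields that are integrated backwards in time to restore a section. As a rough, self-contained illustration (not the authors' implementation), the sketch below evaluates a trishear-style velocity field in fault-tip coordinates for the simplest symmetric case, in which the fault-parallel velocity varies linearly across the triangular zone; Zehnder & Allmendinger (2000) derive the general family of such fields. The apical half-angle and slip rate are invented values.

    import numpy as np

    def trishear_velocity(x, y, v0=1.0, phi=np.deg2rad(30.0)):
        """Velocity (vx, vy) at (x, y): x along the fault away from the tip,
        y perpendicular to it. Linear-in-y special case, hanging wall on top."""
        m = np.tan(phi)
        if x <= 0.0:                  # behind the tip: two rigid blocks
            return (v0, 0.0) if y >= 0.0 else (0.0, 0.0)
        if y >= x * m:                # hanging wall moves with the fault
            return (v0, 0.0)
        if y <= -x * m:               # footwall is fixed
            return (0.0, 0.0)
        # Inside the trishear zone: vx linear in y, vy chosen so the field is
        # incompressible (dvx/dx + dvy/dy = 0) and matches both boundaries.
        vx = 0.5 * v0 * (y / (x * m) + 1.0)
        vy = 0.25 * v0 * m * ((y / (x * m)) ** 2 - 1.0)
        return (vx, vy)

    print(trishear_velocity(2.0, 0.5))   # a point inside the triangular zone

    Restoration amounts to advecting the deformed horizons through this field with the slip reversed; the propagation-to-slip ratio enters by moving the fault-tip coordinate frame during the integration.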

  1. Topographic expression of active faults in the foothills of the Northern Apennines

    NASA Astrophysics Data System (ADS)

    Picotti, Vincenzo; Ponza, Alessio; Pazzaglia, Frank J.

    2009-09-01

    Active faults that rupture the earth's surface leave an imprint on the topography that is recognized using a combination of geomorphic and geologic metrics including triangular facets, the shape of mountain fronts, the drainage network, and incised river valleys with inset terraces. We document the presence of a network of active, high-angle extensional faults, collectively embedded in the actively shortening mountain front of the Northern Apennines, that possess unique geomorphic expressions. We measure the strain rate for these structures and find that they have a constant throw-to-length ratio. We demonstrate the necessary and sufficient conditions for triangular facet development in the footwalls of these faults and argue that rock-type exerts the strongest control. The slip rates of these faults range from 0.1 to 0.3 mm/yr, which is similar to the average rate of river incision and mountain front unroofing determined by corollary studies. The faults are a near-surface manifestation of deeper crustal processes that are actively uplifting rocks and growing topography at a rate commensurate with surface processes that are eroding the mountain front to base level.

  2. Depending on Partnerships to Manage NASA's Earth Science Data

    NASA Astrophysics Data System (ADS)

    Behnke, J.; Lindsay, F. E.; Lowe, D. R.

    2015-12-01

    increase in user demand that has occurred over the past 15 years. We will present how the EOSDIS relies on partnerships to support the challenges of managing NASA's Earth Science data.

  3. A Unified Nonlinear Adaptive Approach for Detection and Isolation of Engine Faults

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong; Farfan-Ramos, Luis; Simon, Donald L.

    2010-01-01

    A challenging problem in aircraft engine health management (EHM) system development is to detect and isolate faults in system components (i.e., compressor, turbine), actuators, and sensors. Existing nonlinear EHM methods often deal with component faults, actuator faults, and sensor faults separately, which may potentially lead to incorrect diagnostic decisions and unnecessary maintenance. Therefore, it would be ideal to address sensor faults, actuator faults, and component faults under one unified framework. This paper presents a systematic and unified nonlinear adaptive framework for detecting and isolating sensor faults, actuator faults, and component faults for aircraft engines. The fault detection and isolation (FDI) architecture consists of a parallel bank of nonlinear adaptive estimators. Adaptive thresholds are appropriately designed such that, in the presence of a particular fault, all components of the residual generated by the adaptive estimator corresponding to the actual fault type remain below their thresholds. If the faults are sufficiently different, then at least one component of the residual generated by each remaining adaptive estimator should exceed its threshold. Therefore, based on the specific response of the residuals, sensor faults, actuator faults, and component faults can be isolated. The effectiveness of the approach was evaluated using the NASA C-MAPSS turbofan engine model, and simulation results are presented.
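
    The isolation logic can be made concrete with a small sketch, assuming synthetic residuals and fixed thresholds in place of the paper's adaptively designed ones: the estimator matched to the actual fault is the only one whose residual components all remain below threshold.

    import numpy as np

    # Synthetic residuals from a bank of three fault-matched estimators; the
    # values are placeholders, not C-MAPSS outputs.
    fault_types = ["sensor", "actuator", "component"]
    residuals = {"sensor":    np.array([0.2, 0.4]),    # all below threshold
                 "actuator":  np.array([1.9, 0.3]),    # exceeds a threshold
                 "component": np.array([0.5, 2.2])}    # exceeds a threshold
    thresholds = {f: np.array([1.0, 1.0]) for f in fault_types}

    def isolate(residuals, thresholds):
        # Declare a fault when exactly one estimator keeps all residual
        # components below its thresholds while every other estimator
        # exceeds at least one of its own.
        consistent = [f for f in fault_types
                      if np.all(residuals[f] < thresholds[f])]
        return consistent[0] if len(consistent) == 1 else None

    print("isolated fault type:", isolate(residuals, thresholds))  # -> sensor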

  4. Policy Document on Earth Observation for Urban Planning and Management: State of the Art and Recommendations for Application of Earth Observation in Urban Planning

    NASA Technical Reports Server (NTRS)

    Nichol, Janet; King, Bruce; Xiaoli, Ding; Dowman, Ian; Quattrochi, Dale; Ehlers, Manfred

    2007-01-01

    A policy document on earth observation for urban planning and management resulting from a workshop held in Hong Kong in November 2006 is presented. The aim of the workshop was to provide a forum for researchers and scientists specializing in earth observation to interact with practitioners working in different aspects of city planning, in a complex and dynamic city, Hong Kong. A summary of the current state of the art, limitations, and recommendations for the use of earth observation in urban areas is presented here as a policy document.

  5. National Aeronautics and Space Administration (NASA) Earth Science Research for Energy Management. Part 1; Overview of Energy Issues and an Assessment of the Potential for Application of NASA Earth Science Research

    NASA Technical Reports Server (NTRS)

    Zell, E.; Engel-Cox, J.

    2005-01-01

    Effective management of energy resources is critical for the U.S. economy, the environment, and, more broadly, for sustainable development and alleviating poverty worldwide. The scope of energy management is broad, ranging from energy production and end use to emissions monitoring and mitigation and long-term planning. Given the extensive NASA Earth science research on energy and related weather and climate-related parameters, and rapidly advancing energy technologies and applications, there is great potential for increased application of NASA Earth science research to selected energy management issues and decision support tools. The NASA Energy Management Program Element is already involved in a number of projects applying NASA Earth science research to energy management issues, with a focus on solar and wind renewable energy and developing interests in energy modeling, short-term load forecasting, energy efficient building design, and biomass production.

  6. San Andreas-sized Strike-slip Fault on Europa

    NASA Technical Reports Server (NTRS)

    1998-01-01

    opens the fault and subsequent tidal stress causes it to move lengthwise in one direction. Then tidal forces close the fault again, preventing the area from moving back to its original position. Daily tidal cycles produce a steady accumulation of lengthwise offset motions. Here on Earth, unlike Europa, large strike-slip faults like the San Andreas are set in motion by plate tectonic forces.

    North is to the top of the picture and the sun illuminates the surface from the top. The image, centered at 66 degrees south latitude and 195 degrees west longitude, covers an area approximately 300 by 203 kilometers (185 by 125 miles). The pictures were taken on September 26, 1998 by Galileo's solid-state imaging system.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  7. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, L.; Li, H.; Brodsky, E. E.; Wang, H.; Pei, J.

    2012-12-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10⁻⁷ strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response yield an average storage coefficient S of 2×10⁻⁴ and an average transmissivity T of 6×10⁻⁷ m²/s. Calculating the hydraulic diffusivity as D = T/S yields D = 3×10⁻³ m²/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, then hydrological processes should have an effect on the earthquake rupture process. This measurement is made through continuous monitoring, and we can track the evolution of hydraulic properties
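
    A minimal sketch of the phase/amplitude estimation that underlies such an analysis, assuming synthetic water-level and tidal-strain series in place of the borehole records and a plain least-squares fit in place of the Markov chain Monte Carlo treatment:

    import numpy as np

    M2_HOURS = 12.4206                       # period of the M2 tide
    t = np.arange(0.0, 30 * 24, 1.0)         # 30 days of hourly samples
    w = 2 * np.pi / M2_HOURS

    strain = 2e-8 * np.cos(w * t)                        # imposed tidal strain
    level = 0.03 * np.cos(w * t - np.deg2rad(30.0))      # water level, lagging 30 deg

    def m2_fit(t, y, w):
        # least-squares fit y ~ a*cos(wt) + b*sin(wt) -> amplitude and phase
        A = np.column_stack([np.cos(w * t), np.sin(w * t)])
        a, b = np.linalg.lstsq(A, y, rcond=None)[0]
        return np.hypot(a, b), np.arctan2(b, a)

    amp_s, ph_s = m2_fit(t, strain, w)
    amp_l, ph_l = m2_fit(t, level, w)
    print("amplitude response (strain/m):", amp_s / amp_l)   # ~6.7e-7
    print("phase lag (deg):", np.rad2deg(ph_l - ph_s))       # ~30

    # With T and S constrained, the hydraulic diffusivity follows directly:
    T, S = 6e-7, 2e-4
    print("D = T/S =", T / S, "m^2/s")                       # 3e-3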

  8. Necessity of using heterogeneous ellipsoidal Earth model with terrain to calculate co-seismic effect

    NASA Astrophysics Data System (ADS)

    Cheng, Huihong; Zhang, Bei; Zhang, Huai; Huang, Luyuan; Qu, Wulin; Shi, Yaolin

    2016-04-01

    Co-seismic deformation and stress changes, which reflect the elasticity of the earth, are very important in earthquake dynamics, and also to other issues such as the evaluation of seismic risk, the fracture process, and the triggering of earthquakes. Many researchers have studied dislocation theory and co-seismic deformation, producing half-space homogeneous models, half-space stratified models, spherical stratified models, and so on. In particular, the models of Okada (1992) and Wang (2003, 2006) are widely applied in calculating co-seismic and post-seismic effects. However, since neither the semi-infinite half-space model nor the layered model takes Earth curvature, heterogeneity, or topography into consideration, large errors arise when calculating the co-seismic displacement of a great earthquake over its impacted area. Meanwhile, the computational methods for calculating co-seismic strain and stress differ between spherical and plane models. Here, we adopted the finite element method, which can deal well with the complex characteristics of rock (such as anisotropy and discontinuities) and with different conditions. We use mesh-adaptive techniques to automatically refine the mesh at the fault and adopt equivalent volume forces to replace the dislocation source, which avoids the difficulty of handling the discontinuity surface with conventional methods (Zhang et al., 2015). We constructed an earth model that includes the earth's layered structure and curvature; the upper boundary was set as a free surface and the core-mantle boundary was set under buoyancy forces. First, based on the precision requirements, we take a test model, a strike-slip fault (500 km long and 50 km wide, with 10 m of slip), as an example. Because of the curvature of the Earth, some errors certainly occur in plane coordinates, just as in previous studies (Dong et al., 2014; Sun et al., 2012). However, we also found that: 1) the co

  9. Program on Earth Observation Data Management Systems (EODMS)

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Gohagan, J. K.; Hill, C. T.; Morgan, R. P.; Hays, T. R.; Ballard, R. J.; Crnkovick, G. R.; Schaeffer, M. A.

    1976-01-01

    An assessment was made of the needs of a group of potential users of satellite remotely sensed data (state, regional, and local agencies) involved in natural resources management in five states, and alternative data management systems to satisfy these needs are outlined. Tasks described include: (1) a comprehensive data needs analysis of state and local users; (2) the design of remote sensing-derivable information products that serve priority state and local data needs; (3) a cost and performance analysis of alternative processing centers for producing these products; (4) an assessment of the impacts of policy, regulation and government structure on implementing large-scale use of remote sensing technology in this community of users; and (5) the elaboration of alternative institutional arrangements for operational Earth Observation Data Management Systems (EODMS). It is concluded that an operational EODMS will be of most use to state, regional, and local agencies if it provides a full range of information services -- from raw data acquisition to interpretation and dissemination of final information products.

  10. Conditions of Fissuring in a Pumped-Faulted Aquifer System

    NASA Astrophysics Data System (ADS)

    Hernandez-Marin, M.; Burbey, T. J.

    2007-12-01

    Earth fissuring associated with subsidence from groundwater pumping is problematic in many arid-zone heavily pumped basins such as Las Vegas Valley. Long-term pumping at rates considerably greater than the natural recharge rate has stressed the heterogeneous aquifer system, resulting in a complex stress-strain regime. A rigorous artificial recharge program coupled with increased surface-water importation has allowed water levels to appreciably recover, which has led to surface rebound in some localities. Nonetheless, new fissures continue to appear, particularly near basin-fill faults that behave as barriers to subsidence bowls. The purpose of this research is to develop a series of computational models to better understand the influence that structure (faults), pumping, and hydrostratigraphy have on the generation and propagation of fissures. The hydrostratigraphy of Las Vegas Valley consists of aquifers, aquitards, and a relatively dry vadose zone that may be as thick as 100 m in much of the valley. Quaternary faults are typically depicted as scarps resulting from pre-pumping extensional tectonic events and are probably not responsible for the observed strain. The models developed to simulate the stress-strain and deformation processes in the faulted, pumped aquifer-aquitard system of Las Vegas use the ABAQUS CAE (Complete ABAQUS Environment) software system. ABAQUS is a sophisticated engineering-industry finite-element modeling package capable of simulating the complex fault-fissure system described here. A brittle failure criterion based on the tensile strength of the materials and the acting stresses (from previous models) is being used to understand how and where fissures are likely to form. Hypothetical simulations include the role that faults and the vadose zone may play in fissure formation

  11. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    ERIC Educational Resources Information Center

    Haddad, David Elias

    2014-01-01

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that…

  12. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
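
    The affine dependence on coverage is easy to reproduce with a textbook duplex Markov model (not the paper's specific system). With per-unit failure rate lambda and coverage c, the chain {both up} -> {one covered failure} -> {system failure} has the closed-form reliability R(t) = e^(-2*lambda*t) + 2c*(e^(-lambda*t) - e^(-2*lambda*t)), whose unreliability for small lambda*t is approximately 2*(1-c)*lambda*t: a single-point-failure-like term that vanishes only at perfect coverage.

    import numpy as np

    def duplex_reliability(t, lam=1e-4, c=0.95):
        # closed-form solution of the three-state Markov chain above
        return (np.exp(-2 * lam * t)
                + 2 * c * (np.exp(-lam * t) - np.exp(-2 * lam * t)))

    t = 100.0                        # early life: lam*t << 1
    for c in (0.90, 0.95, 1.00):
        print(f"c = {c:.2f}  R(t) = {duplex_reliability(t, c=c):.8f}")
    # R(t) is affine in c; at c = 1 the linear (single-point) term disappears
    # and only the ~(lam*t)^2 double-failure term remains.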

  13. High-Resolution Seismic Reflection Imaging of the Reelfoot Fault, New Madrid, Missouri

    NASA Astrophysics Data System (ADS)

    Rosandich, B.; Harris, J. B.; Woolery, E. W.

    2017-12-01

    Earthquakes in the Lower Mississippi Valley are mainly concentrated in the New Madrid Seismic Zone and are associated with reactivated faults of the Reelfoot Rift. Determining the relationship between the seismogenic faults (in crystalline basement rocks) and deformation at the Earth's surface and in the shallow subsurface has remained an active research topic for decades. An integrated seismic data set, including compressional (P-) wave and shear (S-) wave seismic reflection profiles, was collected in New Madrid, Missouri, across the "New Madrid" segment of the Reelfoot Fault, whose most significant rupture produced the M 7.5, February 7, 1812, New Madrid earthquake. The seismic reflection profiles (215 m long) were centered on the updip projection of the fault, which is associated with a surface drainage feature (Des Cyprie Slough) located at the base of a prominent east-facing escarpment. The seismic reflection profiles were collected using 48-channel (P-wave) and 24-channel (S-wave) towable landstreamer acquisition equipment. Seismic energy was generated by five vertical impacts of a 1.8-kg sledgehammer on a small aluminum plate for the P-wave data and five horizontal impacts of the sledgehammer on a 10-kg steel I-beam for the S-wave data. Interpretation of the profiles shows a west-dipping reverse fault (Reelfoot Fault) that propagates upward from Paleozoic sedimentary rocks (>500 m deep) to near-surface Quaternary sediments (<10 m deep). The hanging wall of the fault is anticlinally folded, a structural setting almost identical to that imaged on the Kentucky Bend and Reelfoot Lake segments (of the Reelfoot Fault) to the south.

  14. Strain-dependent Damage Evolution and Velocity Reduction in Fault Zones Induced by Earthquake Rupture

    NASA Astrophysics Data System (ADS)

    Zhong, J.; Duan, B.

    2009-12-01

    Low-velocity fault zones (LVFZs) with reduced seismic velocities relative to the surrounding wall rocks are widely observed around active faults. The presence of such a zone will affect rupture propagation, near-field ground motion, and off-fault damage in subsequent earthquakes. In this study, we quantify the reduction of seismic velocities caused by dynamic rupture on a 2D planar fault surrounded by a low-velocity fault zone. First, we implement the damage rheology (Lyakhovsky et al. 1997) in EQdyna (Duan and Oglesby 2006), an explicit dynamic finite element code. We further extend this damage rheology model to include the dependence of strains on crack density. Then, we quantify the off-fault continuum damage distribution and velocity reduction induced by earthquake rupture in the presence of a preexisting LVFZ. We find that the presence of a LVFZ affects the spatio-temporal distributions of off-fault damage. Because some damage parameters are poorly constrained, we further investigate the relationship between velocity reduction and these damage parameters with a large suite of numerical simulations. Slip velocity, slip, and near-field ground motions computed with damage rheology are also compared with those from off-fault elastic or elastoplastic responses. We find that the reduction in elastic moduli during dynamic rupture has a profound impact on these quantities.

  15. Sharing Earth Observation Data for Health Management

    NASA Astrophysics Data System (ADS)

    Cox, E. L., Jr.

    2015-12-01

    While the global community is struck by pandemics and epidemics from time to time, the ability to fully utilize earth observations and integrate environmental information has been limited - until recently. Mature science understanding is allowing new levels of situational awareness to become possible when and if the relevant data are available and shared in a timely and usable manner. Satellite and other remote sensing tools have been used to observe, monitor, assess and predict weather and water impacts for decades. In the last few years much of this has included a focus on the ability to monitor changes on climate scales that suggest changes in the quantity and quality of ecosystem resources, or the "one-health" approach, where trans-disciplinary links between environmental, animal, and vegetative health may provide indications of the best ways to manage susceptibility to infectious disease or outbreaks. But the scale of impacts and the availability of information from earth observing satellites, airborne platforms, health tracking systems, and surveillance networks offer new integrated tools. This presentation will describe several recent events, such as Superstorm Sandy in the United States and the Ebola outbreak in Africa, where public health and health infrastructure have been exposed to environmental hazards, and where lessons learned from disaster response about the ability to share data have been effective in risk reduction.

  16. Geomorphic expression of strike-slip faults: field observations vs. analog experiments: preliminary results

    NASA Astrophysics Data System (ADS)

    Hsieh, S. Y.; Neubauer, F.; Genser, J.

    2012-04-01

    The aim of this project is to study the surface expression of strike-slip faults, with the main aim of finding rules for how these structures can be extrapolated to depth. In a first step, several basic properties of the fault architecture are in focus: (1) Is it possible to define the fault architecture by studying surface structures of the damage zone vs. the fault core, particularly the width of the damage zone? (2) Which second-order structures define the damage zone of strike-slip faults, and how do these relate to those reported from basement-fault strike-slip analog experiments? (3) Besides classical fault-bend structures, is there a systematic along-strike variation of the damage zone width, and to which properties does this variation relate? We study the above-mentioned properties on the dextral Altyn fault, one of the largest strike-slip faults on Earth, which has the advantage of having developed in a fully arid climate. The Altyn fault includes a ca. 250 to 600 m wide fault valley, usually with the trace of the active fault in its center. The fault valley is confined by basement highs, from which alluvial fans develop towards the center of the fault valley. The active fault trace is marked by small-scale pressure ridges and offset alluvial fans. The basement highs confining the fault valley are several kilometers long and ca. 0.5 to 1 km wide, and are bounded by rotated dextral anti-Riedel faults and internally structured by a regular fracture pattern. Dextral anti-Riedel faults are often cut by Riedel faults. Consequently, the Altyn fault comprises a several-km-wide damage zone. The fault core zone is a barrier to fluid flow, and the few springs of the region are located on the margin of the fault valley, implying that the fractured basement highs act as the reservoir. Consequently, the southern Silk Road used the Altyn fault valley. The preliminary data show that two or more orders of structures exist. Small-scale structures develop during a single earthquake. These finally

  17. Fault finder

    DOEpatents

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real-time monitoring of background noise and improved filtering of input signals are used to identify the occurrence of a fault. A fault is detected at both a master and a remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
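
    The abstract does not spell out the distance calculation; a common two-terminal scheme consistent with the synchronized master/remote arrangement locates the fault from the difference in arrival times of the fault transient at the two ends. A sketch under that assumption, with invented line parameters:

    # master at x = 0, remote at x = L; values are illustrative only
    LINE_LENGTH_KM = 120.0
    WAVE_SPEED_KM_PER_US = 0.29    # assumed propagation speed on the line

    def fault_distance_km(t_master_us, t_remote_us,
                          L=LINE_LENGTH_KM, v=WAVE_SPEED_KM_PER_US):
        # The transient travels x to one end and L - x to the other, so
        # t_master - t_remote = (x - (L - x)) / v  =>  x = (L + v*dt) / 2
        dt = t_master_us - t_remote_us
        return 0.5 * (L + v * dt)

    # fault 45 km from the master: arrivals at 45/v and 75/v microseconds
    tm = 45.0 / WAVE_SPEED_KM_PER_US
    tr = 75.0 / WAVE_SPEED_KM_PER_US
    print(fault_distance_km(tm, tr))   # -> 45.0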

  18. A Dynamic Finite Element Method for Simulating the Physics of Fault Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic finite element method using a novel high-level scripting language to describe the physical equations, boundary conditions, and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using a Verlet-type scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems described in that work to the finite element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new finite element model, single- and multi-fault simulation examples are presented.
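
    As a cartoon of slip- and slip-rate-dependent friction (a single spring-block analogue, not the paper's finite element formulation), the sketch below integrates a rate-weakening friction law with a simple explicit update and counts stick-slip events; all parameter values are invented.

    import numpy as np

    m, k, N = 1.0, 1.0, 1.0          # block mass, loading stiffness, normal force
    mu_s, mu_d, v0 = 0.6, 0.4, 0.1   # static/dynamic friction, weakening velocity
    V_plate, dt, n_steps = 1e-3, 1e-2, 200_000

    def mu(v):
        # rate weakening: mu_s at rest, decaying toward mu_d with slip rate
        return mu_d + (mu_s - mu_d) / (1.0 + abs(v) / v0)

    x, v, events = 0.0, 0.0, 0
    for i in range(n_steps):
        load = k * (V_plate * i * dt - x)          # spring force from plate motion
        if v == 0.0 and abs(load) <= mu_s * N:     # stuck: static friction holds
            a = 0.0
        else:
            direction = np.sign(v) if v != 0.0 else np.sign(load)
            a = (load - direction * mu(v) * N) / m
        v_new = v + a * dt                         # explicit velocity update
        if v > 0.0 and v_new <= 0.0:               # block re-sticks: one event
            v_new = 0.0
            events += 1
        x += v_new * dt                            # position update
        v = v_new
    print("stick-slip events:", events)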

  19. Models meet data: Challenges and opportunities in implementing land management in Earth system models.

    PubMed

    Pongratz, Julia; Dolman, Han; Don, Axel; Erb, Karl-Heinz; Fuchs, Richard; Herold, Martin; Jones, Chris; Kuemmerle, Tobias; Luyssaert, Sebastiaan; Meyfroidt, Patrick; Naudts, Kim

    2018-04-01

    As the applications of Earth system models (ESMs) move from general climate projections toward questions of mitigation and adaptation, the inclusion of land management practices in these models becomes crucial. We carried out a survey among modeling groups to show an evolution from models able only to deal with land-cover change to more sophisticated approaches that allow also for the partial integration of land management changes. For the longer term, a comprehensive land management representation can be anticipated for all major models. To guide the prioritization of implementation, we evaluate ten land management practices - forestry harvest, tree species selection, grazing and mowing harvest, crop harvest, crop species selection, irrigation, wetland drainage, fertilization, tillage, and fire - for (1) their importance for the Earth system, (2) the possibility of implementing them in state-of-the-art ESMs, and (3) the availability of required input data. Matching these criteria, we identify "low-hanging fruits" for inclusion in ESMs, such as basic implementations of crop and forestry harvest and fertilization. We also identify research requirements for specific communities to address the remaining land management practices. Data availability severely hampers modeling the most extensive land management practice, grazing and mowing harvest, and is a limiting factor for a comprehensive implementation of most other practices. Inadequate process understanding hampers even a basic assessment of crop species selection and tillage effects. The need for multiple advanced model structures will be the challenge for a comprehensive implementation of most practices but considerable synergy can be gained using the same structures for different practices. A continuous and closer collaboration of the modeling, Earth observation, and land system science communities is thus required to achieve the inclusion of land management in ESMs. © 2017 John Wiley & Sons Ltd.

  20. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    NASA Astrophysics Data System (ADS)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  1. Digital release of the Alaska Quaternary fault and fold database

    NASA Astrophysics Data System (ADS)

    Koehler, R. D.; Farrell, R.; Burns, P.; Combellick, R. A.; Weakland, J. R.

    2011-12-01

    The Alaska Division of Geological & Geophysical Surveys (DGGS) has designed a Quaternary fault and fold database for Alaska in conformance with standards defined by the U.S. Geological Survey for the National Quaternary fault and fold database. Alaska is the most seismically active region of the United States; however, little information exists on the location, style of deformation, and slip rates of Quaternary faults. Thus, to provide an accurate, user-friendly, reference-based fault inventory to the public, we are producing a digital GIS shapefile of Quaternary fault traces and compiling summary information on each fault. Here, we present relevant information pertaining to the digital GIS shapefile and the online access and availability of the Alaska database. This database will be useful for engineering geologic studies; geologic, geodetic, and seismic research; and policy planning. The data will also contribute to the fault source database being constructed by the Global Earthquake Model (GEM) Faulted Earth project, which is developing tools to better assess earthquake risk. We derived the initial list of Quaternary active structures from The Neotectonic Map of Alaska (Plafker et al., 1994) and supplemented it with more recent data where available. Due to the limited level of knowledge on Quaternary faults in Alaska, pre-Quaternary fault traces from the Plafker map are shown as a layer in our digital database so users may view a more accurate distribution of mapped faults, and to suggest the possibility that some older traces may be active yet un-studied. The database will be updated as new information is developed. We selected each fault by reviewing the literature and georegistered the faults from 1:250,000-scale paper maps contained in 1970s-vintage and earlier bedrock maps. However, paper map scales range from 1:20,000 to 1:500,000. Fault parameters in our GIS fault attribute tables include fault name, age, slip rate, slip sense, dip direction, fault line type

  2. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.

  3. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
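
    A toy version of the backup-grid scheme, assuming a 2x2 block-mean restriction, nearest-neighbour prolongation, and an invented error tolerance; the shallow water dynamics are omitted and a single corrupted value stands in for a hardware fault.

    import numpy as np

    def restrict(field):     # 2x2 block means: model grid -> coarse backup grid
        n, m = field.shape
        return field.reshape(n // 2, 2, m // 2, 2).mean(axis=(1, 3))

    def prolong(coarse):     # nearest-neighbour copy: backup grid -> model grid
        return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

    h = np.sin(np.linspace(0.0, np.pi, 64))[:, None] * np.ones(64)  # smooth field
    backup = restrict(h)                    # coarse copy stored at the last check

    h[10, 20] = 1e30                        # bit-flip-like corruption

    err = np.abs(restrict(h) - backup)      # cheap check on the backup grid
    if err.max() > 1e-3:
        bad = prolong(err > 1e-3)           # flag the affected fine-grid cells
        h = np.where(bad, prolong(backup), h)   # restore from the coarse copy
    print("max |h| after restore:", np.abs(h).max())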

  4. Fault zone hydrogeology

    NASA Astrophysics Data System (ADS)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (< 1 km) introduces permeability heterogeneity and anisotropy, which has an important impact on processes such as regional groundwater flow, hydrocarbon migration, and hydrothermal fluid circulation. Fault zones have the capacity to be hydraulic conduits connecting shallow and deep geological environments, but simultaneously the fault cores of many faults often form effective barriers to flow. The direct evaluation of the impact of faults on fluid flow patterns remains a challenge and requires a multidisciplinary research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface- and subsurface observations from diverse rock types from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and

  5. Geomorphic and Structural Evidence for Rolling Hinge Style Deformation in the Footwall of an Active Low Angle Normal Fault, Mai'iu Fault, Woodlark Rift, SE Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Mizera, M.; Little, T.; Norton, K. P.; Webber, S.; Ellis, S. M.; Oesterle, J.

    2016-12-01

    While shown to operate in oceanic crust, rolling hinge style deformation remains a debated process in metamorphic core complexes (MCCs) in the continents. The model predicts that unloading and isostatic uplift during slip causes a progressive back-tilting in the upper crust of a normal fault that is more steeply dipping at depth. The Mai'iu Fault in the Woodlark Rift, SE Papua New Guinea, is one of the best-exposed and fastest slipping (probably >7 mm/yr) active low-angle normal faults (LANFs) on Earth. We analysed structural field data from this fault's exhumed slip surface and footwall, together with geomorphic data interpreted from aerial photographs and GeoSAR-derived digital elevation models (gridded at 5-30 m spacing), to evaluate deformational processes affecting the rapidly exhuming, domal-shaped detachment fault. The exhumed fault surface emerges from the ground at the rangefront near sea level with a northward dip of 21°. Up-dip, it is well-preserved, smooth and corrugated, with some fault remnants extending at least 29 km in the slip direction. The surface flattens over the crest of the dome, beyond where it dips S at up to 15°. Wind gaps perched on the crestal main divide of the dome indicate both up-dip tectonic advection and progressive back-tilting of the exhuming fault surface. We infer that slip on a serial array of m-to-km scale up-to-the-north, steeply S-dipping (~75°) antithetic-sense normal faults accommodated some of the exhumation-related, inelastic bending of the footwall. These geomorphically well-expressed faults strike parallel to the main Mai'iu fault at 110.9±5°, have a mean cross-strike spacing of 1520 m, and slip with a consistent up-to-the-north sense of throw ranging from <5 m to 120 m. Apparently the Mai'iu Fault was able to continue slipping despite having to negotiate this added fault-roughness. We interpret the antithetic faulting to result from bending stresses, and to provide the first clear examples of rolling hinge

  6. Interactions between Polygonal Normal Faults and Larger Normal Faults, Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Pham, T. Q. H.; Withjack, M. O.; Hanafi, B. R.

    2017-12-01

    Polygonal faults, small normal faults with polygonal arrangements that form in fine-grained sedimentary rocks, can influence ground-water flow and hydrocarbon migration. Using well and 3D seismic-reflection data, we have examined the interactions between polygonal faults and larger normal faults on the passive margin of offshore Nova Scotia, Canada. The larger normal faults strike approximately E-W to NE-SW. Growth strata indicate that the larger normal faults were active in the Late Cretaceous (i.e., during the deposition of the Wyandot Formation) and during the Cenozoic. The polygonal faults were also active during the Cenozoic because they affect the top of the Wyandot Formation, a fine-grained carbonate sedimentary rock, and the overlying Cenozoic strata. Thus, the larger normal faults and the polygonal faults were both active during the Cenozoic. The polygonal faults far from the larger normal faults have a wide range of orientations. Near the larger normal faults, however, most polygonal faults have preferred orientations, either striking parallel or perpendicular to the larger normal faults. Some polygonal faults nucleated at the tip of a larger normal fault, propagated outward, and linked with a second larger normal fault. The strike of these polygonal faults changed as they propagated outward, ranging from parallel to the strike of the original larger normal fault to orthogonal to the strike of the second larger normal fault. These polygonal faults hard-linked the larger normal faults at and above the level of the Wyandot Formation but not below it. We argue that the larger normal faults created stress-enhancement and stress-reorientation zones for the polygonal faults. Numerous small, polygonal faults formed in the stress-enhancement zones near the tips of larger normal faults. Stress-reorientation zones surrounded the larger normal faults far from their tips. Fewer polygonal faults are present in these zones, and, more importantly, most polygonal faults

  7. Enhancing Earth Observation and Modeling for Tsunami Disaster Response and Management

    NASA Astrophysics Data System (ADS)

    Koshimura, Shunichi; Post, Joachim

    2017-04-01

    In the aftermath of catastrophic natural disasters, such as earthquakes and tsunamis, our society has experienced significant difficulties in assessing disaster impact in a limited amount of time. In recent years, the quality of satellite sensors and access to and use of satellite imagery and services has greatly improved. More and more space agencies have embraced data-sharing policies that facilitate access to archived and up-to-date imagery. Tremendous progress has been achieved through the continuous development of powerful algorithms and software packages to manage and process geospatial data and to disseminate imagery and geospatial datasets in near-real time via geo-web-services, which can be used in disaster-risk management and emergency response efforts. Satellite Earth observations now offer consistent coverage and scope to provide a synoptic overview of large areas, repeated regularly. These can be used to compare risk across different countries, day and night, in all weather conditions, and in trans-boundary areas. On the other hand, with the use of modern computing power and advanced sensor networks, great advances in real-time simulation have been achieved. The data and information derived from satellite Earth observations, integrated with in situ information and simulation modeling, provide unique value and the necessary complement to socio-economic data. Emphasis also needs to be placed on ensuring space-based data and information are used in existing and planned national and local disaster risk management systems, together with other data and information sources, as a way to strengthen the resilience of communities. Through case studies of the 2011 Great East Japan earthquake and tsunami disaster, we aim to discuss how earth observations and modeling, in combination with local, in situ data and information sources, can support the decision-making process before, during and after a disaster strikes.

  8. Layered clustering multi-fault diagnosis for hydraulic piston pump

    NASA Astrophysics Data System (ADS)

    Du, Jun; Wang, Shaoping; Zhang, Haiyan

    2013-04-01

    Efficient diagnosis is very important for improving the reliability and performance of an aircraft hydraulic piston pump, and it is one of the key technologies in prognostic and health management systems. In practice, due to the harsh working environment and heavy working loads, multiple faults of an aircraft hydraulic pump may occur simultaneously after long operation. However, most existing diagnosis methods can only distinguish pump faults that occur individually. Therefore, a new method needs to be developed to realize effective diagnosis of simultaneous multiple faults in an aircraft hydraulic pump. In this paper, a new method based on a layered clustering algorithm is proposed to diagnose multiple faults of an aircraft hydraulic pump that occur simultaneously. Intensive failure mechanism analyses of the five main types of faults are carried out, and based on these analyses the optimal combination and layout of diagnostic sensors is attained. A three-layered diagnosis reasoning engine is designed according to the faults' risk priority numbers and the characteristics of different fault feature extraction methods. The most serious failures are first distinguished with individual signal processing. For the subtler faults, i.e., swash plate eccentricity and incremental clearance increases between piston and slipper, a clustering diagnosis algorithm based on the statistical average relative power difference (ARPD) is proposed. By effectively enhancing the fault features of these two faults, the ARPDs calculated from vibration signals are employed to complete the hypothesis testing. The ARPDs of the different faults follow different probability distributions. Compared with the classical fast Fourier transform-based spectrum diagnosis method, the experimental results demonstrate that the proposed algorithm can diagnose multiple faults, occurring synchronously, with higher precision and reliability.
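
    The paper's exact ARPD definition is not reproduced here; the sketch below assumes one plausible reading (band powers of the vibration spectrum compared against a healthy baseline and averaged over bands) with synthetic signals.

    import numpy as np

    fs, n = 10_000, 2**14
    t = np.arange(n) / fs
    rng = np.random.default_rng(1)

    healthy = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.standard_normal(n)
    faulty = healthy + 0.4 * np.sin(2 * np.pi * 850 * t)   # added fault tone

    def band_powers(x, n_bands=16):
        p = np.abs(np.fft.rfft(x)) ** 2
        return np.array([seg.sum() for seg in np.array_split(p, n_bands)])

    def arpd(x, baseline, n_bands=16):
        pb = band_powers(baseline, n_bands)
        px = band_powers(x, n_bands)
        return np.mean((px - pb) / pb)     # average relative power difference

    print("ARPD, healthy vs baseline:", arpd(healthy, healthy))  # 0 by construction
    print("ARPD, faulty  vs baseline:", arpd(faulty, healthy))   # clearly elevated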

  9. Map and Data for Quaternary Faults and Fault Systems on the Island of Hawai`i

    USGS Publications Warehouse

    Cannon, Eric C.; Burgmann, Roland; Crone, Anthony J.; Machette, Michael N.; Dart, Richard L.

    2007-01-01

    and catalog of data, both in Adobe Acrobat PDF format. The senior authors (Eric C. Cannon and Roland Burgmann) compiled the fault data as part of ongoing studies of active faulting on the Island of Hawai`i. The USGS is responsible for organizing and integrating the State or regional products under their National Seismic Hazard Mapping project, including the coordination and oversight of contributions from individuals and groups (Michael N. Machette and Anthony J. Crone), database design and management (Kathleen M. Haller), and digitization and analysis of map data (Richard L. Dart). After being released as an Open-File Report, the data in this report will be available online at http://earthquake.usgs.gov/regional/qfaults/, the USGS Quaternary Fault and Fold Database of the United States.

  10. Fault Diagnosis of Power Systems Using Intelligent Systems

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Oliver, Walter E. , Jr.

    1996-01-01

    The power system operator's need for a reliable power delivery system calls for a real-time or near-real-time AI-based fault diagnosis tool. Such a tool will allow NASA ground controllers to re-establish a normal or near-normal degraded operating state of the EPS (a DC power system) for Space Station Alpha by isolating the faulted branches and loads of the system, and, after isolation, re-energizing those branches and loads that have been found not to have any faults in them. A proposed solution involves using the Fault Diagnosis Intelligent System (FDIS) to perform near-real-time fault diagnosis of Alpha's EPS by downloading power transient telemetry at fault-time from onboard data loggers. The FDIS uses an ANN clustering algorithm augmented with a wavelet transform feature extractor. This combination enables the system to perform pattern recognition of the power transient signatures to diagnose the fault type and its location down to the orbital replaceable unit. FDIS has been tested using a simulation of the LeRC Testbed Space Station Freedom configuration, including the topology from the DDCUs to the electrical loads attached to the TPDUs. FDIS will work in conjunction with the Power Management Load Scheduler to determine what the state of the system was at the time of the fault condition. This information is used to activate the appropriate diagnostic section and to refine, if necessary, the solution obtained. In the latter case, if the FDIS reports back that the faulty device is equally likely to be 'star tracker #1' or the 'time generation unit,' then based on a priori knowledge of the system's state, the refined solution would be 'star tracker #1' located in cabinet ITAS2. It is concluded from the present studies that artificial intelligence diagnostic abilities are improved with the addition of the wavelet transform, and that when a system such as FDIS is coupled to the Power Management Load Scheduler, a faulty device can be located and isolated
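
    A rough stand-in for the FDIS signal path, with a Haar wavelet energy extractor and a nearest-centroid match replacing the wavelet front end and ANN clustering described above; the transient shapes and class names are invented.

    import numpy as np

    def haar_level_energies(x, levels=6):
        # energy of the Haar detail coefficients at each scale, normalized
        x = np.asarray(x, dtype=float)
        feats = []
        for _ in range(levels):
            approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half-band
            detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half-band
            feats.append(np.sum(detail ** 2))
            x = approx
        feats = np.array(feats)
        return feats / feats.sum()

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 1024)
    sag = np.exp(-5 * t) + 0.05 * rng.standard_normal(t.size)     # slow transient
    spike = (t > 0.5) * np.exp(-200 * (t - 0.5)) \
            + 0.05 * rng.standard_normal(t.size)                  # fast transient

    prototypes = {"load fault": haar_level_energies(sag),
                  "switching fault": haar_level_energies(spike)}

    unknown = np.exp(-5 * t) + 0.05 * rng.standard_normal(t.size)
    f = haar_level_energies(unknown)
    label = min(prototypes, key=lambda k: np.linalg.norm(f - prototypes[k]))
    print("diagnosed:", label)   # -> load fault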

  11. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the TWT cathode is a common transmitter fault. In this work, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), has a certain reference value for the overall health judgment of TWT transmitters.

  12. Fault-Tolerant Heat Exchanger

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Crowley, Christopher J.

    2005-01-01

    A compact, lightweight heat exchanger has been designed to be fault-tolerant in the sense that a single-point leak would not cause mixing of heat-transfer fluids. This particular heat exchanger is intended to be part of the temperature-regulation system for habitable modules of the International Space Station and to function with water and ammonia as the heat-transfer fluids. The basic fault-tolerant design is adaptable to other heat-transfer fluids and heat exchangers for applications in which mixing of heat-transfer fluids would pose toxic, explosive, or other hazards: Examples could include fuel/air heat exchangers for thermal management on aircraft, process heat exchangers in the cryogenic industry, and heat exchangers used in chemical processing. The reason this heat exchanger can tolerate a single-point leak is that the heat-transfer fluids are everywhere separated by a vented volume and at least two seals. The combination of fault tolerance, compactness, and light weight is implemented in a unique heat-exchanger core configuration: Each fluid passage is entirely surrounded by a vented region bridged by solid structures through which heat is conducted between the fluids. Precise, proprietary fabrication techniques make it possible to manufacture the vented regions and heat-conducting structures with very small dimensions to obtain a very large coefficient of heat transfer between the two fluids. A large heat-transfer coefficient favors compact design by making it possible to use a relatively small core for a given heat-transfer rate. Calculations and experiments have shown that in most respects, the fault-tolerant heat exchanger can be expected to equal or exceed the performance of the non-fault-tolerant heat exchanger that it is intended to supplant (see table). The only significant disadvantages are a slight weight penalty and a small decrease in the mass-specific heat transfer.

  13. Hydrogeochemistry Characteristics and Daily Variation of Geothermal Water in the Moxi Fault, Southwest of China

    NASA Astrophysics Data System (ADS)

    Qi, Jihong; Xu, Mo; An, Chenjiao; Zhang, Yunhui; Zhang, Qiang

    2017-04-01

    The Xianshuihe Fault, with frequent earthquake activity, is a major deep regional fault in China. The Moxi Fault is the southern part of the Xianshuihe Fault, where active geothermal waters can carry abundant information about the deep crust. In this article, samples from typical geothermal springs were collected along the Moxi Fault from Kangding to Shimian. Using the Na-K-Mg equilibrium diagram, we assess the state of water-rock equilibrium and estimate reservoir temperatures with appropriate geothermometers. Based on the relationship between the enthalpy and chloride concentration of geothermal water, we analyze the mixing of thermal water with shallow groundwater. Moreover, the response of geothermal water to solid Earth tides is considered in order to study the hydrothermal activity of this fault. Guanding, in Kangding, is considered the center of the geothermal system, and hydrothermal activity decreases southward. Geothermal water may be heated by a deep heat source in the Himalayan granites, while the springs in the southern area show mixing with thermal water in a sub-reservoir in the Permian crystalline limestone. This work improves the understanding of hydrothermal activity in the Moxi Fault; monitoring variations in geothermal water may become an important method for studying the deep Earth environment in the future.
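
    As one concrete example of the geothermometry step, the Na-K cation geothermometer in the Giggenbach (1988) form can be evaluated directly; the spring compositions below are illustrative, not the paper's Moxi Fault analyses.

    import numpy as np

    def na_k_temperature_c(na_mg_kg, k_mg_kg):
        # Giggenbach (1988): T(C) = 1390 / (1.75 + log10(Na/K)) - 273.15,
        # with Na and K concentrations in mg/kg
        return 1390.0 / (1.75 + np.log10(na_mg_kg / k_mg_kg)) - 273.15

    samples = {"spring A": (420.0, 42.0),    # hypothetical (Na, K) in mg/kg
               "spring B": (310.0, 18.0)}
    for name, (na, k) in samples.items():
        print(name, round(na_k_temperature_c(na, k), 1), "deg C")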

  14. Fault tree models for fault tolerant hypercube multiprocessors

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Tuazon, Jezus O.

    1991-01-01

    Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
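    For intuition on such reliability comparisons, a deliberately simplified sketch: if a hypercube carries s spare nodes and node failures are independent and identical, system reliability reduces to a binomial sum. The paper's HARP/fault-tree models additionally capture sequence-dependent behavior, which this closed form cannot; the numbers below are illustrative.

    ```python
    # Minimal reliability sketch under strong simplifying assumptions:
    # independent, identical nodes; the system survives if at most s of n fail.
    from math import comb

    def r_with_spares(n: int, s: int, r_node: float) -> float:
        """P(at most s of n i.i.d. nodes fail)."""
        return sum(comb(n, k) * (1 - r_node)**k * r_node**(n - k)
                   for k in range(s + 1))

    print(r_with_spares(n=16, s=2, r_node=0.95))  # 16-node cube, 2 spares
    ```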

  15. Fault Tree Analysis.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Rowe, Theresa A; Abecassis, Zachary A; Apley, Daniel W; Barnard, Cynthia; Holl, Jane L

    The purpose of this study was to use fault tree analysis to evaluate the adequacy of quality reporting programs in identifying root causes of postoperative bloodstream infection (BSI). A systematic review of the literature was used to construct a fault tree to evaluate 3 postoperative BSI reporting programs: National Surgical Quality Improvement Program (NSQIP), Centers for Medicare and Medicaid Services (CMS), and The Joint Commission (JC). The literature review revealed 699 eligible publications, 90 of which were used to create the fault tree containing 105 faults. A total of 14 identified faults are currently mandated for reporting to NSQIP, 5 to CMS, and 3 to JC; only 4 identified faults are required by 2 or more programs. The fault tree identifies numerous contributing faults to postoperative BSI and reveals substantial variation in the requirements and ability of national quality data reporting programs to capture these potential faults. Efforts to prevent postoperative BSI require more comprehensive data collection to identify the root causes and develop high-reliability improvement strategies.
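    As a reminder of how a fault tree is evaluated quantitatively, a toy sketch assuming independent basic events (an OR gate is 1 − ∏(1 − p_i); an AND gate is ∏ p_i). The published tree has 105 faults; the events and probabilities here are invented for illustration:

    ```python
    # Illustrative fault-tree evaluation under an independence assumption.
    from functools import reduce

    def p_or(*ps: float) -> float:
        return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

    def p_and(*ps: float) -> float:
        return reduce(lambda acc, p: acc * p, ps, 1.0)

    # Toy subtree: BSI from line contamination OR (skin-prep failure AND missed audit)
    p_bsi = p_or(0.01, p_and(0.05, 0.20))
    print(f"top-event probability: {p_bsi:.4f}")
    ```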

  16. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    NASA Astrophysics Data System (ADS)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators is proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by using only the information of the vehicle itself. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.
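    The DARE-based gain design step can be sketched in a few lines, assuming SciPy is available; the A, B, Q, R below are placeholders, not the paper's multi-vehicle model:

    ```python
    # Sketch of a DARE-based feedback gain: solve the discrete-time algebraic
    # Riccati equation for P, then form the stabilizing gain K.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical vehicle dynamics
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2)                             # state weighting
    R = np.eye(1)                             # input weighting

    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)  # u = -K x
    print("gain K:", K)
    ```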

  17. A fuzzy decision tree for fault classification.

    PubMed

    Zio, Enrico; Baraldi, Piero; Popescu, Irina C

    2008-02-01

    In plant accident management, the control room operators are required to identify the causes of the accident based on the different patterns of evolution developing in the monitored process variables. This task is often quite challenging, given the large number of process parameters monitored and the intense emotional states under which it is performed. To aid the operators, various techniques of fault classification have been engineered. An important requirement for their practical application is the physical interpretability of the relationships among the process variables underpinning the fault classification. In this view, the present work propounds a fuzzy approach to fault classification, which relies on fuzzy if-then rules inferred from the clustering of available preclassified signal data, which are then organized in a logical and transparent decision tree structure. The advantages offered by the proposed approach are precisely that a transparent fault classification model is mined out of the signal data and that the underlying physical relationships among the process variables are easily interpretable as linguistic if-then rules that can be explicitly visualized in the decision tree structure. The approach is applied to a case study regarding the classification of simulated faults in the feedwater system of a boiling water reactor.
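    A minimal sketch of the transparent if-then structure such an approach yields, with triangular membership functions over one monitored variable; the variable, membership parameters, and rule set are invented for illustration, not taken from the case study:

    ```python
    # Two linguistic rules over a normalized feedwater-flow reading.
    def tri(x: float, a: float, b: float, c: float) -> float:
        """Triangular membership with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def classify(flow: float) -> dict:
        low = tri(flow, 0.0, 0.2, 0.5)      # "flow is LOW"
        high = tri(flow, 0.5, 0.8, 1.0)     # "flow is HIGH"
        # Rule 1: IF flow is LOW  THEN fault is ValveBlockage
        # Rule 2: IF flow is HIGH THEN fault is SensorDrift
        return {"ValveBlockage": low, "SensorDrift": high}

    print(classify(0.3))
    ```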

  18. Buried shallow fault slip from the South Napa earthquake revealed by near-field geodesy.

    PubMed

    Brooks, Benjamin A; Minson, Sarah E; Glennie, Craig L; Nevitt, Johanna M; Dawson, Tim; Rubin, Ron; Ericksen, Todd L; Lockner, David; Hudnut, Kenneth; Langenheim, Victoria; Lutz, Andrew; Mareschal, Maxime; Murray, Jessica; Schwartz, David; Zaccone, Dana

    2017-07-01

    Earthquake-related fault slip in the upper hundreds of meters of Earth's surface has remained largely unstudied because of challenges measuring deformation in the near field of a fault rupture. We analyze centimeter-scale accuracy mobile laser scanning (MLS) data of deformed vine rows within ±300 m of the principal surface expression of the M (magnitude) 6.0 2014 South Napa earthquake. Rather than assuming that surface displacement is equivalent to fault slip, we invert the near-field data with a model that allows for, but does not require, the fault to be buried below the surface. The inversion maps the position on a preexisting fault plane of a slip front that terminates ~3 to 25 m below the surface coseismically and within a few hours postseismically. The lack of surface-breaching fault slip is verified by two trenches. We estimate near-surface slip ranging from ~0.5 to 1.25 m. Surface displacement can underestimate fault slip by as much as 30%. This implies that similar biases could be present in short-term geologic slip rates used in seismic hazard analyses. Along strike and downdip, we find deficits in slip: the along-strike deficit is erased after ~1 month by afterslip. We find no evidence of off-fault deformation and conclude that the downdip shallow slip deficit for this event is likely an artifact. As near-field geodetic data rapidly proliferate and become commonplace, we suggest that analyses of near-surface fault rupture should also use more sophisticated mechanical models and subsurface geomechanical tests.

  19. Probing Earth's State of Stress

    NASA Astrophysics Data System (ADS)

    Delorey, A. A.; Maceira, M.; Johnson, P. A.; Coblentz, D. D.

    2016-12-01

    The state of stress in the Earth's crust is a fundamental physical property that controls both engineered and natural systems. Engineered environments including those for hydrocarbon, geothermal energy, and mineral extraction, as well as those for storage of wastewater, carbon dioxide, and nuclear fuel, are as important as ever to our economy and environment. Yet, it is at spatial scales relevant to these activities where stress is least understood. Additionally, in engineered environments the rate of change in the stress field can be much higher than that of natural systems. In order to use subsurface resources more safely and effectively, we need to understand stress at the relevant temporal and spatial scales. We will present our latest results characterizing the state of stress in the Earth at scales relevant to engineered environments. Two important components of the state of stress are the orientation and magnitude of the stress tensor, and a measure of how close faults are to failure. The stress tensor at any point in a reservoir or repository has contributions from both far-field tectonic stress and local density heterogeneity. We jointly invert seismic (body and surface waves) and gravity data for a self-consistent model of elastic moduli and density and use the model to calculate the contribution of local heterogeneity to the total stress field. We then combine local and plate-scale contributions, using local indicators for calibration and ground-truth. In addition, we will present results from an analysis of the quantity and pattern of microseismicity as an indicator of critically stressed faults. Faults are triggered by transient stresses only when critically stressed (near failure). We show that tidal stresses can trigger earthquakes in both tectonic and reservoir environments and can reveal both stress and poroelastic conditions.
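    The "closeness to failure" measure can be made concrete with the Coulomb failure function, CFF = τ − μ(σ_n − p): resolve a stress tensor onto a fault plane and compare shear to effective frictional resistance. A hedged sketch with illustrative inputs (not inverted values from this work):

    ```python
    import numpy as np

    def coulomb_failure(stress: np.ndarray, normal: np.ndarray,
                        mu: float = 0.6, pore_pressure: float = 0.0) -> float:
        n = normal / np.linalg.norm(normal)
        traction = stress @ n
        sigma_n = float(traction @ n)                        # normal stress (compression +)
        tau = float(np.linalg.norm(traction - sigma_n * n))  # shear stress magnitude
        return tau - mu * (sigma_n - pore_pressure)          # >= 0 means at/near failure

    stress = np.diag([60.0, 40.0, 30.0])  # MPa, principal axes (made up)
    print(coulomb_failure(stress, normal=np.array([1.0, 1.0, 0.0])))
    ```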

  20. Large-Scale Multiphase Flow Modeling of Hydrocarbon Migration and Fluid Sequestration in Faulted Cenozoic Sedimentary Basins, Southern California

    NASA Astrophysics Data System (ADS)

    Jung, B.; Garven, G.; Boles, J. R.

    2011-12-01

    Major fault systems play a first-order role in controlling fluid migration in the Earth's crust, and also in the genesis/preservation of hydrocarbon reservoirs in young sedimentary basins undergoing deformation; therefore, understanding the geohydrology of faults is essential for the successful exploration of energy resources. For actively deforming systems like the Santa Barbara Basin and Los Angeles Basin, we have found it useful to develop computational geohydrologic models to study the various coupled and nonlinear processes affecting multiphase fluid migration, including relative permeability, anisotropy, heterogeneity, capillarity, pore pressure, and phase saturation, that affect hydrocarbon mobility within fault systems, and to explore the possible hydrogeologic conditions that enable the natural sequestration of prolific hydrocarbon reservoirs in these young basins. Subsurface geology, reservoir data (fluid pressure-temperature-chemistry), structural reconstructions, and seismic profiles provide important constraints for model geometry and parameter testing, and provide critical insight on how large-scale faults and aquifer networks influence the distribution and the hydrodynamics of liquid and gas-phase hydrocarbon migration. For example, pore pressure changes at a methane seepage site on the seafloor have been carefully analyzed to estimate large-scale fault permeability, which helps to constrain basin-scale natural gas migration models for the Santa Barbara Basin. We have developed our own 2-D multiphase finite element/finite IMPES numerical model, and successfully modeled hydrocarbon gas/liquid movement for intensely faulted and heterogeneous basin profiles of the Los Angeles Basin. Our simulations suggest that hydrocarbon reservoirs that are today aligned with the Newport-Inglewood Fault Zone were formed by massive hydrocarbon flows from deeply buried source beds in the central synclinal region during post-Miocene time. Fault permeability, capillarity
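    One of the coupled processes named above, relative permeability, is often parameterized with standard Corey-type curves; a sketch of that generic construction (not the authors' own functions, and with made-up endpoint saturations):

    ```python
    # Brooks-Corey-style relative permeability curves for a water/oil pair.
    def corey_krw(sw: float, swc: float = 0.2, sor: float = 0.2) -> float:
        """Water relative permeability from effective saturation."""
        se = min(max((sw - swc) / (1.0 - swc - sor), 0.0), 1.0)
        return se ** 4

    def corey_kro(sw: float, swc: float = 0.2, sor: float = 0.2) -> float:
        """Oil (non-wetting phase) relative permeability."""
        se = min(max((sw - swc) / (1.0 - swc - sor), 0.0), 1.0)
        return (1.0 - se) ** 2 * (1.0 - se ** 2)

    for sw in (0.3, 0.5, 0.7):
        print(sw, corey_krw(sw), corey_kro(sw))
    ```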

  1. Evolution of Information Management at the GSFC Earth Sciences (GES) Data and Information Services Center (DISC): 2006-2007

    NASA Technical Reports Server (NTRS)

    Kempler, Steven; Lynnes, Christopher; Vollmer, Bruce; Alcott, Gary; Berrick, Stephen

    2009-01-01

    Increasingly sophisticated National Aeronautics and Space Administration (NASA) Earth science missions have driven their associated data and data management systems from providing simple point-to-point archiving and retrieval to performing user-responsive distributed multisensor information extraction. To fully maximize the use of remote-sensor-generated Earth science data, NASA recognized the need for data systems that provide data access and manipulation capabilities responsive to research brought forth by advancing scientific analysis and the need to maximize the use and usability of the data. The decision by NASA to purposely evolve the Earth Observing System Data and Information System (EOSDIS) at the Goddard Space Flight Center (GSFC) Earth Sciences (GES) Data and Information Services Center (DISC) and other information management facilities was timely and appropriate. The GES DISC evolution focused on replacing the EOSDIS Core System (ECS) by reusing the in-house-developed, disk-based Simple, Scalable, Script-based Science Product Archive (S4PA) data management system and migrating data to the disk archives. The transition was completed in December 2007.

  2. Fault geometries in basement-induced wrench faulting under different initial stress states

    NASA Astrophysics Data System (ADS)

    Naylor, M. A.; Mandl, G.; Sijpesteijn, C. H. K.

    Scaled sandbox experiments were used to generate models for relative ages, dip, strike and three-dimensional shape of faults in basement-controlled wrench faulting. The basic fault sequence runs from early en échelon Riedel shears and splay faults through 'lower-angle' shears to P shears. The Riedel shears are concave upwards and define a tulip structure in cross-section. In three dimensions, each Riedel shear has a helicoidal form. The sequence of faults and three-dimensional geometry are rationalized in terms of the prevailing stress field and Coulomb-Mohr theory of shear failure. The stress state in the sedimentary overburden before wrenching begins has a substantial influence on the fault geometries and on the final complexity of the fault zone. With the maximum compressive stress (σ1) initially parallel to the basement fault (transtension), Riedel shears are only slightly en échelon, sub-parallel to the basement fault, steeply dipping with a reduced helicoidal aspect. Conversely, with σ1 initially perpendicular to the basement fault (transpression), Riedel shears are strongly oblique to the basement fault strike, have lower dips and an exaggerated helicoidal form; the final fault zone is both wide and complex. We find good agreement between the models and both mechanical theory and natural examples of wrench faulting.
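    The Coulomb-Mohr rationalization can be sketched numerically: shear failure planes form at ±(45° − φ/2) to σ1, so the expected Riedel-shear strike depends on how σ1 is initially oriented relative to the basement fault. The friction angle and σ1 orientations below are illustrative textbook values, not the experiments' measured parameters:

    ```python
    def riedel_strike_offsets(phi_deg: float, sigma1_to_fault_deg: float) -> tuple:
        """Angles of the two conjugate shears to the basement-fault strike."""
        failure = 45.0 - phi_deg / 2.0  # angle of each shear plane to sigma-1
        return (sigma1_to_fault_deg - failure, sigma1_to_fault_deg + failure)

    print(riedel_strike_offsets(phi_deg=30.0, sigma1_to_fault_deg=45.0))  # simple wrench
    print(riedel_strike_offsets(phi_deg=30.0, sigma1_to_fault_deg=0.0))   # transtension-like
    print(riedel_strike_offsets(phi_deg=30.0, sigma1_to_fault_deg=90.0))  # transpression-like
    ```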

  3. Using GIS in an Earth Sciences Field Course for Quantitative Exploration, Data Management and Digital Mapping

    ERIC Educational Resources Information Center

    Marra, Wouter A.; van de Grint, Liesbeth; Alberti, Koko; Karssenberg, Derek

    2017-01-01

    Field courses are essential for subjects like Earth Sciences, Geography and Ecology. In these topics, GIS is used to manage and analyse spatial data, and offers quantitative methods that are beneficial for fieldwork. This paper presents changes made to a first-year Earth Sciences field course in the French Alps, where new GIS methods were…

  4. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  5. Fault zone property near Xinfengjiang Reservoir using dense, across-fault seismic array

    NASA Astrophysics Data System (ADS)

    Lee, M. H. B.; Yang, H.; Sun, X.

    2017-12-01

    Properties of fault zones are important to the understanding of the earthquake process. Around a fault is a damage zone characterised by lower seismic velocity; it is detectable as a low-velocity zone and allows physical properties of the fault zone, which are otherwise difficult to sample directly, to be measured. A dense, across-fault array of short-period seismometers was deployed on an inactive fault near Xinfengjiang Reservoir. Local events were manually picked. By computing synthetic arrival times, we were able to constrain the parameters of the fault zone. Preliminary results show that the fault zone is around 350 m wide with a P and S velocity increase of around 10%. The fault is geologically inferred, and this result suggests that it may instead be a geological layer. The other possibility is that the higher velocity is caused by a combination of fault-zone healing and fluid intrusion. Whilst the result does not resolve the nature of the fault, it demonstrates that this method can derive properties of a fault zone.
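    The underlying constraint is simple: a ray crossing a zone of width w and velocity v_fz accumulates a time shift of w·(1/v_fz − 1/v_host) relative to the host rock, so width and velocity contrast trade off against each other in the picked arrivals. A back-of-envelope sketch with illustrative velocities:

    ```python
    def zone_time_shift(width_m: float, v_fz: float, v_host: float) -> float:
        """Arrival-time shift (s) for a ray crossing the zone at normal incidence."""
        return width_m * (1.0 / v_fz - 1.0 / v_host)

    # A ~10% S-velocity increase over 350 m, as in the preliminary result
    # (negative shift = earlier arrival):
    print(f"{zone_time_shift(350.0, v_fz=3.3e3, v_host=3.0e3)*1e3:.2f} ms")
    ```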

  6. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2015-01-01

    Over the last 10-15 years, significant advances have been made in information management, an increasing number of individuals have entered the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes nearly 150 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating the ESI activities among the space agencies. Keywords: Remote Sensing; Earth Science Informatics; Data Systems; Data Services; Metadata

  7. Evolving transpressional strain fields along the San Andreas fault in southern California: implications for fault branching, fault dip segmentation and strain partitioning

    NASA Astrophysics Data System (ADS)

    Bergh, Steffen; Sylvester, Arthur; Damte, Alula; Indrevær, Kjetil

    2014-05-01

    The San Andreas fault in southern California records only a few large-magnitude earthquakes in historic time, and recent activity is confined primarily to irregular and discontinuous strike-slip and thrust fault strands at shallow depths of ~5-20 km. Despite this fact, slip along the San Andreas fault is calculated at c. 35 mm/yr, based on c. 160 km of total right-lateral displacement on the southern segment of the fault in the last c. 8 Ma. Field observations also reveal complex fault strands and multiple events of deformation. The presently diffuse high-magnitude crustal movements may be explained by the deformation being largely distributed along more gently dipping reverse faults in fold-thrust belts, in contrast to regions to the north where deformation is less partitioned and localized to narrow strike-slip fault zones. In the Mecca Hills of the Salton trough, transpressional deformation of an uplifted segment of the San Andreas fault in the last ca. 4.0 My is expressed by very complex fault-oblique and fault-parallel (en echelon) folding, zones of uplift (fold-thrust belts), basement-involved reverse and strike-slip faults, and accompanying multiple and pervasive cataclasis and conjugate fracturing of Miocene to Pleistocene sedimentary strata. Our structural analysis of the Mecca Hills addresses the kinematic nature of the San Andreas fault and mechanisms of uplift and strain-stress distribution along bent fault strands. The San Andreas fault and subsidiary faults define a wide spectrum of kinematic styles, from steep localized strike-slip faults, to moderately dipping faults related to oblique en echelon folds, and gently dipping faults distributed in fold-thrust belt domains. Therefore, the San Andreas fault is not a through-going, steep strike-slip crustal structure, which is commonly the basis for crustal modeling and earthquake rupture models. The fault trace was steep initially, but was later multiphase deformed/modified by oblique en echelon folding

  8. Rule-based fault diagnosis of hall sensors and fault-tolerant control of PMSM

    NASA Astrophysics Data System (ADS)

    Song, Ziyou; Li, Jianqiu; Ouyang, Minggao; Gu, Jing; Feng, Xuning; Lu, Dongbin

    2013-07-01

    Hall sensors are widely used for estimating the rotor phase of permanent magnet synchronous motors (PMSM). Rotor position is an essential parameter of the PMSM control algorithm, so Hall sensor faults can be very dangerous, yet there is scarcely any research focusing on fault diagnosis and fault-tolerant control of Hall sensors used in PMSMs. From this standpoint, the Hall sensor faults which may occur during PMSM operation are theoretically analyzed. According to the analysis results, a fault diagnosis algorithm for the Hall sensors, based on three rules, is proposed to classify the fault phenomena accurately. Rotor phase estimation algorithms based on one or two Hall sensor(s) are developed to underpin the fault-tolerant control algorithm. The fault diagnosis algorithm can detect 60 Hall fault phenomena in total, and all detections can be completed within 1/138 of a rotor rotation period. The fault-tolerant control algorithm achieves smooth torque production, i.e., the same control effect as the normal control mode (with three Hall sensors). Finally, a PMSM bench test verifies the accuracy and rapidity of the fault diagnosis and fault-tolerant control strategies. The fault diagnosis algorithm detects all Hall sensor faults promptly, and the fault-tolerant control algorithm allows the PMSM to ride through failure of one or two Hall sensor(s). In addition, the transitions between healthy-control and fault-tolerant control conditions are smooth, without additional noise and harshness. The proposed algorithms can deal with Hall sensor faults of PMSMs in real applications and can be used to realize fault diagnosis and fault-tolerant control of PMSMs.
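    An illustrative rule-based check in the spirit of the diagnosis scheme: with three Hall sensors, the binary state must be one of six valid codes, and transitions must follow the commutation sequence. The specific sequence and rules below are a common convention, not necessarily the paper's:

    ```python
    VALID_SEQ = [0b001, 0b011, 0b010, 0b110, 0b100, 0b101]  # one electrical cycle

    def diagnose(prev_state: int, state: int) -> str:
        if prev_state not in VALID_SEQ or state not in VALID_SEQ:
            return "fault: invalid code"   # 000 or 111: a sensor is stuck
        i, j = VALID_SEQ.index(prev_state), VALID_SEQ.index(state)
        if state == prev_state or (j - i) % 6 in (1, 5):  # hold, or +/-1 step
            return "ok"
        return "fault: illegal transition"  # skipped state -> missed/false edge

    print(diagnose(0b001, 0b011))  # ok
    print(diagnose(0b001, 0b111))  # fault: invalid code
    print(diagnose(0b001, 0b010))  # fault: illegal transition
    ```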

  9. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2017-01-01

    Over the last 10-15 years, significant advances have been made in information management, an increasing number of individuals have entered the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes over 180 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) has been actively coordinating the ESI activities among the space agencies.

  10. The Susitna Glacier thrust fault: Characteristics of surface ruptures on the fault that initiated the 2002 Denali fault earthquake

    USGS Publications Warehouse

    Crone, A.J.; Personius, S.F.; Craw, P.A.; Haeussler, P.J.; Staft, L.A.

    2004-01-01

    The 3 November 2002 Mw 7.9 Denali fault earthquake sequence initiated on the newly discovered Susitna Glacier thrust fault and caused 48 km of surface rupture. Rupture of the Susitna Glacier fault generated scarps on ice of the Susitna and West Fork glaciers and on tundra and surficial deposits along the southern front of the central Alaska Range. Based on detailed mapping, 27 topographic profiles, and field observations, we document the characteristics and slip distribution of the 2002 ruptures and describe evidence of pre-2002 ruptures on the fault. The 2002 surface faulting produced structures that range from simple folds on a single trace to complex thrust-fault ruptures and pressure ridges on multiple, sinuous strands. The deformation zone is locally more than 1 km wide. We measured a maximum vertical displacement of 5.4 m on the south-directed main thrust. North-directed backthrusts have more than 4 m of surface offset. We measured a well-constrained near-surface fault dip of about 19° at one site, which is considerably less than seismologically determined values of 35°-48°. Surface-rupture data yield an estimated magnitude of Mw 7.3 for the fault, which is similar to the seismological value of Mw 7.2. Comparison of field and seismological data suggests that the Susitna Glacier fault is part of a large positive flower structure associated with northwest-directed transpressive deformation on the Denali fault. Prehistoric scarps are evidence of previous rupture of the Susitna Glacier fault, but additional work is needed to determine if past failures of the Susitna Glacier fault have consistently induced rupture of the Denali fault.

  11. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    NASA Astrophysics Data System (ADS)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier to the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations, because they alter quickly at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built in GoCad software from a detailed structural analysis of 6 fully cored and logged, 30-to-50 m long and 3-to-15 m spaced boreholes crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections into different intervals within the fault, using the SIMFIP probe, to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, which is known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  12. Mission Adaptive UAS Platform for Earth Science Resource Assessment

    NASA Technical Reports Server (NTRS)

    Dunagan, S.; Fladeland, M.; Ippolito, C.; Knudson, M.

    2015-01-01

    NASA Ames Research Center has led a number of important Earth science remote sensing missions, including several directed at the assessment of natural resources. A key asset for accessing high-risk airspace has been the 180 kg class SIERRA UAS platform, providing mission durations of up to 8 hrs at altitudes up to 3 km. Recent improvements to this mission capability are embodied in the incipient SIERRA-B variant. Two resource mapping problems having unusual mission characteristics requiring a mission adaptive capability are explored here. One example involves the requirement for careful control over solar angle geometry for passive reflectance measurements. This challenges the management of resources in the coastal ocean, where solar angle combines with sea state to produce surface glint that can obscure the ocean color signal. Furthermore, as for all scanning imager applications, the primary flight control priority to fly the UAS directly to the next waypoint must be balanced against the requirement to minimize roll and crab effects in the imagery. A second example involves the mapping of natural resources in the Earth's crust using precision magnetometry. In this case the vehicle flight path must be oriented to optimize magnetic flux gradients over a spatial domain having continually emerging features, while optimizing the efficiency of the spatial mapping task. These requirements were highlighted in several recent Earth Science missions, including the October 2013 OCEANIA mission directed at improving the capability for hyperspectral reflectance measurements in the coastal ocean, and the Surprise Valley Mission directed at mapping sub-surface mineral composition and faults, using high-sensitivity magnetometry. This paper reports the development of specific aircraft control approaches to incorporate the unusual and demanding requirements to manage solar angle, aircraft attitude and flight path orientation, and efficient (directly geo-rectified) surface and sub

  13. Evidence for Seismogenic Hydrogen Gas, a Potential Microbial Energy Source on Earth and Mars.

    PubMed

    McMahon, Sean; Parnell, John; Blamey, Nigel J F

    2016-09-01

    The oxidation of molecular hydrogen (H2) is thought to be a major source of metabolic energy for life in the deep subsurface on Earth, and it could likewise support any extant biosphere on Mars, where stable habitable environments are probably limited to the subsurface. Faulting and fracturing may stimulate the supply of H2 from several sources. We report the H2 content of fluids present in terrestrial rocks formed by brittle fracturing on fault planes (pseudotachylites and cataclasites), along with protolith control samples. The fluids are dominated by water and include H2 at abundances sufficient to support hydrogenotrophic microorganisms, with strong H2 enrichments in the pseudotachylites compared to the controls. Weaker and less consistent H2 enrichments are observed in the cataclasites, which represent less intense seismic friction than the pseudotachylites. The enrichments agree quantitatively with previous experimental measurements of frictionally driven H2 formation during rock fracturing. We find that conservative estimates of current martian global seismicity predict episodic H2 generation by Marsquakes in quantities useful to hydrogenotrophs over a range of scales and recurrence times. On both Earth and Mars, secondary release of H2 may also accompany the breakdown of ancient fault rocks, which are particularly abundant in the pervasively fractured martian crust. This study strengthens the case for the astrobiological investigation of ancient martian fracture systems. Deep biosphere-Faults-Fault rocks-Seismic activity-Hydrogen-Mars. Astrobiology 16, 690-702.

  14. Spectral element modelling of fault-plane reflections arising from fluid pressure distributions

    USGS Publications Warehouse

    Haney, M.; Snieder, R.; Ampuero, J.-P.; Hofmann, R.

    2007-01-01

    The presence of fault-plane reflections in seismic images, besides indicating the locations of faults, offers a possible source of information on the properties of these poorly understood zones. To better understand the physical mechanism giving rise to fault-plane reflections in compacting sedimentary basins, we numerically model the full elastic wavefield via the spectral element method (SEM) for several different fault models. Using well log data from the South Eugene Island field, offshore Louisiana, we derive empirical relationships between the elastic parameters (e.g. P-wave velocity and density) and the effective stress along both normal compaction and unloading paths. These empirical relationships guide the numerical modelling and allow the investigation of how differences in fluid pressure modify the elastic wavefield. We choose to simulate the elastic wave equation via SEM since irregular model geometries can be accommodated and slip boundary conditions at an interface, such as a fault or fracture, are implemented naturally. The method we employ for including a slip interface retains the desirable qualities of SEM in that it is explicit in time and, therefore, does not require the inversion of a large matrix. We perform a complete numerical study by forward modelling seismic shot gathers over a faulted earth model using SEM, followed by seismic processing of the simulated data. With this procedure, we construct post-stack time-migrated images of the kind that are routinely interpreted in the seismic exploration industry. We dip filter the seismic images to highlight the fault-plane reflections prior to making amplitude maps along the fault plane. With these amplitude maps, we compare the reflectivity from the different fault models to diagnose which physical mechanism contributes most to observed fault reflectivity. To lend physical meaning to the properties of a locally weak fault zone characterized as a slip interface, we propose an equivalent-layer model.

  15. Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.

    2017-12-01

    Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimations, incl. seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in a 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates realistic ω⁻² decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture well reproduces the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new
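    Fractal (self-affine) fault roughness is commonly generated by spectral synthesis: impose a power-law amplitude spectrum with Hurst exponent H and random phases. A generic sketch of that construction, not the authors' exact parameterization:

    ```python
    # Self-affine 1-D roughness profile with power spectrum ~ k^-(1+2H).
    import numpy as np

    def self_affine_profile(n: int, dx: float, hurst: float, rms: float,
                            seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n, d=dx)
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** (-(0.5 + hurst))          # amplitude spectrum ~ k^-(1/2+H)
        phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
        z = np.fft.irfft(amp * np.exp(1j * phase), n)
        return z * (rms / z.std())                   # scale to target RMS roughness

    profile = self_affine_profile(n=4096, dx=10.0, hurst=0.8, rms=50.0)  # meters
    print(profile[:5])
    ```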

  16. The Fault Block Model: A novel approach for faulted gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ursin, J.R.; Moerkeseth, P.O.

    1994-12-31

    The Fault Block Model was designed for the development of gas production from Sleipner Vest. The reservoir consists of marginal marine sandstone of the Hugin Formation. Modeling of highly faulted and compartmentalized reservoirs is severely impeded by the nature and extent of known and undetected faults and, in particular, their effectiveness as flow barriers. The model presented is efficient and, for highly faulted reservoirs, superior to other models (i.e., grid-based simulators) because it minimizes the effect of major undetected faults and geological uncertainties. In this article the authors present the Fault Block Model as a new tool to better understand the implications of geological uncertainty in faulted gas reservoirs with good productivity, with respect to uncertainty in well coverage and optimum gas recovery.

  17. Developing a Hayward Fault Greenbelt in Fremont, California

    NASA Astrophysics Data System (ADS)

    Blueford, J. R.

    2007-12-01

    The Math Science Nucleus, an educational non-profit, in cooperation with the City of Fremont and the U.S. Geological Survey, has concluded that outdoor and indoor exhibits highlighting the Hayward Fault are a spectacular and educational way of illustrating the power of earthquakes. Several projects are emerging that use the Hayward fault to illustrate to the public and school groups that faults mold the landscape upon which they live. One area that is already developed, Tule Ponds at Tyson Lagoon, is owned by the Alameda County Flood Control and Conservation District and managed by the Math Science Nucleus. This 17-acre site illustrates two traces of the Hayward fault (active and inactive), whose sediments record over 4000 years of activity. Another project is selecting an area in Fremont where a permanent trench or outdoor earthquake exhibit can be created so that people can see seismic stratigraphic features of the Hayward Fault. This would be part of a 3-mile Earthquake Greenbelt area from Tyson Lagoon to the proposed Irvington BART Station. Informational kiosks or markers and a "yellow brick road" of earthquake facts could allow visitors to take an exciting and educational tour of the Hayward Fault's surface features in Fremont. Visitors would visually see the effects of fault movement, and the tours would include preparedness information. As these plans emerge, a permanent indoor exhibit is being developed at the Children's Natural History Museum in Fremont. This exhibit will be a model of the Earthquake Greenbelt. It will also allow people to see a scale model of how the Hayward Fault unearthed the Pleistocene (Irvingtonian) fossil bed and created traps for underground aquifers as well as surface sag ponds.

  18. Fault detection and fault tolerance in robotics

    NASA Technical Reports Server (NTRS)

    Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.

    1992-01-01

    Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.

  19. A remote sensing study of active folding and faulting in southern Kerman province, S.E. Iran

    NASA Astrophysics Data System (ADS)

    Walker, Richard Thomas

    2006-04-01

    Geomorphological observations reveal a major oblique fold-and-thrust belt in Kerman province, S.E. Iran. The active faults appear to link the Sabzevaran right-lateral strike-slip fault in southeast Iran to other strike-slip faults within the interior of the country and may provide the means of distributing right-lateral shear between the Zagros and Makran mountains over a wider region of central Iran. The Rafsanjan fault is manifest at the Earth's surface as right-lateral strike-slip fault scarps and folding in alluvial sediments. Height changes across the anticlines, and widespread incision of rivers, are likely to result from hanging-wall uplift above thrust faults at depth. Scarps in recent alluvium along the northern margins of the folds suggest that the thrusts reach the surface and are active at the present day. The observations from Rafsanjan are used to identify similar late Quaternary faulting elsewhere in Kerman province near the towns of Mahan and Rayen. No instrumentally recorded destructive earthquakes have occurred in the study region and only one historical earthquake (Lalehzar, 1923) is recorded. In addition, GPS studies show that present-day rates of deformation are low. However, fault structures in southern Kerman province do appear to be active in the late Quaternary and may be capable of producing destructive earthquakes in the future. This study shows how widely available remote sensing data can be used to provide information on the distribution of active faulting across large areas of deformation.

  20. Optimal fault-tolerant control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2017-10-01

    For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraint, high efficiency, low cost and fault diagnosis are six key issues. However, no literature studies control techniques combining optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a controller loop. The fault diagnosis part identifies the current SOFC fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers for the normal and air compressor fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost, both under normal SOFC operation and even under an air compressor fault.
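    Each loop in the controller layer is a standard PID of the textbook form; a minimal discrete sketch, with gains, time step, and setpoint chosen as placeholders rather than the paper's tuned values:

    ```python
    # Discrete PID controller of the form used per loop (power, temperatures,
    # air excess ratio).
    class PID:
        def __init__(self, kp: float, ki: float, kd: float, dt: float):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, setpoint: float, measurement: float) -> float:
            err = setpoint - measurement
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    lam_ctrl = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.1)   # air-excess-ratio loop
    print(lam_ctrl.step(setpoint=2.0, measurement=1.8))
    ```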

  1. Fault reactivation: The Picuris-Pecos fault system of north-central New Mexico

    NASA Astrophysics Data System (ADS)

    McDonald, David Wilson

    The PPFS is a N-trending fault system extending over 80 km in the Sangre de Cristo Mountains of northern New Mexico. Precambrian basement rocks are offset 37 km in a right-lateral sense; however, this offset includes dextral strike-slip (Precambrian), mostly normal dip-slip (Pennsylvanian), mostly reverse dip-slip (Early Laramide), limited strike-slip (Late Laramide) and mostly normal dip-slip (Cenozoic). The PPFS is broken into at least 3 segments by the NE-trending Embudo fault and by several Laramide-age NW-trending tear faults. These segments are (from N to S): the Taos, the Picuris, and the Pecos segments. On the east side of the Picuris segment in the Picuris Mountains, the Oligocene-Miocene age Miranda graben developed and represents a complex extension zone south of the Embudo fault. Regional analysis of remotely sensed data and geologic maps indicates that lineaments subparallel to the trace of the PPFS are longer and less frequent than lineaments that trend orthogonal to the PPFS. Significant cross-cutting faults and subtle changes in fault trends in each segment are clear in the lineament data. Detailed mapping in the eastern Picuris Mountains showed that the favorably oriented Picuris segment was not reactivated in the Tertiary development of the Rio Grande rift. Segmentation of the PPFS and post-Laramide annealing of the Picuris segment are interpreted to have resulted in the development of the subparallel La Serna fault. The Picuris segment of the PPFS is offset by several E-ESE trending faults. These faults are Late Cenozoic in age and interpreted to be related to the uplift of the Picuris Mountains and the continuing sinistral motion on the Embudo fault. Differential subsidence within the Miranda graben caused the development of several synthetic and orthogonal faults between the bounding La Serna and Miranda faults. Analysis of over 10,000 outcrop-scale brittle structures reveals a strong correlation between faults and fracture systems. The dominant

  2. Research and Teaching About the Deep Earth

    NASA Astrophysics Data System (ADS)

    Williams, Michael L.; Mogk, David W.; McDaris, John

    2010-08-01

    Understanding the Deep Earth: Slabs, Drips, Plumes and More; Virtual Workshop, 17-19 February and 24-26 February 2010; Images and models of active faults, subducting plates, mantle drips, and rising plumes are spurring new excitement about deep-Earth processes and connections between Earth's internal systems and plate tectonics. The new results and the steady progress of EarthScope's USArray across the country are also providing a special opportunity to reach students and the general public. The pace of discoveries about the deep Earth is accelerating due to advances in experimental, modeling, and sensing technologies; new data processing capabilities; and installation of new networks, especially the EarthScope facility. EarthScope is an interdisciplinary program that combines geology and geophysics to study the structure and evolution of the North American continent. To explore the current state of deep-Earth science and ways in which it can be brought into the undergraduate classroom, 40 professors attended a virtual workshop given by On the Cutting Edge, a program that strives to improve undergraduate geoscience education through an integrated cooperative series of workshops and Web-based resources. The 6-day, two-part workshop consisted of plenary talks, large and small group discussions, and development and review of new classroom and laboratory activities.

  3. A Comparison of Global Indexing Schemes to Facilitate Earth Science Data Management

    NASA Astrophysics Data System (ADS)

    Griessbaum, N.; Frew, J.; Rilee, M. L.; Kuo, K. S.

    2017-12-01

    Recent advances in database technology have led to systems optimized for managing petabyte-scale multidimensional arrays. These array databases are a good fit for subsets of the Earth's surface that can be projected into a rectangular coordinate system with acceptable geometric fidelity. However, for global analyses, array databases must address the same distortions and discontinuities that apply to map projections in general. The array database SciDB supports enormous databases spread across thousands of computing nodes. Additionally, the following SciDB characteristics are particularly germane to the coordinate system problem: SciDB efficiently stores and manipulates sparse (i.e. mostly empty) arrays. SciDB arrays have 64-bit indexes. SciDB supports user-defined data types, functions, and operators. We have implemented two geospatial indexing schemes in SciDB. The simplest uses two array dimensions to represent longitude and latitude. For representation as 64-bit integers, the coordinates are multiplied by a scale factor large enough to yield an appropriate Earth surface resolution (e.g., a scale factor of 100,000 yields a resolution of approximately 1 m at the equator). Aside from the longitudinal discontinuity, the principal disadvantage of this scheme is its fixed scale factor. The second scheme uses a single array dimension to represent the bit-codes for locations in a hierarchical triangular mesh (HTM) coordinate system. A HTM maps the Earth's surface onto an octahedron, and then recursively subdivides each triangular face to the desired resolution. Earth surface locations are represented as the concatenation of an octahedron face code and a quadtree code within the face. Unlike our integerized lat-lon scheme, the HTM allows objects of different sizes (e.g., pixels with differing resolutions) to be represented in the same indexing scheme. We present an evaluation of the relative utility of these two schemes for managing and analyzing MODIS swath data.
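    The first (integerized lat-lon) scheme is simple enough to sketch directly; the function names below are mine for illustration, not SciDB operators:

    ```python
    # Scale lat/lon by a fixed factor and store as 64-bit integer dimensions.
    SCALE = 100_000  # ~1 m at the equator, as described in the abstract

    def to_index(lat_deg: float, lon_deg: float) -> tuple[int, int]:
        return round(lat_deg * SCALE), round(lon_deg * SCALE)

    def from_index(i_lat: int, i_lon: int) -> tuple[float, float]:
        return i_lat / SCALE, i_lon / SCALE

    i = to_index(34.41204, -119.84813)   # illustrative coordinate
    print(i, from_index(*i))
    ```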

  4. Stafford fault system: 120 million year fault movement history of northern Virginia

    USGS Publications Warehouse

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y.), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  5. New Airborne LiDAR Survey of the Hayward Fault, Northern California

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Prentice, C. S.; Phillips, D. A.; Bevis, M.; Shrestha, R. L.

    2007-12-01

    We present a digital elevation model (DEM) constructed from newly acquired high-resolution LIght Detection and Ranging (LIDAR) data along the Hayward Fault in Northern California. The data were acquired by the National Center for Airborne Laser Mapping (NCALM) in the spring of 2007 in conjunction with a larger regional airborne LIDAR survey of the major crustal faults in northern California coordinated by UNAVCO and funded by the National Science Foundation as part of GeoEarthScope. A consortium composed of the U. S. Geological Survey, Pacific Gas & Electric Company, the San Francisco Public Utilities Commission, and the City of Berkeley separately funded the LIDAR acquisition along the Hayward Fault. Airborne LIDAR data were collected within a 106-km long by 1-km wide swath encompassing the Hayward Fault that extended from San Pablo Bay on the north to the southern end of its restraining stepover with the Calaveras Fault on the south. The Hayward Fault is among the most urbanized faults in the nation. With its most recent major rupture in 1868, it is well within the time window for its next large earthquake, making it an excellent candidate for a "before the earthquake" DEM image. After the next large Hayward Fault event, this DEM can be compared to a post-earthquake LIDAR DEM to provide a means for a detailed analysis of fault slip. In order to minimize location errors, temporary GPS ground control stations were deployed by Ohio State University, UNAVCO, and student volunteers from local universities to augment the available continuous GPS arrays operated in the study area by the Bay Area Regional Deformation (BARD) Network and the Plate Boundary Observatory (PBO). The vegetation cover varies along the fault zone: most of the vegetation is non-native species. Photographs from the 1860s show very little tall vegetation along the fault zone. A number of interesting geomorphic features are associated with the Hayward Fault, even in urbanized areas. Sag ponds and

  6. Using focal mechanism solutions to correlate earthquakes with faults in the Lake Tahoe-Truckee area, California and Nevada, and to help design LiDAR surveys for active-fault reconnaissance

    NASA Astrophysics Data System (ADS)

    Cronin, V. S.; Lindsay, R. D.

    2011-12-01

    Geomorphic analysis of hillshade images produced from aerial LiDAR data has been successful in identifying youthful fault traces. For example, the recently discovered Polaris fault just northwest of Lake Tahoe, California/Nevada, was recognized using LiDAR data that had been acquired by local government to assist land-use planning. Subsequent trenching by consultants under contract to the US Army Corps of Engineers has demonstrated Holocene displacement. The Polaris fault is inferred to be capable of generating a magnitude 6.4-6.9 earthquake, based on its apparent length and offset characteristics (Hunter and others, 2011, BSSA 101[3], 1162-1181). Dingler and others (2009, GSA Bull 121[7/8], 1089-1107) describe paleoseismic or geomorphic evidence for late Neogene displacement along other faults in the area, including the West Tahoe-Dollar Point, Stateline-North Tahoe, and Incline Village faults. We have used the seismo-lineament analysis method (SLAM; Cronin and others, 2008, Env Eng Geol 14[3], 199-219) to establish a tentative spatial correlation between each of the previously mentioned faults, as well as with segments of the Dog Valley fault system, and one or more earthquake(s). The ~18 earthquakes we have tentatively correlated with faults in the Tahoe-Truckee area occurred between 1966 and 2008, with magnitudes between 3 and ~6. Given the focal mechanism solution for a well-located shallow-focus earthquake, the nodal planes can be projected to Earth's surface as represented by a DEM, plus-or-minus the vertical and horizontal uncertainty in the focal location, to yield two seismo-lineament swaths. The trace of the fault that generated the earthquake is likely to be found within one of the two swaths [1] if the fault surface is emergent, and [2] if the fault surface is approximately planar in the vicinity of the focus. Seismo-lineaments from several of the earthquakes studied overlap in a manner that suggests they are associated with the same fault. The surface
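    The geometric core of projecting a nodal plane to the surface, under the same planarity assumption stated in the abstract, is that a plane of given strike and dip passing through a focus at depth z intersects the surface along a line offset updip by z/tan(dip). A sketch of that geometry (function name and example values are mine, not from the SLAM papers):

    ```python
    import math

    def surface_trace_offset(depth_km: float, strike_deg: float, dip_deg: float):
        """Horizontal (east, north) offset, in km, from the epicenter to the
        nodal plane's surface trace, measured updip (dip direction = strike+90)."""
        d = depth_km / math.tan(math.radians(dip_deg))
        updip_az = math.radians(strike_deg + 90.0 + 180.0)  # opposite the dip direction
        return d * math.sin(updip_az), d * math.cos(updip_az)

    # 8-km-deep focus on a plane striking 320 deg and dipping 60 deg (illustrative):
    print(surface_trace_offset(8.0, 320.0, 60.0))
    ```

    In practice, each focal mechanism yields two such planes, and propagating the hypocentral uncertainties widens each trace into the swath described above.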

  7. Application of ground-penetrating radar to investigation of near-surface fault properties in the San Francisco Bay region

    USGS Publications Warehouse

    Cai, J.; McMechan, G.A.; Fisher, M.A.

    1996-01-01

    In many geologic environments, ground-penetrating radar (GPR) provides high-resolution images of near-surface Earth structure. GPR data collection is nondestructive and very economical. The scale of features detected by GPR lies between those imaged by high-resolution seismic reflection surveys and those exposed in trenches and is therefore potentially complementary to traditional techniques for fault location and mapping. Sixty-two GPR profiles were collected at 12 sites in the San Francisco Bay region. Results show that GPR data correlate with large-scale features in existing trench observations, can be used to locate faults where they are buried or where their positions are not well known, and can identify previously unknown fault segments. The best data acquired were on a profile across the San Andreas fault, traversing Pleistocene terrace deposits south of Olema in Marin County; this profile shows a complicated multi-branched fault system from the ground surface down to about 40 m, the maximum depth for which data were recorded.

  8. Earth Science Informatics - Overview

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2017-01-01

    Over the last 10-15 years, significant advances have been made in information management, an increasing number of individuals are entering the field of information management as it applies to Geoscience and Remote Sensing data, and the field of informatics has come into its own. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of science data, information, and knowledge. Informatics also includes the use of computers and computational methods to support decision making and applications. Earth Science Informatics (ESI, a.k.a. geoinformatics) is the application of informatics in the Earth science domain. ESI is a rapidly developing discipline integrating computer science, information science, and Earth science. Major national and international research and infrastructure projects in ESI have been carried out or are on-going. Notable among these are: the Global Earth Observation System of Systems (GEOSS), the European Commission's INSPIRE, the U.S. NSDI and Geospatial One-Stop, the NASA EOSDIS, and the NSF DataONE, EarthCube and Cyberinfrastructure for Geoinformatics. More than 18 departments and agencies in the U.S. federal government have been active in Earth science informatics. All major space agencies in the world have been involved in ESI research and application activities. In the United States, the Federation of Earth Science Information Partners (ESIP), whose membership includes over 180 organizations (government, academic and commercial) dedicated to managing, delivering and applying Earth science data, has been working on many ESI topics since 1998. The Committee on Earth Observation Satellites (CEOS)'s Working Group on Information Systems and Services (WGISS) has been actively coordinating the ESI activities among the space agencies. The talk will present an overview of current efforts in ESI, the role members of IEEE GRSS play, and discuss

  9. Interseismic Deformation across the Eastern Altyn Tagh Fault from Insar Measurements

    NASA Astrophysics Data System (ADS)

    Liu, C. J.; Zhao, C. Y.; Ji, L. Y.; Zhang, Z. R.; Sun, H.

    2018-04-01

    As a relatively new Earth observation technique, InSAR has many advantages, such as all-weather and day-and-night operation, high precision, high measurement density, wide coverage and low cost, and it has been widely used in deformation monitoring. Taking the eastern segment of the Altyn Tagh Fault (ATF) as the object of research, this paper discusses the application of multi-temporal InSAR technology to interseismic deformation monitoring. We measured the interseismic deformation along the eastern section of the ATF using SAR data from three neighbouring descending tracks of the ERS and Envisat missions. The results show that, first, the InSAR results validate to better than 2.5 mm/yr and calibrate to about 1.06 mm/yr. Second, the fault slip rate in this segment is about 4-7 mm/yr, and the fault is in a locked condition. Third, the InSAR velocity profile across the fault is clearly asymmetric with respect to the ATF, which may be the combined effect of the northern (NATF) and southern (SATF) branches of the ATF.

  10. Momentum Management for the NASA Near Earth Asteroid Scout Solar Sail Mission

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew; Diedrich, Benjamin L.; Orphee, Juan; Stiltner, Brandon; Becker, Christopher

    2017-01-01

    The Momentum Management (MM) system is described for the NASA Near Earth Asteroid Scout (NEA Scout) cubesat solar sail mission. Unlike many solar sail mission proposals that used solar torque as the primary or only attitude control system, NEA Scout uses small reaction wheels (RW) and a reaction control system (RCS) with cold gas thrusters, as described in the abstract "Solar Sail Attitude Control System for Near Earth Asteroid Scout Cubesat Mission." The reaction wheels allow fine pointing and higher rates with low mass actuators to meet the science, communication, and trajectory guidance requirements. The MM system keeps the speed of the wheels within their operating margins using a combination of solar torque and the RCS.

  11. Airborne LiDAR analysis and geochronology of faulted glacial moraines in the Tahoe-Sierra frontal fault zone reveal substantial seismic hazards in the Lake Tahoe region, California-Nevada USA

    USGS Publications Warehouse

    Howle, James F.; Bawden, Gerald W.; Schweickert, Richard A.; Finkel, Robert C.; Hunter, Lewis E.; Rose, Ronn S.; von Twistern, Brent

    2012-01-01

    We integrated high-resolution bare-earth airborne light detection and ranging (LiDAR) imagery with field observations and modern geochronology to characterize the Tahoe-Sierra frontal fault zone, which forms the neotectonic boundary between the Sierra Nevada and the Basin and Range Province west of Lake Tahoe. The LiDAR imagery clearly delineates active normal faults that have displaced late Pleistocene glacial moraines and Holocene alluvium along 30 km of linear, right-stepping range front of the Tahoe-Sierra frontal fault zone. Herein, we illustrate and describe the tectonic geomorphology of faulted lateral moraines. We have developed new, three-dimensional modeling techniques that utilize the high-resolution LiDAR data to determine tectonic displacements of moraine crests and alluvium. The statistically robust displacement models combined with new ages of the displaced Tioga (20.8 ± 1.4 ka) and Tahoe (69.2 ± 4.8 ka; 73.2 ± 8.7 ka) moraines are used to estimate the minimum vertical separation rate at 17 sites along the Tahoe-Sierra frontal fault zone. Near the northern end of the study area, the minimum vertical separation rate is 1.5 ± 0.4 mm/yr, which represents a two- to threefold increase in estimates of seismic moment for the Lake Tahoe basin. From this study, we conclude that potential earthquake moment magnitudes (Mw) range from 6.3 ± 0.25 to 6.9 ± 0.25. A close spatial association of landslides and active faults suggests that landslides have been seismically triggered. Our study underscores that the Tahoe-Sierra frontal fault zone poses substantial seismic and landslide hazards.

  12. Fault-scale controls on rift geometry: the Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, A.; Biggs, J.; Mdala, H. S.

    2017-12-01

    Border faults that develop during initial stages of rifting determine the geometry of rifts and passive margins. At outcrop and regional scales, it has been suggested that border fault orientation may be controlled by reactivation of pre-existing weaknesses. Here, we perform a multi-scale investigation on the influence of anisotropic fabrics along a major developing border fault in the southern East African Rift, Malawi. The 130 km long Bilila-Mtakataka fault has been proposed to have slipped in a single MW 8 earthquake with 10 m of normal displacement. The fault is marked by an 11±7 m high scarp with an average trend that is oblique to the current plate motion. Variations in scarp height are greatest at lithological boundaries and where the scarp switches between following and cross-cutting high-grade metamorphic foliation. Based on the scarp's geometry and morphology, we define 6 geometrically distinct segments. We suggest that the segments link to at least one deeper structure that strikes parallel to the average scarp trend, an orientation consistent with the kinematics of an early phase of rift initiation. The slip required on a deep fault(s) to match the height of the current scarp suggests multiple earthquakes along the fault. We test this hypothesis by studying the scarp morphology using high-resolution satellite data. Our results suggest that during the earthquake(s) that formed the current scarp, the propagation of the fault toward the surface locally followed moderately-dipping foliation well oriented for reactivation. In conclusion, although well oriented pre-existing weaknesses locally influence shallow fault geometry, large-scale border fault geometry appears primarily controlled by the stress field at the time of fault initiation.

  13. Tectono-stratigraphic evolution of normal fault zones: Thal Fault Zone, Suez Rift, Egypt

    NASA Astrophysics Data System (ADS)

    Leppard, Christopher William

    The evolution of linkage of normal fault populations to form continuous, basin-bounding normal fault zones is recognised as an important control on the stratigraphic evolution of rift basins. This project aims to investigate the temporal and spatial evolution of normal fault populations and associated syn-rift deposits, from the initiation of early-formed, isolated normal faults (rift initiation) to the development of a through-going fault zone (rift climax), by documenting the tectono-stratigraphic evolution of the Sarbut El Gamal segment of the exceptionally well-exposed Thal fault zone, Suez Rift, Egypt. A number of dated stratal surfaces mapped around the syn-rift depocentre of the Sarbut El Gamal segment allow constraints to be placed on the timing and style of deformation, and on the spatial variability of facies along this segment of the fault zone. The data collected indicate that during the first 3.5 My of rifting the structural style was characterised by numerous, closely spaced, short (< 3 km), low displacement (< 200 m) synthetic and antithetic normal faults within 1-2 km of the present-day fault segment trace, accommodating surface deformation associated with the development of a fault propagation monocline above the buried, precursor strands of the Sarbut El Gamal fault segment. The progressive localisation of displacement onto the fault segment during rift climax resulted in the development of a major, surface-breaking fault 3.5-5 My after the onset of rifting, and is recorded by the death of early-formed synthetic and antithetic faults up-section and the thickening of syn-rift strata towards the fault segment. The influence of intrabasinal highs at the tips of the Sarbut El Gamal fault segment at pre-rift sub-crop level, combined with observations from the early-formed structures and coeval deposits, suggests that the overall length of the fault segment was fixed from an early stage. The fault segment is interpreted to have grown through rapid lateral

  14. S-velocity structure in Cimandiri fault zone derived from neighbourhood inversion of teleseismic receiver functions

    NASA Astrophysics Data System (ADS)

    Syuhada; Anggono, T.; Febriani, F.; Ramdhan, M.

    2018-03-01

    The availability of a realistic Earth velocity model for the fault zone is crucial for quantitative seismic hazard analysis, such as ground motion modelling and the determination of earthquake locations and focal mechanisms. In this report, we use teleseismic receiver functions to invert for the S-velocity model beneath a seismic station located in the Cimandiri fault zone, using the neighbourhood algorithm inversion method. The result suggests the crustal thickness beneath the station is about 32-38 km. Furthermore, low velocity layers with high Vp/Vs exist in the lower crust, which may indicate the presence of hot material ascending from the subducted slab.

  15. Fault kinematics and localised inversion within the Troms-Finnmark Fault Complex, SW Barents Sea

    NASA Astrophysics Data System (ADS)

    Zervas, I.; Omosanya, K. O.; Lippard, S. J.; Johansen, S. E.

    2018-04-01

    The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

  16. Determination of Seismic Activity on the Main Marmara Fault with GPS Measurements

    NASA Astrophysics Data System (ADS)

    Alkan, M. N.; Alkan, R. M.; Yavaşoğlu, H.; Köse, Z.; Aladoğan, K.; Özbey, V.

    2017-12-01

    The tectonic plates that make up the Earth have always been an important topic for the geosciences. Plate motions have affected the Earth's crust for millions of years, and this slow but continuous movement can only be followed by instrumental measurements. In recent years, this has been done very accurately with GPS. The North Anatolian Fault (NAF) is a major right-lateral, strike-slip fault that extends more than 1200 km along all of northern Anatolia, from Bingol to the Gulf of Saros. East of the Marmara region, the NAFZ divides into northern and southern branches, along which several destructive earthquakes occurred in the last century, such as Izmit (1999, Mw=7.4) and Duzce (1999, Mw=7.2). The Main Marmara Fault (MMF), the part of the northern branch beneath the Marmara Sea, starts from the Gulf of Izmit-Adapazarı and reaches the Gulf of Saros. The determination of the deformation accumulated on the MMF has become extremely important, especially after the 1999 Izmit earthquake. According to recent studies, the MMF is the largest unbroken part of the fault and is divided into segments: Cinarcik, Prince Island, Central Marmara and Tekirdag. Recent studies have demonstrated that the Prince Island segment is fully locked. However, studies focused on the Central Marmara segment, located offshore Istanbul, a giant metropolis with a population of more than 14 million, have not reached a conclusion about the presence of a seismic gap capable of generating a big earthquake. Therefore, in the scope of this study, a new GPS network was established at short and long distances from the Main Marmara Fault to densify the existing GPS network. Three GPS campaigns were carried out in 2015, 2016 and 2017. The datasets were processed with the GAMIT/GLOBK software, using 30 continuous observation stations, 14 stations connected to the IGS network and 16 stations

  17. Massive Hydrothermal Flows of Fluids and Heat: Earth Constraints and Ocean World Considerations

    NASA Astrophysics Data System (ADS)

    Fisher, A. T.

    2018-05-01

    This presentation reviews the hydrogeologic nature of Earth's ocean crust and evidence for massive flows of low-temperature (≤70°C), seafloor hydrothermal circulation through ridge flanks, including the influence of crustal relief and crustal faults.

  18. Pseudo-fault signal assisted EMD for fault detection and isolation in rotating machines

    NASA Astrophysics Data System (ADS)

    Singh, Dheeraj Sharan; Zhao, Qing

    2016-12-01

    This paper presents a novel data driven technique for the detection and isolation of faults that generate impacts in rotating equipment. The technique is built upon the principles of empirical mode decomposition (EMD), envelope analysis and a pseudo-fault signal for fault separation. Firstly, the most dominant intrinsic mode function (IMF), which contains all the necessary information about the faults, is identified using EMD of the raw signal. The envelope of this IMF is often modulated by multiple vibration sources and noise. A second level decomposition is performed by applying pseudo-fault signal (PFS) assisted EMD on the envelope. A pseudo-fault signal is constructed based on the known fault characteristic frequency of the particular machine. The objective of using an external (pseudo-fault) signal is to isolate the different fault frequencies present in the envelope. The pseudo-fault signal serves dual purposes: (i) it solves the mode mixing problem inherent in EMD, and (ii) it isolates and quantifies a particular fault frequency component. The proposed technique is suitable for real-time implementation, and has been validated on simulated fault data and on experimental data corresponding to a bearing and a gear-box set-up, respectively.
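
    A rough sketch of the signal chain described above, assuming the third-party PyEMD package (pip name EMD-signal) for the decomposition and the Hilbert transform for the envelope; the signal, sampling rate, and fault characteristic frequency below are invented for illustration and do not reproduce the paper's implementation.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # third-party package, pip install EMD-signal

fs = 2_000                                   # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
# Stand-in vibration signal: a 60 Hz carrier amplitude-modulated at 5 Hz
# plus noise (real data would come from an accelerometer).
raw = (1 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 60 * t) \
      + 0.3 * np.random.randn(t.size)

# 1) First-level EMD: take the IMF with the largest energy as "dominant".
imfs = EMD().emd(raw)
dominant = imfs[np.argmax([np.sum(imf ** 2) for imf in imfs])]

# 2) Envelope of the dominant IMF via the Hilbert transform.
envelope = np.abs(hilbert(dominant))

# 3) Pseudo-fault signal at a known fault characteristic frequency,
#    added before the second-level EMD to steer the decomposition and
#    counter mode mixing; the IMF it lands in carries the matching
#    fault-frequency component.
f_fault = 5.0                                # Hz, assumed fault frequency
pfs = 0.1 * np.ptp(envelope) * np.sin(2 * np.pi * f_fault * t)
imfs2 = EMD().emd(envelope + pfs)
print(f"{len(imfs)} first-level IMFs, {len(imfs2)} second-level IMFs")
```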

  19. Magnetotelluric Studies of Fault Zones Surrounding the 2016 Pawnee, Oklahoma Earthquake

    NASA Astrophysics Data System (ADS)

    Evans, R. L.; Key, K.; Atekwana, E. A.

    2016-12-01

    Since 2008, there has been a dramatic increase in earthquake activity in the central United States in association with major oil and gas operations. Oklahoma is now considered one of the most seismically active states. Although seismic networks are able to detect activity and map its locus, they are unable to image the distribution of fluids in the fault responsible for triggering seismicity. Electrical geophysical methods are ideally suited to image fluid-bearing faults, since the injected waste-waters are highly saline and hence have a high electrical conductivity. To date, no study has imaged the fluids in the faults in Oklahoma and made a direct link to the seismicity. The 2016 M5.8 Pawnee, Oklahoma earthquake provides an unprecedented opportunity to make that link. Several injection wells are located within a 20 km radius of the epicenter, and studies have suggested that injection of fluids in high-volume wells can trigger earthquakes as far away as 30 km. During late October to early November 2016, we are collecting magnetotelluric (MT) data with the aim of constraining the distribution of fluids in the fault zone. The MT technique uses naturally occurring electric and magnetic fields measured at Earth's surface to image conductivity structure. We plan to carry out a series of short two-dimensional (2D) profiles of wideband MT acquisition located through areas where the fault recently ruptured and seismic activity is concentrated, and also across nearby faults that did not rupture. The integration of our results and ongoing seismic studies will lead to a better understanding of the links between fluid injection and seismicity.

  20. Equivalent strike-slip earthquake cycles in half-space and lithosphere-asthenosphere earth models

    USGS Publications Warehouse

    Savage, J.C.

    1990-01-01

    By virtue of the images used in the dislocation solution, the deformation at the free surface produced throughout the earthquake cycle by slippage on a long strike-slip fault in an Earth model consisting of an elastic plate (lithosphere) overlying a viscoelastic half-space (asthenosphere) can be duplicated by prescribed slip on a vertical fault embedded in an elastic half-space. Inversion of 1973-1988 geodetic measurements of deformation across the segment of the San Andreas fault in the Transverse Ranges north of Los Angeles for the half-space equivalent slip distribution suggests no significant slip on the fault above 30 km and a uniform slip rate of 36 mm/yr below 30 km. One equivalent lithosphere-asthenosphere model would have a 30-km thick lithosphere and an asthenosphere relaxation time greater than 33 years, but other models are possible. -from Author
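
    The half-space profile underlying such models is the classical screw-dislocation form of Savage and Burford (1973); it is a standard result, quoted here for context rather than from the abstract, and with the slip rate and locking depth inferred above it gives, for example:

```latex
% Surface velocity above a long, vertical strike-slip fault slipping at
% deep rate s below locking depth D in an elastic half-space
% (Savage & Burford, 1973):
\[
  v(x) = \frac{s}{\pi}\arctan\!\left(\frac{x}{D}\right)
\]
% With the values inferred above (s = 36 mm/yr, D = 30 km), the velocity
% change across a +/-100 km aperture is
\[
  \Delta v = \frac{2 \times 36}{\pi}\arctan\!\left(\frac{100}{30}\right)
           \approx 29\ \mathrm{mm/yr}.
\]
```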

  1. 3D Fault Network of the Murchison Domain, Yilgarn Craton

    NASA Astrophysics Data System (ADS)

    Murdie, Ruth; Gessner, Klaus

    2014-05-01

    The architecture of Archean granite-greenstone terranes is often controlled by networks of 10 km to 100 km-scale shear zones that record displacement under amphibolite facies to greenschist facies metamorphic conditions. The geometry of such crustal-scale 'fault networks' has been shown to be highly relevant to understanding the tectonic and metamorphic history of granite-greenstone terranes, as well as the availability of structurally controlled fluid pathways related to magmatic and hydrothermal mineralization. The Neoarchean Yilgarn Craton and the Proterozoic orogens around its margins constitute one of Earth's greatest mineral treasure troves, including iron, gold, copper and nickel deposits. Although the Yilgarn Craton is one of the best studied Archean cratons, its enormous size and limited outcrop hinder a better understanding of what controls the distribution of these vast resources and of the geodynamic processes involved in the tectonic assembly of this part of the Australian continent. Here we present a network of the major faults of the NW Yilgarn Craton, from the Yalgar Fault, the Murchison Domain's NW contact with the Narryer Terrane, to the Ida Fault, its boundary with the Eastern Goldfields Superterrane. The model has been constructed from various geophysical and geological data, including potential field grids, Geological Survey of Western Australia map sheets, seismic reflection surveys and magnetotelluric traverses. The northern extremity of the modelled area is bounded by the Proterozoic cover, and the southern limit has been chosen arbitrarily to include various greenstone belts. In the west, the major faults in the upper crust, such as the Carbar and Chundaloo Shear Zones, dip steeply towards the west and then flatten off at depth. They form complex branching fault systems that bound the greenstone belts in a series of stacked faults. East of the Ida Fault, at the far east of the model, the faults have been integrated with Geoscience Australia

  2. Using Google Earth to Explore Strain Rate Models of Southern California

    NASA Astrophysics Data System (ADS)

    Richard, G. A.; Bell, E. A.; Holt, W. E.

    2007-12-01

    A series of strain rate models for the Transverse Ranges of southern California were developed based on Quaternary fault slip data and geodetic data from high precision GPS stations in southern California. Pacific-North America velocity boundary conditions are applied for all models. Topography changes are calculated using the model dilatation rates, which predict crustal thickness changes under the assumption of Airy isostasy and a specified rate of crustal volume loss through erosion. The models were designed to produce graphical and numerical output representing the configuration of the region from 3 million years ago to 3 million years into the future at intervals of 50 thousand years. Using a North American reference frame, graphical output for the topography and faults, and numerical output for the locations of faults and of points on the crust marked by the locations of cities, were used to create data in KML format that can be used in Google Earth to represent time intervals of 50 thousand years. As markers familiar to students, the cities provide a geographic context that can be used to quantify crustal movement using the Google Earth ruler tool. By comparing distances that markers for selected cities have moved in various parts of the region, students discover that the greatest amount of crustal deformation has occurred in the vicinity of the boundary between the North American and Pacific plates. Students can also identify areas of compression or extension by finding pairs of city markers that have converged or diverged, respectively, over time. The Google Earth layers also reveal that faults that are not parallel to the plate boundary have tended to rotate clockwise due to the right-lateral motion along the plate boundary zone. KML TimeSpan markup was added to two versions of the model, enabling the layers to be displayed in an automatically sequenced loop for a movie effect. The data are also available as QuickTime (.mov) and Graphics Interchange Format (.gif
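
    As an illustration of the KML mechanics mentioned above, the sketch below writes TimeSpan-wrapped placemarks of the kind a time-slider animation needs. KML requires calendar dates, so placeholder years stand in for the model's 50-kyr epochs, and the city coordinates and drift are invented.

```python
# Minimal generator for TimeSpan-wrapped KML placemarks (illustrative).
def placemark(name, lon, lat, begin, end):
    return f"""  <Placemark>
    <name>{name}</name>
    <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>"""

# Three epochs of a drifting city marker; placeholder years stand in for
# the model's 50-kyr steps, since KML TimeSpan needs calendar dates.
steps = [("Los Angeles", -118.24 + 0.001 * k, 34.05, k) for k in range(3)]
body = "\n".join(placemark(n, x, y, f"{1000 + 50 * k}", f"{1050 + 50 * k}")
                 for n, x, y, k in steps)
kml = ("<?xml version='1.0' encoding='UTF-8'?>\n"
       "<kml xmlns='http://www.opengis.net/kml/2.2'>\n"
       f"<Document>\n{body}\n</Document>\n</kml>")
print(kml)
```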

  3. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Gates of five different types are allowed in a fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature, which simplifies the tree-description process and reduces execution time. The set of programs created forms the basis for a reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and the FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
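
    To illustrate what such a tool computes, here is a minimal top-event calculation covering the five gate types named above, assuming statistically independent basic events; the tree shape, the probabilities, and the exactly-one reading of EXCLUSIVE OR are illustrative choices, not FTC's specification.

```python
from itertools import combinations
from math import prod

def gate(kind, inputs, m=None):
    """Probability of a gate's output event, given probabilities of its
    (assumed independent) input events."""
    if kind == "AND":
        return prod(inputs)
    if kind == "OR":
        return 1.0 - prod(1.0 - p for p in inputs)
    if kind == "XOR":   # exactly one input occurs (one common convention)
        return sum(p * prod(1.0 - q for j, q in enumerate(inputs) if j != i)
                   for i, p in enumerate(inputs))
    if kind == "INVERT":
        return 1.0 - inputs[0]
    if kind == "M_OF_N":                   # at least m of n inputs occur
        n = len(inputs)
        return sum(prod(inputs[i] if i in idx else 1.0 - inputs[i]
                        for i in range(n))
                   for k in range(m, n + 1)
                   for idx in combinations(range(n), k))
    raise ValueError(kind)

# Example tree: Top = OR( AND(a, b), 2-of-3(c, d, e) ), made-up values.
a, b, c, d, e = 1e-3, 2e-3, 5e-2, 5e-2, 5e-2
top = gate("OR", [gate("AND", [a, b]), gate("M_OF_N", [c, d, e], m=2)])
print(f"P(top event) = {top:.3e}")
```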

  4. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for the removal of uncorrelated faults and common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.

  5. Coulombic faulting from the grain scale to the geophysical scale: lessons from ice

    NASA Astrophysics Data System (ADS)

    Weiss, Jérôme; Schulson, Erland M.

    2009-11-01

    Coulombic faulting, a concept formulated more than two centuries ago, still remains pertinent in describing the brittle compressive failure of various materials, including rocks and ice. Many questions remain, however, about the physical processes underlying this macroscopic phenomenology. This paper reviews the progress made in these directions during the past few years through the study of ice and its mechanical behaviour in both the laboratory and the field. Fault triggering is associated with the formation of specific features called comb-cracks and involves frictional sliding at the micro(grain)-scale. Similar mechanisms are observed at geophysical scales within the sea ice cover. This scale-independent physics is expressed by the same Coulombic phenomenology from laboratory to geophysical scales, with a very similar internal friction coefficient (μ ≈ 0.8). On the other hand, the cohesion strongly decreases with increasing spatial scale, reflecting the role of stress concentrators on fault initiation. Strong similarities also exist between ice and other brittle materials such as rocks and minerals and between faulting of the sea ice cover and Earth's crust, arguing for the ubiquitous nature of the underlying physics.
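
    For reference, the Coulombic phenomenology invoked throughout is the textbook failure criterion; the form below is a standard result, annotated with the scale dependence reported in the abstract.

```latex
% Coulomb failure criterion: shear stress tau at failure grows linearly
% with normal stress sigma_n,
\[
  \tau = c + \mu\,\sigma_n ,
\]
% where the review above reports mu ~ 0.8 from laboratory to geophysical
% scales, while the cohesion c decreases with increasing spatial scale.
```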

  6. A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R

    The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. The MPI standard does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
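
    The log-scaling shape of such an agreement can be shown with a toy simulation: local commit flags are AND-reduced up a binomial tree in ceil(log2 n) rounds, after which the root's decision would be broadcast back down. This sketch deliberately omits everything that makes the real algorithm hard (process failures, retries, parent takeover), so it is illustrative only, not the paper's algorithm.

```python
def agree(flags):
    """Toy log-depth agreement: AND-reduce local commit flags up a
    binomial tree rooted at process 0. Failure handling is omitted."""
    vals = list(flags)
    n, step, rounds = len(vals), 1, 0
    while step < n:
        for r in range(0, n, 2 * step):    # parent r hears child r + step
            if r + step < n:
                vals[r] = vals[r] and vals[r + step]
        step *= 2
        rounds += 1
    # Process 0 now holds the decision; a real implementation would
    # broadcast it back down the same tree in another ceil(log2 n) rounds.
    return vals[0], rounds

print(agree([True, True, False, True]))    # -> (False, 2): one process aborts
```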

  7. Fracture structures of active Nojima fault, Japan, revealed by borehole televiewer imaging

    NASA Astrophysics Data System (ADS)

    Nishiwaki, T.; Lin, A.

    2017-12-01

    Since most large intraplate earthquakes occur as slip on mature active faults, any investigation of the seismic faulting process and assessment of seismic hazards requires an understanding of the nature of active fault damage zones as seismogenic sources. In this study, we focus on the fracture structures of the Nojima Fault (NF), which triggered the 1995 Kobe Mw 7.2 earthquake, using ultrasonic borehole televiewer (BHTV) images of a borehole wall. The borehole used in this study was drilled through the NF to 1000 m depth by the Drilling into Fault Damage Zone (DFDZ) science project in 2016 (Lin, 2016; Miyawaki et al., 2016). At depths of <230 m, the borehole penetrates weakly consolidated sandstone and conglomerate of the Plio-Pleistocene Osaka Group and mudstone and sandstone of the Miocene Kobe Group. The basement rock below 230 m consists of pre-Neogene granitic rock. Based on observations of cores and analysis of the BHTV images, the main fault plane was identified at a depth of 529.3 m, with a 15 cm thick fault gouge zone and a damage zone 100 m wide developed on both sides of the main fault plane. Analysis of the BHTV images shows that the fractures are concentrated in two groups: one striking N45°E (Group 1), parallel to the general trend of the NF, and another striking N70°E (Group 2), oblique to the fault at an angle of 20°. It is well known that Riedel shear structures are common within strike-slip fault zones. Previous studies show that the NF is a right-lateral strike-slip fault with a minor thrust component, and that the fault damage zone is characterized by Riedel shear structures dominated by Y shears (main faults), R shears and P foliations (Lin, 2001). We interpret the fractures of Group 1 as Y Riedel shears, and those of Group 2 as R shears. Such Riedel shear structures indicate that the NF is a right-lateral strike-slip fault activated under a regional stress field oriented to the

  8. Fracture zone drilling through Atotsugawa fault in central Japan - geological and geophysical structure -

    NASA Astrophysics Data System (ADS)

    Omura, K.; Yamashita, F.; Yamada, R.; Matsuda, T.; Fukuyama, E.; Kubo, A.; Takai, K.; Ikeda, R.; Mizuochi, Y.

    2004-12-01

    Drilling is an effective method to investigate the structure and physical state in and around an active fault zone, including the stress and strength distribution, geological structure, and material properties. In particular, the structure of the fault zone is important for understanding where and how stress accumulates during the earthquake cycle. In previous studies, we carried out integrated investigations of active faults in central Japan by drilling and geophysical prospecting. Those faults are estimated to be at different stages of the earthquake cycle: the Nojima fault, which ruptured the surface in the 1995 Great Kobe earthquake (M=7.2); the Neodani fault, which ruptured in the 1891 Nobi earthquake (M=8.0); the Atera fault, parts of which seem to have been dislocated by the 1586 Tensyo earthquake (M=7.9); and the Gofukuji fault, which is considered to have been activated about 1200 years ago. Each fault showed characteristic fracture zone structure according to its geological and geophysical situation. In the present study, we carried out core recovery and downhole measurements at the Atotsugawa fault, central Japan, which is considered to have been activated in the 1858 Hida earthquake (M=7.0). The Atotsugawa fault is characterized by active seismicity along the fault, but, at the same time, the shallow region of its central segment seems to have low seismicity. The high-seismicity and low-seismicity segments may have different mechanical, physical and material properties. A 350 m deep borehole was drilled vertically beside the surface trace of the fault in the low-seismicity segment. The recovered cores were, overall, heavily fractured and altered rocks, and we observed many shear planes holding fault gouge. Logging data showed that the apparent resistivity was about 100-600 ohm-m, density was about 2.0-2.5 g/cm3, P wave velocity was approximately 3.0-4.0 km/sec, and neutron porosity was 20-40%. The results of physical logging show features of fault

  9. Experimental study on propagation of fault slip along a simulated rock fault

    NASA Astrophysics Data System (ADS)

    Mizoguchi, K.

    2015-12-01

    Around pre-existing geological faults in the crust, we often observe off-fault damage zones containing many fractures at various scales, from ~mm to ~m, whose density typically increases with proximity to the fault. One of the fracture formation processes is considered to be dynamic shear rupture propagation on the faults, which leads to the occurrence of earthquakes. Here, I have conducted experiments on the propagation of fault slip along a pre-cut rock surface to investigate the damaging behavior of rocks during slip propagation. For the experiments, I used a pair of metagabbro blocks from Tamil Nadu, India, whose contacting surfaces simulate a fault 35 cm in length and 1 cm in width. The experiments were done with a uniaxial loading configuration similar to that of Rosakis et al. (2007). The axial load σ is applied to the fault plane at an angle of 60° to the loading direction. When σ is 5 kN, the normal and shear stresses on the fault are 1.25 MPa and 0.72 MPa, respectively. The timing and direction of slip propagation on the fault during the experiments were monitored with several strain gauges arrayed at intervals along the fault, and the gauge data were digitally recorded at a 1 MHz sampling rate with 16 bit resolution. When σ = 4.8 kN was applied, we observed fault slip events in which slip nucleated spontaneously in a subsection of the fault and propagated across the whole fault. However, the propagation speed was about 1.2 km/s, much lower than the S-wave velocity of the rock, indicating that the slip events were not earthquake-like dynamic ruptures. More effort is needed to reproduce earthquake-like slip events in the experiments. This work is supported by the JSPS KAKENHI (26870912).

  10. Simple random sampling-based probe station selection for fault detection in wireless sensor networks.

    PubMed

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance; this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which wastes the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a simple random sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. Simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
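
    A compact sketch of the two ingredients described above, with invented parameters: probe stations drawn by simple random sampling over cluster heads, and a probing interval that backs off while the network is healthy and tightens once faults appear. This is a generic illustration, not the paper's exact rule.

```python
import random

def choose_probe_stations(cluster_heads, sample_size, rng=random):
    """Simple random sampling without replacement over cluster heads."""
    return rng.sample(cluster_heads, min(sample_size, len(cluster_heads)))

def next_interval(interval, faults_found, lo=5.0, hi=300.0):
    """Back off while healthy; probe faster once faults are detected."""
    interval = interval / 2.0 if faults_found else interval * 1.5
    return max(lo, min(hi, interval))

heads = [f"node-{i}" for i in range(40)]        # hypothetical cluster heads
stations = choose_probe_stations(heads, sample_size=8)
interval = next_interval(60.0, faults_found=False)   # seconds, assumed
print(stations, interval)
```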

  12. Mission Adaptive Uas Capabilities for Earth Science and Resource Assessment

    NASA Astrophysics Data System (ADS)

    Dunagan, S.; Fladeland, M.; Ippolito, C.; Knudson, M.; Young, Z.

    2015-04-01

    Unmanned aircraft systems (UAS) are important assets for accessing high-risk airspace and incorporate technologies for sensor coordination, onboard processing, telecommunication, unconventional flight control, and ground-based monitoring and optimization. These capabilities permit adaptive mission management in the face of complex requirements and chaotic external influences. NASA Ames Research Center has led a number of Earth science remote sensing missions directed at the assessment of natural resources, and here we describe two resource mapping problems whose mission characteristics require a mission-adaptive capability extensible to other resource assessment challenges. One example involves the requirement for careful control over solar angle geometry for passive reflectance measurements. This constraint exists when collecting imaging spectroscopy data over vegetation for time series analysis, or over the coastal ocean, where solar angle combines with sea state to produce surface glint that can obscure the signal. Furthermore, the primary flight control imperative to minimize tracking error must be balanced against the requirement to minimize aircraft motion artifacts in the spatial measurement distribution. A second example involves mapping of natural resources in the Earth's crust using precision magnetometry. In this case the vehicle flight path must be oriented to optimize magnetic flux gradients over a spatial domain having continually emerging features, while optimizing the efficiency of the spatial mapping task. These requirements were highlighted in recent Earth Science missions including the OCEANIA mission, directed at improving the capability for spectral and radiometric reflectance measurements in the coastal ocean, and the Surprise Valley Mission, directed at mapping sub-surface mineral composition and faults using high-sensitivity magnetometry. This paper reports the development of specific aircraft control approaches to incorporate the unusual and

  13. Deformations resulting from the movements of a shear or tensile fault in an anisotropic half space

    NASA Astrophysics Data System (ADS)

    Sheu, Guang Y.

    2004-04-01

    Earlier solutions (Bull. Seismol. Soc. Amer. 1985; 75:1135-1154; Bull. Seismol. Soc. Amer. 1992; 82:1018-1040) for deformations caused by the movements of a shear or tensile fault in an isotropic half-space, for finite rectangular sources of strain nucleus, have been extended to a transversely isotropic half-space. Results of integrating previous solutions (Int. J. Numer. Anal. Meth. Geomech. 2001; 25(10):1175-1193) for deformations due to a shear or tensile fault in a transversely isotropic half-space, for point sources of strain nucleus, over the fault plane are presented. In addition, a boundary element (BEM) model (POLY3D: A three-dimensional, polygonal element, displacement discontinuity boundary element computer program with applications to fractures, faults, and cavities in the Earth's crust. M.S. Thesis, Stanford University, Department of Geology, 1993; 62) is given. Unlike similar research (e.g. Thomas), Akaike's view of Bayesian statistics (Akaike Information Criterion Statistics. D. Reidel Publication: Dordrecht, 1986) is applied to invert deformations due to a fault for the displacement discontinuities on the fault plane. An example is given for checking the displacements predicted by the proposed analytical expressions. Another example demonstrates the use of the proposed BEM model and its effectiveness in exploring the displacement behaviour of a fault.

  14. Surface faulting along the Superstition Hills fault zone and nearby faults associated with the earthquakes of 24 November 1987

    USGS Publications Warehouse

    Sharp, R.V.

    1989-01-01

    The M6.2 Elmore Desert Ranch earthquake of 24 November 1987 was associated spatially and probably temporally with left-lateral surface rupture on many northeast-trending faults in and near the Superstition Hills in western Imperial Valley. Three curving discontinuous principal zones of rupture among these breaks extended northeastward from near the Superstition Hills fault zone as far as 9km; the maximum observed surface slip, 12.5cm, was on the northern of the three, the Elmore Ranch fault, at a point near the epicenter. Twelve hours after the Elmore Ranch earthquake, the M6.6 Superstition Hills earthquake occurred near the northwest end of the right-lateral Superstition Hills fault zone. We measured displacements over 339 days at as many as 296 sites along the Superstition Hills fault zone, and repeated measurements at 49 sites provided sufficient data to fit with a simple power law. The overall distributions of right-lateral displacement at 1 day and the estimated final slip are nearly symmetrical about the midpoint of the surface rupture. The average estimated final right-lateral slip for the Superstition Hills fault zone is ~54cm. The average left-lateral slip for the conjugate faults trending northeastward is ~23cm. The southernmost ruptured member of the Superstition Hills fault zone, newly named the Wienert fault, extends the known length of the zone by about 4km. -from Authors
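
    The power-law fitting of repeated afterslip measurements mentioned above can be illustrated with a small sketch; the model form d(t) = a·t^b and the data points below are generic stand-ins, not the published measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-ins for repeated slip measurements at one site
# (days after the mainshock, cumulative right-lateral slip in cm).
t = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 339.0])
d = np.array([8.0, 14.0, 22.0, 30.0, 42.0, 52.0])

power_law = lambda t, a, b: a * t ** b
(a, b), _ = curve_fit(power_law, t, d, p0=(5.0, 0.5))
print(f"d(t) = {a:.1f} t^{b:.2f}; d(339 d) = {power_law(339.0, a, b):.0f} cm")
```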

  15. On providing the fault-tolerant operation of information systems based on open content management systems

    NASA Astrophysics Data System (ADS)

    Kratov, Sergey

    2018-01-01

    Modern information systems designed to serve a wide range of users, regardless of their subject area, are increasingly based on Web technologies and are available to users via the Internet. This article discusses the issues of providing fault-tolerant operation of such information systems based on free and open source content management systems. The toolkit available to administrators of such systems is shown, and scenarios for using these tools are described. Options for organizing backups and restoring the operability of systems after failures are suggested. Application of the proposed methods and approaches provides continuous monitoring of the state of the systems, timely response to the emergence of possible problems, and their prompt solution.

  16. Coseismic changes of gravitational potential energy induced by global earthquakes based on spherical-Earth elastic dislocation theory

    NASA Astrophysics Data System (ADS)

    Xu, Changyi; Chao, B. Fong

    2017-05-01

    We compute the coseismic gravitational potential energy Eg change using the spherical-Earth elastic dislocation theory and either a fault model treated as a point source or a finite fault model. The rate of accumulated Eg loss produced by historical earthquakes from 1976 to 2016 (about 42,000 events) in the Global Centroid Moment Tensor Solution catalogue is estimated to be on the order of -2.1 × 10^20 J/a, or -6.7 TW (1 TW = 10^12 W), amounting to about 15% of the total terrestrial heat flow. The energy loss is dominated by thrust faulting, especially megathrust earthquakes such as the 2004 Sumatra earthquake (Mw 9.0) and the 2011 Tohoku-Oki earthquake (Mw 9.1). It is notable that the very deep focus events, the 1994 Bolivia earthquake (Mw 8.2) and the 2013 Okhotsk earthquake (Mw 8.3), produced significant overall coseismic Eg gain according to our calculation. The accumulated coseismic Eg loss occurs mainly in the Earth's mantle and, with a relatively smaller magnitude, in the core; by contrast, the Earth's crust cumulatively gains gravitational potential energy from the coseismic deformations. We further investigate the tectonic signature in the coseismic crustal Eg changes in complex tectonic zones, such as the Taiwan region and the northeastern margin of the Tibetan Plateau, and find that the coseismic Eg change is consistent with the regional tectonic character.
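
    The quoted rate-to-power conversion can be checked by simple arithmetic; the ~46 TW figure for total terrestrial heat flow used below is a commonly cited reference value assumed here, not taken from the abstract.

```latex
% Rate-to-power check for the figure quoted above:
\[
  \frac{2.1 \times 10^{20}\ \mathrm{J/a}}{3.156 \times 10^{7}\ \mathrm{s/a}}
  \approx 6.7 \times 10^{12}\ \mathrm{W} = 6.7\ \mathrm{TW},
\]
% which is roughly 15\% of the ~46 TW total terrestrial heat flow.
```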

  17. Oceanic transform faults: how and why do they form? (Invited)

    NASA Astrophysics Data System (ADS)

    Gerya, T.

    2013-12-01

    transform faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps. The ridge instability is governed by rheological weakening of active fault structures. The instability is most efficient for slow to intermediate spreading rates, whereas ultraslow and (ultra)fast spreading rates tend to destabilize transform faults (Gerya, 2010; Püthe and Gerya, 2013) References Gerya, T. (2010) Dynamical instability produces transform faults at mid-ocean ridges. Science, 329, 1047-1050. Gerya, T. (2012) Origin and models of oceanic transform faults. Tectonophys., 522-523, 34-56 Gerya, T.V. (2013a) Three-dimensional thermomechanical modeling of oceanic spreading initiation and evolution. Phys. Earth Planet. Interiors, 214, 35-52. Gerya, T.V. (2013b) Initiation of transform faults at rifted continental margins: 3D petrological-thermomechanical modeling and comparison to the Woodlark Basin. Petrology, 21, 1-10. Püthe, C., Gerya, T.V. (2013) Dependence of mid-ocean ridge morphology on spreading rate in numerical 3-D models. Gondwana Res., DOI: http://dx.doi.org/10.1016/j.gr.2013.04.005 Taylor, B., Goodliffe, A., Martinez, F. (2009) Initiation of transform faults at rifted continental margins. Comptes Rendus Geosci., 341, 428-438.

  18. Review: Evaluation of Foot-and-Mouth Disease Control Using Fault Tree Analysis.

    PubMed

    Isoda, N; Kadohira, M; Sekiguchi, S; Schuppers, M; Stärk, K D C

    2015-06-01

    An outbreak of foot-and-mouth disease (FMD) causes huge economic losses and animal welfare problems. Although much can be learnt from past FMD outbreaks, several countries are not satisfied with their degree of contingency planning and are aiming at more assurance that their control measures will be effective. The purpose of the present article was to develop a generic fault tree framework for the control of an FMD outbreak as a basis for systematic improvement and refinement of control activities and general preparedness. Fault trees are typically used in engineering to document pathways that can lead to an undesired event, that is, ineffective FMD control. The fault tree method allows risk managers to identify immature parts of the control system and to analyse the events or steps that will most probably delay rapid and effective disease control during a real outbreak. The fault tree developed here is generic and can be tailored to fit the specific needs of countries. For instance, the specific fault tree for the 2001 FMD outbreak in the UK was refined based on control weaknesses discussed in peer-reviewed articles. Furthermore, the specific fault tree based on the 2001 outbreak was applied to the subsequent FMD outbreak in 2007 to assess the refinement of control measures following the earlier, major outbreak. The FMD fault tree can assist risk managers in developing more refined and adequate control activities against FMD outbreaks and in finding optimum strategies for rapid control. Further application of the current tree will be one of the basic measures for FMD control worldwide. © 2013 Blackwell Verlag GmbH.

  19. Critical fault patterns determination in fault-tolerant computer systems

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Losq, J.

    1978-01-01

    The method proposed tries to enumerate all the critical fault-patterns (successive occurrences of failures) without analyzing every single possible fault. The conditions for the system to be operating in a given mode can be expressed in terms of the static states. Thus, one can find all the system states that correspond to a given critical mode of operation. The next step consists in analyzing the fault-detection mechanisms, the diagnosis algorithm and the process of switch control. From them, one can find all the possible system configurations that can result from a failure occurrence. Thus, one can list all the characteristics, with respect to detection, diagnosis, and switch control, that failures must have to constitute critical fault-patterns. Such an enumeration of the critical fault-patterns can be directly used to evaluate the overall system tolerance to failures. Present research is focused on how to efficiently make use of these system-level characteristics to enumerate all the failures that verify these characteristics.

  20. ISHM-oriented adaptive fault diagnostics for avionics based on a distributed intelligent agent system

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zhong, Zhengqiang; Xu, Lei

    2015-10-01

    In this paper, an integrated system health management (ISHM)-oriented adaptive fault diagnostic model for avionics is proposed. With avionics becoming increasingly complicated, precise and comprehensive avionics fault diagnosis has become an extremely demanding task. In the proposed fault diagnostic system, specific approaches, such as the artificial immune system, the intelligent agent system and the Dempster-Shafer evidence theory, are used to conduct deep avionics fault diagnostics. Through this proposed fault diagnostic system, efficient and accurate diagnostics can be achieved. A numerical example is conducted to apply the proposed hybrid diagnostics to a set of radar transmitters on an avionics system and to illustrate that the proposed system and model have the ability to achieve efficient and accurate fault diagnostics. By analyzing the diagnostic system's feasibility and practicality, the advantages of this system are demonstrated.
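
    Of the approaches listed, Dempster-Shafer evidence combination is the most self-contained to illustrate. The sketch below fuses two invented mass functions over a two-fault frame with Dempster's rule; it shows the general technique, not the paper's specific model, and the sensor names and mass values are hypothetical.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with
    Dempster's rule, normalizing out the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment: a transmitter (TX) or receiver (RX) fault.
TX, RX = frozenset({"TX"}), frozenset({"RX"})
THETA = TX | RX                              # "don't know"
m_bit  = {TX: 0.6, THETA: 0.4}               # evidence from built-in test
m_temp = {TX: 0.5, RX: 0.2, THETA: 0.3}      # evidence from a temp sensor
print(dempster(m_bit, m_temp))               # belief concentrates on {'TX'}
```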

  1. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as it affects estimates of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and investigate its properties through numerical simulations. The simulations show that the maximum eigenvector of the tensor points to the high-density causative body and that its dip closely follows the dip of a normal fault, whereas the minimum eigenvector points to the low-density causative body and its dip closely follows the dip of a reverse fault. Thus, which eigenvector of the gravity gradient tensor estimates the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
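
    On a profile, the eigenvector reading described above reduces to the eigen-decomposition of a symmetric 2x2 tensor. Below is a minimal sketch with invented tensor values (not from the paper), reporting the dip of each eigenvector from horizontal.

```python
import numpy as np

def eigen_dips(gxx, gxz, gzz):
    """Eigenvalues/eigenvectors of a 2-D gravity gradient tensor
    (x horizontal along the profile, z down) and the dip, in degrees
    from horizontal, of each eigenvector."""
    T = np.array([[gxx, gxz],
                  [gxz, gzz]])
    vals, vecs = np.linalg.eigh(T)         # eigenvalues in ascending order
    # vecs[0] holds the x-components, vecs[1] the z-components of the
    # two eigenvectors; dip = arctan(|z| / |x|) for each.
    dips = np.degrees(np.arctan2(np.abs(vecs[1]), np.abs(vecs[0])))
    return {"min": (vals[0], dips[0]), "max": (vals[1], dips[1])}

# Example with made-up gradients in Eotvos units; per the simulations
# above, the maximum eigenvector's dip would track a normal fault.
print(eigen_dips(gxx=-20.0, gxz=35.0, gzz=20.0))
```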

  2. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    NASA Astrophysics Data System (ADS)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be completely appropriate for explaining shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation method (D.I.C.) to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability, we marked the position of the fault tips and the fold profiles at every successive step of deformation. Then we compared

  3. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the main methods for avoiding server outages are monitoring and forecasting. Thermal cameras can provide fine texture information for monitoring and intelligent thermal management in large data centers. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features, and a modified entropy feature are extracted from the segmented regions. These characteristics are used to analyze and classify thermal faults and then to make efficient energy-saving thermal management decisions such as job migration. Because the feature space is large, principal component analysis is employed to reduce the feature dimensions, guaranteeing high processing speed without losing fault-related information. Finally, the feature vectors are used to train an SVM, and thermal fault diagnosis is performed with the optimized classifier. This method supports suggestions for optimizing data center management; it can improve air-conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
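
    A minimal sketch of the dimensionality-reduction-plus-classification stage is given below, assuming the texture, Hu moment, and entropy features have already been extracted into a feature matrix; the array shapes, class labels, and hyperparameter grid are illustrative, not from the paper:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(600, 40))      # placeholder: 600 regions x 40 features
      y = rng.integers(0, 3, size=600)    # assumed classes: 0 normal, 1 overload, 2 cooling fault

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      pipe = make_pipeline(StandardScaler(),
                           PCA(n_components=0.95),   # keep 95% of the variance
                           SVC())
      grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01]}
      search = GridSearchCV(pipe, grid, cv=5).fit(X_tr, y_tr)
      print("held-out accuracy:", search.score(X_te, y_te))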

  4. Faulting and groundwater in a desert environment: constraining hydrogeology using time-domain electromagnetic data

    USGS Publications Warehouse

    Bedrosian, Paul A.; Burgess, Matthew K.; Nishikawa, Tracy

    2013-01-01

    Within the south-western Mojave Desert, the Joshua Basin Water District is considering applying imported water into infiltration ponds in the Joshua Tree groundwater sub-basin in an attempt to artificially recharge the underlying aquifer. Scarce subsurface hydrogeological data are available near the proposed recharge site; therefore, time-domain electromagnetic (TDEM) data were collected and analysed to characterize the subsurface. TDEM soundings were acquired to estimate the depth to water on either side of the Pinto Mountain Fault, a major east-west trending strike-slip fault that transects the proposed recharge site. While TDEM is a standard technique for groundwater investigations, special care must be taken when acquiring and interpreting TDEM data in a two-dimensional (2D) faulted environment. A subset of the TDEM data consistent with a layered-earth interpretation was identified through a combination of three-dimensional (3D) forward modelling and diffusion time-distance estimates. Inverse modelling indicates an offset in water table elevation of nearly 40 m across the fault. These findings imply that the fault acts as a low-permeability barrier to groundwater flow in the vicinity of the proposed recharge site. Existing production wells on the south side of the fault, together with a thick unsaturated zone and permeable near-surface deposits, suggest the southern half of the study area is suitable for artificial recharge. These results illustrate the effectiveness of targeted TDEM in support of hydrological studies in a heavily faulted desert environment where data are scarce and the cost of obtaining these data by conventional drilling techniques is prohibitive.

  5. Scissoring Fault Rupture Properties along the Median Tectonic Line Fault Zone, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Nishizaka, N.; Onishi, K.; Sakamoto, J.; Takahashi, K.

    2017-12-01

    The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long, trench-parallel, right-lateral strike-slip fault accommodating the lateral slip component of the Philippine Sea plate's oblique subduction beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry evolves along the MTLFZ, and its geomorphic and geological characteristics change remarkably along strike. Extensional step-overs and pull-apart basins develop in the western part of the MTLFZ, and a pop-up structure in the eastern part, so that the fault properties are "scissored" along strike. We can point out two main factors that form these scissoring fault properties along the MTLFZ. One is the regional stress condition, and the other is the presence of a preexisting fault. The direction of σ1 rotates anticlockwise from N170°E [Famin et al., 2014] in the eastern Shikoku to Kinki areas, to N100°E [Research Group for Crustal Stress in Western Japan, 1980] in central Shikoku, and to N85°E [Onishi et al., 2016] in western Shikoku. Following this rotation of principal stress directions, the western and eastern parts of the MTLFZ lie in transtensional and compressional regimes, respectively. The MTLFZ formed as a terrane boundary in the Cretaceous and has evolved through a long active history, during which the fault style has changed variously: left-lateral, thrust, normal, and right-lateral. Where a preexisting fault is present, rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on a 45°-dipping fault or strike-slip motion on a vertical fault. The fault rupture of the 2013 Balochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low-angle dipping (thrust) fault, whereas strike-slip faults generally have near-vertical planes [Avouac et al., 2014]. In this presentation, we, firstly, show deep subsurface

  6. Applications of Earth Observations for Fisheries Management: An analysis of socioeconomic benefits

    NASA Astrophysics Data System (ADS)

    Friedl, L.; Kiefer, D. A.; Turner, W.

    2013-12-01

    This paper will discuss the socioeconomic impacts of a project applying Earth observations and models to support management and conservation of tuna and other marine resources in the eastern Pacific Ocean. A project team created a software package that produces statistical analyses and dynamic maps of habitat for pelagic ocean biota. The tool integrates sea surface temperature and chlorophyll imagery from MODIS, ocean circulation models, and other data products. The project worked with the Inter-American Tropical Tuna Commission, which issues fishery management information, such as stock assessments, for the eastern Pacific region. The Commission uses the tool and broader habitat information to produce better estimates of stock and thus improve their ability to identify species that could be at risk of overfishing. The socioeconomic analysis quantified the relative value that Earth observations contributed to accurate stock size assessments through improvements in calculating population size. The analysis team calculated the first-order economic costs of a fishery collapse (or shutdown), and they calculated the benefits of improved estimates that reduce the uncertainty of stock size and thus reduce the risk of fishery collapse. The team estimated that the project reduced the probability of collapse of different fisheries, and the analysis generated net present values of risk mitigation. USC led the project with sponsorship from the NASA Earth Science Division's Applied Sciences Program, which conducted the socioeconomic impact analysis. The paper will discuss the project and focus primarily on the analytic methods, impact metrics, and the results of the socioeconomic benefits analysis.

  7. A Genetic Algorithm Method for Direct estimation of paleostress states from heterogeneous fault-slip observations

    NASA Astrophysics Data System (ADS)

    Srivastava, D. C.

    2016-12-01

    Paleostress estimation from a group of heterogeneous fault-slip observations entails first classifying the observations into homogeneous fault sets and then inverting each homogeneous set separately. This study combines these two steps into a single nonlinear inverse problem and proposes a heuristic search method that inverts the heterogeneous fault-slip observations directly. The method estimates the different paleostress states recorded in a group of heterogeneous fault-slip observations and classifies the observations into homogeneous sets as a byproduct. It uses the genetic algorithm operators of elitism, selection, encoding, crossover, and mutation. These operators translate into a guided search that finds successively fitter solutions and iterates until the termination criterion is met and the globally fittest stress tensors are obtained. We explain the basic steps of the algorithm on a working example and demonstrate the validity of the method on several synthetic groups and a natural group of heterogeneous fault-slip observations. The method is free of user-defined bias and of entrapment of the solution in a local optimum, and it succeeds even in difficult situations where other classification methods fail.
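
    The sketch below conveys the flavor of such a search. It is an assumed, toy implementation (not the authors' code): a real-coded genetic algorithm with elitism, tournament selection, blend crossover, and Gaussian mutation that recovers a single reduced stress tensor (three orientation angles plus the stress ratio PHI) by minimizing the mean Wallace-Bott angular misfit between predicted and observed slip directions; the classification of heterogeneous data into homogeneous sets is omitted:

      import numpy as np

      rng = np.random.default_rng(1)

      def rotation(a, b, c):
          """Rotation matrix from three Euler-style angles (z-y-x convention)."""
          ca, sa, cb, sb, cc, sc = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(c), np.sin(c)
          Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
          Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
          Rx = np.array([[1, 0, 0], [0, cc, -sc], [0, sc, cc]])
          return Rz @ Ry @ Rx

      def stress_tensor(genes):
          a, b, c, phi = genes
          R = rotation(a, b, c)
          return R @ np.diag([1.0, phi, 0.0]) @ R.T       # reduced principal stresses

      def predicted_slip(S, n):
          t = S @ n                                       # traction on the plane
          shear = t - (t @ n) * n                         # resolved shear component
          return shear / (np.linalg.norm(shear) + 1e-12)  # Wallace-Bott slip direction

      def misfit(genes, normals, slips):
          S = stress_tensor(genes)
          cosines = [np.clip(predicted_slip(S, n) @ s, -1, 1) for n, s in zip(normals, slips)]
          return np.degrees(np.arccos(cosines)).mean()

      def ga(normals, slips, pop=80, gens=150):
          lo, hi = np.zeros(4), np.array([np.pi, np.pi, np.pi, 1.0])
          P = rng.uniform(lo, hi, size=(pop, 4))
          for _ in range(gens):
              fit = np.array([misfit(g, normals, slips) for g in P])
              children = list(P[np.argsort(fit)[:4]])      # elitism: keep the 4 fittest
              while len(children) < pop:
                  i, j = rng.integers(pop, size=2), rng.integers(pop, size=2)
                  p1, p2 = P[i[np.argmin(fit[i])]], P[j[np.argmin(fit[j])]]  # tournaments
                  w = rng.uniform(size=4)                  # blend crossover
                  child = w * p1 + (1 - w) * p2 + rng.normal(scale=0.05, size=4)  # mutation
                  children.append(np.clip(child, lo, hi))
              P = np.array(children)
          return P[np.argmin([misfit(g, normals, slips) for g in P])]

      # Synthetic check: build slips from a known tensor, then recover it.
      true = np.array([0.9, 0.4, 1.2, 0.35])
      normals = rng.normal(size=(12, 3))
      normals /= np.linalg.norm(normals, axis=1, keepdims=True)
      slips = np.array([predicted_slip(stress_tensor(true), n) for n in normals])
      best = ga(normals, slips)
      print(f"recovered PHI = {best[3]:.2f}, mean misfit = {misfit(best, normals, slips):.1f} deg")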

  8. "Handling" seismic hazard: 3D printing of California Faults

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Potter, M.; Richards-Dinger, K. B.

    2017-12-01

    As earth scientists, we face the challenge of how to explain and represent our work and achievements to the general public. Nowadays, this problem is partially alleviated by the use of modern visualization tools such as advanced scientific software (Paraview.org), high-resolution monitors, elaborate video simulations, and even 3D virtual reality goggles. However, the ability to manipulate and examine a physical object in 3D is still an important tool for connecting better with the public. For that reason, we are presenting a scaled 3D-printed version of the complex network of active earthquake faults in California, based on the fault model used by the Uniform California Earthquake Rupture Forecast 3 (UCERF3) (Field et al., 2013). We start from the fault geometry in the UCERF3.1 deformation model files. These files contain information such as the coordinates of the surface traces of the faults, dip angle, and depth extent. The faults specified in these files are triangulated at 1 km resolution and exported as a facet (.fac) file. The facet file is later imported into the Trelis 15.1 mesh generator (csimsoft.com). We use Trelis to perform the following three operations: First, we scale down the model so that 100 mm corresponds to 100 km. Second, we "thicken" the walls of the faults; a wall thickness of at least 1 mm is necessary in 3D printing, so we thicken the fault geometry by 1 mm on each side for a total thickness of 2 mm. Third, we break the model into parts that will fit the printing bed size (~25 x 20 mm). Finally, each part is exported in stereolithography format (.stl). For our project, we are using the 3D printing facility within the Creat'R Lab in the UC Riverside Orbach Science Library. The 3D printer is a MakerBot Replicator Desktop, 5th Generation. The print resolution is 0.2 mm (standard quality). The printing material is MakerBot PLA filament, 1.75 mm diameter, large spool, green. The most complex part of the display model requires approximately 17

  9. Fault detection and multiclassifier fusion for unmanned aerial vehicles (UAVs)

    NASA Astrophysics Data System (ADS)

    Yan, Weizhong

    2001-03-01

    UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level comparable to that of piloted aircraft. This paper applies multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the UAV fault detection function are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely majority vote, weighted majority vote, and Naive Bayes combination. All three schemes are simple and need no retraining. The three fusion schemes (except the majority vote, which gives the average performance of the three classifiers) show classification performance better than or equal to that of the best individual classifier. The unavoidable correlation between classifiers with binary outputs is observed in this study. We conclude that it is this correlation between the classifiers that limits the fusion schemes from achieving an even better performance.
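
    The three fusion rules are simple enough to state in a few lines. The sketch below is an assumed illustration (not the LMSW implementation): the decisions, weights, conditional probabilities, and prior are made-up numbers standing in for values that would come from validation data:

      import numpy as np

      decisions = np.array([1, 0, 1])                 # binary outputs of three classifiers

      # 1) Majority vote
      fused_mv = int(decisions.sum() > len(decisions) / 2)

      # 2) Weighted majority vote (weights, e.g., from each classifier's accuracy)
      w = np.array([0.70, 0.85, 0.80])
      fused_wmv = int(w @ decisions > w.sum() / 2)

      # 3) Naive Bayes combination: multiply per-classifier likelihoods,
      # assumed conditionally independent given the true state.
      p_d_given_fault   = np.array([0.9, 0.6, 0.8])   # P(d_k = 1 | fault)
      p_d_given_nofault = np.array([0.2, 0.1, 0.3])   # P(d_k = 1 | no fault)
      lik = lambda p: np.prod(np.where(decisions == 1, p, 1 - p))
      prior_fault = 0.1
      fused_nb = int(lik(p_d_given_fault) * prior_fault >
                     lik(p_d_given_nofault) * (1 - prior_fault))

      print(fused_mv, fused_wmv, fused_nb)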

  10. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  11. A Virtual Tour of the 1868 Hayward Earthquake in Google Earth™

    NASA Astrophysics Data System (ADS)

    Lackey, H. G.; Blair, J. L.; Boatwright, J.; Brocher, T.

    2007-12-01

    The 1868 Hayward earthquake has been overshadowed by the subsequent 1906 San Francisco earthquake that destroyed much of San Francisco. Nonetheless, a modern recurrence of the 1868 earthquake would cause widespread damage to the densely populated Bay Area, particularly in the east Bay communities that have grown up virtually on top of the Hayward fault. Our concern is heightened by paleoseismic studies suggesting that the recurrence interval for the past five earthquakes on the southern Hayward fault is 140 to 170 years. Our objective is to build an educational web site that illustrates the cause and effect of the 1868 earthquake, drawing on scientific and historic information. We will use Google Earth™ software to visually illustrate complex scientific concepts in a way that is understandable to a non-scientific audience. This web site will lead the viewer from a regional summary of the plate tectonics and faulting system of western North America to more specific information about the 1868 Hayward earthquake itself. Text and Google Earth™ layers will include modeled shaking of the earthquake, relocations of historic photographs, reconstruction of damaged buildings as 3-D models, and additional scientific data that may come from the many scientific studies conducted for the 140th anniversary of the event. Earthquake engineering concerns will be stressed, including population density, vulnerable infrastructure, and lifelines. We will also present detailed maps of the Hayward fault, measurements of fault creep, and geologic evidence of its recurrence. Understanding the science behind earthquake hazards is an important step in preparing for the next significant earthquake. We hope to communicate to the public and students of all ages, through visualizations, not only the cause and effect of the 1868 earthquake, but also the modern seismic hazards of the San Francisco Bay region.

  12. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.
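
    The article does not give the detection algorithms themselves; the sketch below is a generic, assumed illustration of one widely used arc-fault signature test, in which arcing injects broadband high-frequency noise into the dc bus current, so a windowed band-energy ratio compared against a quiet-bus baseline can flag it (all signal parameters here are made up):

      import numpy as np

      def hf_band_ratio(current, fs, band=(1e3, 10e3)):
          """Fraction of windowed signal energy in an assumed high-frequency band."""
          spectrum = np.abs(np.fft.rfft(current * np.hanning(current.size))) ** 2
          freqs = np.fft.rfftfreq(current.size, 1 / fs)
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          return spectrum[in_band].sum() / spectrum.sum()

      fs = 50_000                                  # 50 kHz sampling (illustrative)
      t = np.arange(0, 0.1, 1 / fs)
      rng = np.random.default_rng(0)
      quiet = 10 + 0.01 * rng.standard_normal(t.size)     # healthy 10 A dc bus
      arcing = quiet + 0.2 * rng.standard_normal(t.size)  # added broadband arc noise

      baseline = hf_band_ratio(quiet, fs)
      print("arc suspected:", hf_band_ratio(arcing, fs) > 5 * baseline)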

  13. On the Possibility of Estimation of the Earth Crust's Properties from the Observations of Electric Field of Electrokinetic Origin, Generated by Tidal Deformation within the Fault Zone

    NASA Astrophysics Data System (ADS)

    Alekseev, D. A.; Gokhberg, M. B.

    2018-05-01

    A 2-D boundary problem formulation in terms of pore pressure in Biot poroelasticity model is discussed, with application to a vertical contact model mechanically excited by a lunar-solar tidal deformation wave, representing a fault zone structure. A problem parametrization in terms of permeability and Biot's modulus contrasts is proposed and its numerical solution is obtained for a series of models differing in the values of the above parameters. The behavior of pore pressure and its gradient is analyzed. From those, the electric field of the electrokinetic nature is calculated. The possibilities of estimation of the elastic properties and permeability of geological formations from the observations of the horizontal and vertical electric field measured inside the medium and at the earth's surface near the block boundary are discussed.

  14. Structural analysis of three extensional detachment faults with data from the 2000 Space-Shuttle Radar Topography Mission

    USGS Publications Warehouse

    Spencer, J.E.

    2010-01-01

    The Space-Shuttle Radar Topography Mission provided geologists with a detailed digital elevation model of most of Earth's land surface. This new database is used here for structural analysis of grooved surfaces interpreted to be the exhumed footwalls of three active or recently active extensional detachment faults. Exhumed fault footwalls, each with an areal extent of one hundred to several hundred square kilometers, make up much of Dayman dome in eastern Papua New Guinea, the western Gurla Mandhata massif in the central Himalaya, and the northern Tokorondo Mountains in central Sulawesi, Indonesia. Footwall curvature in profile varies from planar to slightly convex upward at Gurla Mandhata to strongly convex upward at northwestern Dayman dome. Fault curvature decreases away from the trace of the bounding detachment fault in western Dayman dome and in the Tokorondo massif, suggesting footwall flattening (reduction in curvature) following exhumation. Grooves of highly variable wavelength and amplitude reveal extension direction, although structural processes of groove genesis may be diverse.

  15. Pressure Monitoring to Detect Fault Rupture Due to CO2 Injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keating, Elizabeth; Dempsey, David; Pawar, Rajesh

    The capacity for fault systems to be reactivated by fluid injection is well-known. In the context of CO2 sequestration, however, the consequence of reactivated faults with respect to leakage and monitoring is poorly understood. Using multi-phase fluid flow simulations, this study addresses key questions concerning the likelihood of ruptures, the timing of consequent upward leakage of CO2, and the effectiveness of pressure monitoring in the reservoir and overlying zones for rupture detection. A range of injection scenarios was simulated using random sampling of uncertain parameters. These include the assumed distance between the injector and the vulnerable fault zone, the critical overpressure required for the fault to rupture, reservoir permeability, and the CO2 injection rate. We assumed a conservative scenario, in which if at any time during the five-year simulations the critical fault overpressure is exceeded, the fault permeability is assumed to instantaneously increase. For the purposes of conservatism we assume that CO2 injection continues 'blindly' after fault rupture. We show that, despite this assumption, in most cases the CO2 plume does not reach the base of the ruptured fault after 5 years. One possible implication of this result is that leak mitigation strategies such as pressure management have a reasonable chance of preventing a CO2 leak.
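
    The scenario-sampling logic lends itself to a compact sketch. The one below is assumed and deliberately toy-like: it replaces the study's multi-phase flow simulations with a steady single-phase radial overpressure estimate, and all parameter ranges and fluid/reservoir constants are illustrative:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      dist  = rng.uniform(500, 5_000, n)          # injector-to-fault distance (m)
      dp_cr = rng.uniform(0.5e6, 5e6, n)          # critical fault overpressure (Pa)
      perm  = 10 ** rng.uniform(-14, -12, n)      # reservoir permeability (m^2)
      rate  = rng.uniform(0.1, 1.0, n)            # CO2 injection rate (Mt/yr)

      # Toy steady radial overpressure at the fault: dp = Q mu ln(R/r) / (2 pi k h),
      # with assumed viscosity mu, thickness h, density rho and outer radius R.
      mu, h, rho, R = 5e-4, 50.0, 700.0, 50_000.0
      q = rate * 1e9 / (365.25 * 86_400)          # Mt/yr -> kg/s
      dp_fault = (q / rho) * mu * np.log(R / dist) / (2 * np.pi * perm * h)

      ruptured = dp_fault > dp_cr
      print(f"sampled scenarios exceeding critical overpressure: {ruptured.mean():.2%}")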

  16. Fault-zone structure and weakening processes in basin-scale reverse faults: The Moonlight Fault Zone, South Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Alder, S.; Smith, S. A. F.; Scott, J. M.

    2016-10-01

    The >200 km long Moonlight Fault Zone (MFZ) in southern New Zealand was an Oligocene basin-bounding normal fault zone that reactivated in the Miocene as a high-angle reverse fault (present dip angle 65°-75°). Regional exhumation in the last c. 5 Ma has resulted in deep exposures of the MFZ that present an opportunity to study the structure and deformation processes that were active in a basin-scale reverse fault at basement depths. Syn-rift sediments are preserved only as thin fault-bound slivers. The hanging wall and footwall of the MFZ are mainly greenschist facies quartzofeldspathic schists that have a steeply-dipping (55°-75°) foliation subparallel to the main fault trace. In more fissile lithologies (e.g. greyschists), hanging-wall deformation occurred by the development of foliation-parallel breccia layers up to a few centimetres thick. Greyschists in the footwall deformed mainly by folding and formation of tabular, foliation-parallel breccias up to 1 m wide. Where the hanging-wall contains more competent lithologies (e.g. greenschist facies metabasite) it is laced with networks of pseudotachylyte that formed parallel to the host rock foliation in a damage zone extending up to 500 m from the main fault trace. The fault core contains an up to 20 m thick sequence of breccias, cataclasites and foliated cataclasites preserving evidence for the progressive development of interconnected networks of (partly authigenic) chlorite and muscovite. Deformation in the fault core occurred by cataclasis of quartz and albite, frictional sliding of chlorite and muscovite grains, and dissolution-precipitation. Combined with published friction and permeability data, our observations suggest that: 1) host rock lithology and anisotropy were the primary controls on the structure of the MFZ at basement depths and 2) high-angle reverse slip was facilitated by the low frictional strength of fault core materials. Restriction of pseudotachylyte networks to the hanging-wall of the

  17. Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models

    NASA Astrophysics Data System (ADS)

    Styron, R. H.; Hetland, E. A.

    2014-12-01

    Stress is a first-order control on the deformation state of the earth. However, stress is notoriously hard to measure, and researchers typically only estimate the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 * lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 * total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and stress orientation. Work in progress on
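
    A toy rejection sampler conveys the Bayesian logic in miniature. The sketch below is assumed and far simpler than the full tensor treatment described above: the resolved topographic shear and lithostatic normal stresses on the fault are fixed at illustrative values, broad priors are sampled, and only combinations that put the fault at Mohr-Coulomb failure are kept:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 200_000

      # Assumed stresses resolved on the fault at seismogenic depth (MPa):
      tau_topo, sigma_n = 20.0, 260.0

      mu     = rng.uniform(0.0, 0.85, n)    # prior on static friction
      lam    = rng.uniform(0.0, 1.0, n)     # prior on pore pressure / normal stress
      f_tect = rng.uniform(0.0, 2.0, n)     # tectonic shear as a multiple of topographic shear

      tau_total = tau_topo * (1.0 + f_tect)          # topographic + tectonic shear
      strength  = mu * sigma_n * (1.0 - lam)         # effective Coulomb strength

      accepted = np.abs(tau_total - strength) < 2.0  # keep samples "at failure" (+-2 MPa)
      print(f"accepted {accepted.sum()} of {n} samples")
      print("friction 68% interval:", np.percentile(mu[accepted], [16, 84]).round(2))
      print("fluid-pressure ratio 68% interval:", np.percentile(lam[accepted], [16, 84]).round(2))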

  18. AGSM Functional Fault Models for Fault Isolation Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project implements functional fault models (FFMs) to automate the isolation of failures during ground systems operations. FFMs will also be used to recommend sensor placement to improve fault isolation capabilities. The project enables the delivery of system health advisories to ground system operators.

  19. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for actuator bias faults, partial loss-of-effectiveness actuation faults, communication link faults, model uncertainty, and external disturbances. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a multi-robot-arm cooperative control test-bed is developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  1. Seismic images and fault relations of the Santa Monica thrust fault, West Los Angeles, California

    USGS Publications Warehouse

    Catchings, R.D.; Gandhok, G.; Goldman, M.R.; Okaya, D.

    2001-01-01

    In May 1997, the US Geological Survey (USGS) and the University of Southern California (USC) acquired high-resolution seismic reflection and refraction images on the grounds of the Wadsworth Veterans Administration Hospital (WVAH) in the city of Los Angeles (Fig. 1a,b). The objective of the seismic survey was to better understand the near-surface geometry and faulting characteristics of the Santa Monica fault zone. In this report, we present seismic images, an interpretation of those images, and a comparison of our results with those from studies by Dolan and Pratt (1997), Pratt et al. (1998), and Gibbs et al. (2000). The Santa Monica fault is one of several northeast-southwest-trending, north-dipping reverse faults that extend through the Los Angeles metropolitan area (Fig. 1a). Through much of the area, the Santa Monica fault trends subparallel to the Hollywood fault, but the two faults apparently join into a single fault zone to the southwest and to the northeast (Dolan et al., 1995). The Santa Monica and Hollywood faults may be part of a larger fault system that extends from the Pacific Ocean to the Transverse Ranges. Crook et al. (1983) refer to this fault system as the Malibu Coast-Santa Monica-Raymond-Cucamonga fault system. They suggest that these faults have not formed a contiguous zone since the Pleistocene and conclude that each of the faults should be treated as a separate fault with respect to seismic hazards. However, Dolan et al. (1995) suggest that the Hollywood and Santa Monica faults are capable of generating Mw 6.8 and Mw 7.0 earthquakes, respectively. Thus, regardless of whether the overall fault system is connected and capable of rupturing in one event, each of the faults individually presents a sizable earthquake hazard to the Los Angeles metropolitan area. If these faults are connected and were to rupture together in a single continuous event, the resulting hazard would be even greater. Although the Santa Monica fault represents

  2. A PC based fault diagnosis expert system

    NASA Technical Reports Server (NTRS)

    Marsh, Christopher A.

    1990-01-01

    The Integrated Status Assessment (ISA) prototype expert system performs system level fault diagnosis using rules and models created by the user. The ISA evolved from concepts to a stand-alone demonstration prototype using OPS5 on a LISP Machine. The LISP based prototype was rewritten in C and the C Language Integrated Production System (CLIPS) to run on a Personal Computer (PC) and a graphics workstation. The ISA prototype has been used to demonstrate fault diagnosis functions of Space Station Freedom's Operation Management System (OMS). This paper describes the development of the ISA prototype from early concepts to the current PC/workstation version used today and describes future areas of development for the prototype.

  3. Determination of Aseismic Creep or Strain Field on the Main Marmara Fault

    NASA Astrophysics Data System (ADS)

    Özbey, V.; Yavasoglu, H.; Masson, F.; Klein, E.; Alkan, M. N.; Alkan, R. M.

    2016-12-01

    Plate motions affecting the Earth's crust have occurred for millions of years, and the strain accumulation they drive is now commonly monitored with GPS. The North Anatolian Fault (NAF) Zone, one of the fastest-slipping faults in the world, extends across all of northern Anatolia from Bingöl to the Gulf of Saros. Several destructive earthquakes occurred on it in the last century, such as the Izmit (1999, Mw=7.4) and Duzce (1999, Mw=7.2) events. The NAFZ divides into southern and northern branches east of the Marmara region, and the northern branch (the Main Marmara Fault, MMF) crosses the Marmara Sea, running from the Gulf of Izmit-Adapazarı to the Gulf of Saros. According to recent studies, the MMF is the largest unbroken part of the fault and is divided into segments (among which are the Central Marmara (CM) and Prince's Island (PI) segments). Determining the deformation accumulated on the MMF has become extremely important, especially after the 1999 Izmit earthquake. Recent studies have demonstrated that the Prince's Island segment is fully locked. However, studies focused on the Central Marmara segment, located offshore Istanbul, a giant metropolis with a population of more than 14 million, do not agree on the presence of a seismic gap capable of generating a big earthquake. Therefore, in this study, a new GPS network will be established at short and long distances from the Main Marmara Fault to densify the existing GPS network, and several measurement campaigns will be carried out to compute a velocity field. The velocity field will reveal the compression across the fault and variations in the accumulation rate along it. The amount of aseismic creep at depth on the fault will also be determined using an elastic displacement modeling method, allowing us to determine whether a seismic gap exists on the Main Marmara Fault or whether the slip deficit is released by aseismic deformation.
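
    The elastic displacement modeling is not detailed in this summary; a common minimal choice for an infinitely long strike-slip fault (assumed here purely as an illustration) is the screw-dislocation profile of Savage and Burford (1973), in which the interseismic fault-parallel velocity at perpendicular distance x from a fault slipping at rate s below locking depth D is v(x) = (s/pi) * arctan(x/D):

      import numpy as np

      def interseismic_velocity(x_km, slip_mm_yr, locking_km):
          """Fault-parallel velocity (mm/yr) at perpendicular distance x (km)."""
          return slip_mm_yr / np.pi * np.arctan(x_km / locking_km)

      # Hypothetical GPS profile across a fully locked fault:
      x = np.linspace(-100, 100, 9)
      print(np.round(interseismic_velocity(x, slip_mm_yr=25, locking_km=15), 1))
      # Shallower locking (or surface creep) sharpens the step across the fault,
      # which is how campaign velocities can discriminate locked from creeping patches.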

  4. Thermo-Hydro-Micro-Mechanical 3D Modeling of a Fault Gouge During Co-seismic Slip

    NASA Astrophysics Data System (ADS)

    Papachristos, E.; Stefanou, I.; Sulem, J.; Donze, F. V.

    2017-12-01

    A coupled Thermo-Hydro-Micro-Mechanical (THMM) model based on the Discrete Element Method (DEM) is presented for studying the evolution of fault gouge properties during pre- and co-seismic slip. Modeling the behavior of the fault gouge at the microscale is expected to improve our understanding of the various mechanisms that lead to slip weakening and ultimately control the transition from aseismic to seismic slip. The gouge is modeled as a granular material of spherical particles [1]. Upon loading, the interactions between particles follow a frictional behavior and explicit dynamics. Using regular triangulation, a pore network is defined by the physical pore space between the particles. The network is saturated by a compressible fluid, and flow takes place following Stokes equations. Particle movement leads to pore deformation and thus to local pore-pressure increase. Forces exerted by the fluid on the particles are calculated using mid-step velocities, and these fluid forces are added to the contact forces resulting from the mechanical interactions before the next step. The same semi-implicit, two-way iterative coupling is used for heat exchange through conduction. Simple tests have been performed to verify the model against analytical solutions and experimental results. Furthermore, the model was used to study the effect of temperature on the evolution of effective stress in the system and to highlight the role of thermal pressurization during seismic slip [2, 3]. The analyses are expected to give grounds for enhancing the current state-of-the-art constitutive models of fault friction and to shed light on the evolution of fault zone properties during seismic slip. [1] Omid Dorostkar, Robert A Guyer, Paul A Johnson, Chris Marone, and Jan Carmeliet. On the role of fluids in stick-slip dynamics of saturated granular fault gouge using a coupled computational fluid dynamics-discrete element approach. Journal of Geophysical Research: Solid Earth, 122

  5. Integrating LiDAR Data into Earth Science Education

    NASA Astrophysics Data System (ADS)

    Robinson, S. E.; Arrowsmith, R.; de Groot, R. M.; Crosby, C. J.; Whitesides, A. S.; Colunga, J.

    2010-12-01

    The use of high-resolution topography derived from Light Detection and Ranging (LiDAR) in the study of active tectonics is widespread and has become an indispensable tool for better understanding earthquake hazards. For this reason, and because of the spectacular representation of the phenomena these data provide, it is appropriate to integrate them into the Earth science education curriculum. A collaboration between Arizona State University, the OpenTopography Facility, and the Southern California Earthquake Center is developing three Earth science education products to inform students and other audiences about LiDAR and its application to active tectonics research. First, a 10-minute introductory video titled LiDAR: Illuminating Earthquakes was produced and is freely available online through the OpenTopography portal and SCEC. The second product is an update and enhancement of the Wallace Creek Interpretive Trail website (www.scec.org/wallacecreek). LiDAR topography data products have been added, along with a virtual tour of the offset channels at Wallace Creek built from the B4 LiDAR data within the Google Earth environment. The virtual tour of Wallace Creek is designed as a lab activity for introductory undergraduate geology courses to increase understanding of earthquake hazards through exploration of the dramatic offset created by the San Andreas Fault (SAF) at Wallace Creek and of Global Positioning System-derived displacements spanning the SAF there. This activity is currently being tested in courses at Arizona State University, where the goal of the assessment is to measure student understanding of plate tectonics and earthquakes after completing the activity. Incorporating high-resolution LiDAR topography into the Earth science education curriculum promotes understanding of plate tectonics, faults, and other topics related to earthquake hazards.

  6. Model-based fault detection and isolation for intermittently active faults with application to motion-based thruster fault detection and isolation for spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2008-01-01

    The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
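
    A schematic sketch of this residual logic follows; it is assumed, not the patented implementation: a toy one-dimensional rate model is propagated once per fault-mode hypothesis (nominal, degraded, and failed-off thruster), and the mode whose prediction best matches the measurements over the window is isolated. The patent's derivative-estimation filter is omitted:

      import numpy as np

      def propagate(gain, n=200, dt=0.1):
          """Noise-free toy model: body rate under a unit commanded thruster torque."""
          return np.cumsum(np.full(n, dt * gain))

      rng = np.random.default_rng(3)
      measured = propagate(0.5) + 0.02 * rng.standard_normal(200)  # truth: thruster at 50%

      hypotheses = {"nominal": 1.0, "degraded 50%": 0.5, "failed off": 0.0}
      scores = {mode: np.mean((measured - propagate(gain)) ** 2)
                for mode, gain in hypotheses.items()}
      print("isolated fault mode:", min(scores, key=scores.get))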

  7. Structural Health and Prognostics Management for Offshore Wind Turbines: Sensitivity Analysis of Rotor Fault and Blade Damage with O&M Cost Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myrent, Noah J.; Barrett, Natalie C.; Adams, Douglas E.

    2014-07-01

    Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostics management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and to utilize a state-based cost model to assess the economics associated with use of the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. This methodology was used to investigate two case studies for a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies of rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system. The integration of health monitoring information with O&M cost versus damage/fault severity information provides the initial steps toward identifying processes that reduce operations and maintenance costs for an offshore wind farm while increasing turbine

  8. Slip rates and spatially variable creep on faults of the northern San Andreas system inferred through Bayesian inversion of Global Positioning System data

    USGS Publications Warehouse

    Murray, Jessica R.; Minson, Sarah E.; Svarc, Jerry L.

    2014-01-01

    Fault creep, depending on its rate and spatial extent, is thought to reduce earthquake hazard by releasing tectonic strain aseismically. We use Bayesian inversion and a newly expanded GPS data set to infer the deep slip rates below assigned locking depths on the San Andreas, Maacama, and Bartlett Springs Faults of Northern California and, for the latter two, the spatially variable interseismic creep rate above the locking depth. We estimate deep slip rates of 21.5 ± 0.5, 13.1 ± 0.8, and 7.5 ± 0.7 mm/yr below 16 km, 9 km, and 13 km on the San Andreas, Maacama, and Bartlett Springs Faults, respectively. We infer that on average the Bartlett Springs fault creeps from the Earth's surface to 13 km depth, and below 5 km the creep rate approaches the deep slip rate. This implies that microseismicity may extend below the locking depth; however, we cannot rule out the presence of locked patches in the seismogenic zone that could generate moderate earthquakes. Our estimated Maacama creep rate, while comparable to the inferred deep slip rate at the Earth's surface, decreases with depth, implying a slip deficit exists. The Maacama deep slip rate estimate, 13.1 mm/yr, exceeds long-term geologic slip rate estimates, perhaps due to distributed off-fault strain or the presence of multiple active fault strands. While our creep rate estimates are relatively insensitive to choice of model locking depth, insufficient independent information regarding locking depths is a source of epistemic uncertainty that impacts deep slip rate estimates.

  9. Seismic Hazard and Fault Length

    NASA Astrophysics Data System (ADS)

    Black, N. M.; Jackson, D. D.; Mualchin, L.

    2005-12-01

    If mx is the largest earthquake magnitude that can occur on a fault, then what is mp, the largest magnitude that should be expected during the planned lifetime of a particular structure? Most approaches to these questions rely on an estimate of the Maximum Credible Earthquake, obtained by regression (e.g. Wells and Coppersmith, 1994) of fault length (or area) against magnitude. Our work differs in two ways. First, we modify the traditional approach to measuring fault length to allow for hidden fault complexity and multi-fault rupture. Second, we use a magnitude-frequency relationship to calculate the largest magnitude expected to occur within a given time interval. Fault length is often poorly defined, and multiple faults commonly rupture together in a single event; therefore, we need to expand the definition of a mapped fault length to obtain a more accurate estimate of the maximum magnitude. In previous work, we compared fault length vs. rupture length for post-1975 earthquakes in Southern California and found that mapped fault length and rupture length are often unequal; in several cases rupture broke beyond the previously mapped fault traces. To expand the geologic definition of fault length we outlined several guidelines: 1) if a fault truncates at young Quaternary alluvium, the fault line should be inferred underneath the younger sediments; 2) faults striking within 45° of one another should be treated as a continuous fault line; and 3) a step-over can link together faults at least 5 km apart. These definitions were applied to fault lines in Southern California. For example, many of the along-strike fault lines in the Mojave Desert are treated as a single fault trending from the Pinto Mountain fault to the Garlock fault. In addition, the Rose Canyon and Newport-Inglewood faults are treated as a single fault line. We used these more generous fault lengths, and the Wells and Coppersmith regression, to estimate the maximum magnitude (mx) for the major faults in
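
    The second step, converting a magnitude-frequency relationship into the largest magnitude expected in a planned lifetime, is a one-line calculation once Gutenberg-Richter parameters are chosen. The values below (a, b, mx, T) are illustrative assumptions, not the paper's:

      import numpy as np

      a, b = 4.0, 1.0    # assumed Gutenberg-Richter law: log10 N(>=m) = a - b*m per year
      mx   = 7.8         # maximum magnitude from the expanded fault-length definition
      T    = 50.0        # planned structure lifetime (years)

      # mp satisfies N(>=mp) * T = 1, capped at the fault's maximum magnitude mx:
      mp = min((a + np.log10(T)) / b, mx)
      print(f"largest magnitude expected in {T:.0f} years: mp = {mp:.2f}")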

  10. A multilayer model of time dependent deformation following an earthquake on a strike-slip fault

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1981-01-01

    A multilayer finite element model of the Earth for calculating time-dependent deformation and stress following an earthquake on a strike-slip fault is discussed. The model involves shear properties of an elastic upper lithosphere, a standard-linear-solid viscoelastic lower lithosphere, a Maxwell viscoelastic asthenosphere, and an elastic mesosphere. Systematic variations of fault and layer depths, and comparisons with simpler elastic-lithosphere-over-viscoelastic-asthenosphere calculations, are analyzed. Both the creep of the lower lithosphere and that of the asthenosphere contribute to the postseismic deformation. The magnitude of the deformation is enhanced by a short distance between the bottom of the fault (slip zone) and the top of the creep region, but is less sensitive to the thickness of the creeping layer. Postseismic restressing is increased as the lower lithosphere becomes more viscoelastic, but the tendency for the width of the restressed zone to grow with time is retarded.

  11. Semantics-enabled knowledge management for global Earth observation system of systems

    NASA Astrophysics Data System (ADS)

    King, Roger L.; Durbha, Surya S.; Younan, Nicolas H.

    2007-10-01

    The Global Earth Observation System of Systems (GEOSS) is a distributed system of systems built on current international cooperation efforts among existing Earth observing and processing systems. The goal is to formulate an end-to-end process that enables the collection and distribution of accurate, reliable Earth observation data, information, products, and services to both suppliers and consumers worldwide. One of the critical components in the development of such systems is the ability to obtain seamless access to data across geopolitical boundaries. To gain the support and willingness of countries around the world to participate in such an endeavor, it is necessary to devise mechanisms whereby the data and the intellectual capital are protected through procedures that implement the policies specific to a country. Earth observations (EO) are obtained from a multitude of sources and require coordination among different agencies and user groups to come to a shared understanding of the set of concepts involved in a domain. It is envisaged that the volume of data and information in a GEOSS context will be unprecedented, and current data archiving and delivery methods will need to be transformed into ones that allow seamless interoperability. EO data integration is thus dependent on the resolution of conflicts arising from a variety of areas. Modularization is inevitable in distributed environments to facilitate flexible and efficient reuse of existing ontologies. Therefore, we propose a modular-ontology-based knowledge management framework for GEOSS and present methods to enable efficient reasoning in such systems.

  12. Developing sub 5-m LiDAR DEMs for forested sections of the Alpine and Hope faults, South Island, New Zealand: Implications for structural interpretations

    NASA Astrophysics Data System (ADS)

    Langridge, R. M.; Ries, W. F.; Farrier, T.; Barth, N. C.; Khajavi, N.; De Pascale, G. P.

    2014-07-01

    Kilometre-wide airborne light detection and ranging (LiDAR) surveys were collected along portions of the Alpine and Hope faults in New Zealand to assess the potential for generating sub-5-m bare-earth digital elevation models (DEMs) from ground-return data in areas of dense rainforest (bush) cover as an aid to mapping these faults. The 34-km-long Franz-Whataroa LiDAR survey was flown along the densely vegetated central-most portion of the transpressive Alpine Fault. Six closely spaced flight lines (200 m apart) yielded survey coverage with double overlap of swath collection, which was considered necessary due to the low density of ground returns (0.16 points/m2, or one point every ~6 m2) under mature West Coast podocarp-broadleaf rainforest. This average point spacing (~2.5 m) allowed the generation of a robust, high-quality 3-m bare-earth DEM. The DEM confirmed the zigzagged form of the surface trace of the Alpine Fault in this area, originally recognised by Norris and Cooper (1995, 1997), and highlights that the strike of the surface trace varies more than previously mapped. The 29-km-long Hurunui-Hope LiDAR survey was flown east of the Main Divide of the Southern Alps along the dextral-slip Hope Fault, where the terrain is characterised by lower rainfall and more open beech forest. Flight-line spacings of ~275 m were used to generate a DEM from the ground-return data. The average ground-return density under beech forest was 0.27 points/m2 and yielded an estimated cell size suitable for a 2-m DEM. In both cases the LiDAR revealed unprecedented views of the surface geomorphology of these active faults. Lessons learned from our survey methodologies can be employed to plan cost-effective, high-gain airborne surveys that yield bare-earth DEMs beneath vegetated terrain and multi-storeyed canopies in densely forested environments across New Zealand and worldwide.
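
    The survey-planning arithmetic implied above is compact enough to sketch; the rule of thumb used here (roughly one ground return per DEM cell, so cell size is about 1/sqrt(density)) is an assumption for illustration:

      import math

      def min_dem_cell(ground_returns_per_m2, returns_per_cell=1.0):
          """Approximate smallest sensible DEM cell size (m) for a ground-return density."""
          return math.sqrt(returns_per_cell / ground_returns_per_m2)

      for forest, density in [("podocarp-broadleaf (Franz-Whataroa)", 0.16),
                              ("beech (Hurunui-Hope)", 0.27)]:
          print(f"{forest}: mean point spacing ~ {min_dem_cell(density):.1f} m")

      # Prints ~2.5 m and ~1.9 m, consistent with the 3-m and 2-m DEMs described above.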

  13. The role of faulting on surface deformation patterns from pumping-induced groundwater flow (Las Vegas Valley, USA)

    NASA Astrophysics Data System (ADS)

    Hernandez-Marin, Martin; Burbey, Thomas J.

    2009-12-01

    Land subsidence and earth fissuring can cause damage in semiarid urbanized valleys where pumping exceeds natural recharge. In places such as Las Vegas Valley (USA), Quaternary faults play an important role in the surface deformation patterns by constraining the migration of land subsidence and creating complex relationships with surface fissures. These fissures typically result from horizontal displacements that occur in zones where extensional stress derived from groundwater flow exceeds the tensile strength of the near-surface sediments. A series of hypothetical numerical models, using the finite-element code ABAQUS and based on the observed conditions of the Eglington Fault zone, was developed. The models reproduced (1) long-term natural recharge and discharge, (2) heavy pumping, and (3) the incorporation of artificial recharge, reflecting the conditions of Las Vegas Valley. The simulated hydrostratigraphy consists of three aquifers, two aquitards, and a relatively dry vadose zone, plus a normal fault zone representing the Quaternary Eglington fault. Numerical results suggest that a 100-m-wide fault zone composed of sand-like material produces (1) conditions most similar to those observed in Las Vegas Valley and (2) the most favorable conditions for fissures to develop at the surface adjacent to the fault zone.

  14. Active faulting at Delphi, Greece: Seismotectonic remarks and a hypothesis for the geologic environment of a myth

    NASA Astrophysics Data System (ADS)

    Piccardi, Luigi

    2000-07-01

    Historical data are fundamental to understanding the seismic history of an area. At the same time, knowledge of active tectonic processes allows us to understand how earthquakes were perceived by past cultures. Delphi is one of the principal archaeological sites of Greece, the main oracle of Apollo, and was by far the most venerated oracle of the ancient Greek world. According to tradition, the mantic properties of the oracle were obtained from an open chasm in the earth. Delphi lies directly above one of the main antithetic active faults of the Gulf of Corinth Rift, which bounds Mount Parnassus to the south. The geometry of the fault and slip-parallel lineations on the main fault plane indicate normal movement with a minor right-lateral slip component. Combining tectonic data, archaeological evidence, historical sources, and a reexamination of myths, it appears that the Helice earthquake of 373 B.C. ruptured not only the master fault of the Gulf of Corinth Rift at Helice but also the antithetic fault at Delphi, similarly to the Corinth earthquake of 1981. Moreover, the presence of an active fault directly below the temples of the oldest sanctuary suggests that the mythological oracular chasm might well have been an ancient tectonic surface rupture.

  15. The Design of a Fault-Tolerant COTS-Based Bus Architecture for Space Applications

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Tai, Ann T.

    2000-01-01

    The high-performance, scalability and miniaturization requirements together with the power, mass and cost constraints mandate the use of commercial-off-the-shelf (COTS) components and standards in the X2000 avionics system architecture for deep-space missions. In this paper, we report our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. While the COTS standard IEEE 1394 adequately supports power management, high performance and scalability, its topological criteria impose restrictions on fault tolerance realization. To circumvent the difficulties, we derive a "stack-tree" topology that not only complies with the IEEE 1394 standard but also facilitates fault tolerance realization in a spaceborne system with limited dedicated resource redundancies. Moreover, by exploiting pertinent standard features of the 1394 interface which are not purposely designed for fault tolerance, we devise a comprehensive set of fault detection mechanisms to support the fault-tolerant bus architecture.

  16. The Chaîne des Puys and Limagne Fault World Heritage project: a global partnership for raising the profile of monogenetic volcanism and rifting

    NASA Astrophysics Data System (ADS)

    Olive-Garcia, C.

    2013-12-01

    The present Chaîne des Puys and Limagne Fault World Heritage project represents a global partnership for raising the profile of monogenetic volcanism and rifting. From the 19th century, the Chaîne des Puys and Limagne Fault have been at the centre of discussion about the nature of volcanoes and the origin of rifts. Part of this interest was due to the action of landowners and government agents such as Montlosier and Desmarest (who first realised that the chain was volcanic), and national leaders such as Napoleon I, who was instrumental in the visit of Humphry Davy and Michael Faraday in 1805. The chain features prominently in Scrope's 'Considerations on Volcanoes' (1825) and in Bonney's 'Volcanoes: their structure and significance' (1899). The fault escarpment is discussed at length by Lyell in Principles of Geology (1830), although he did not yet recognise it as a rift. The area has seen the development of a modern scientific-government-private partnership in geoscience research and education that has grown in parallel with an earth science centre of excellence, now the Laboratoire Magmas et Volcans. In addition, local owners and users have taken an important part in the development of this partnership to help create sustainable management of the area. Partnerships have been developed with other sites around the world to share best practice, especially in managing inhabited natural sites. For over 30 years the area has been part of the evolving Auvergne Region Natural Volcano Park, and for five years the central Puy de Dôme has been a 'Grand Site de France', equivalent to a national monument. Educational attractions grew up first as private-scientific partnerships (e.g. Lemptégy, Volvic, Maison de la Pierre) and then with greater public input, as at Vulcania and the Puy de Dôme. The channelling of visitors has been accomplished by improved access by bus and a new cog-railway up the Puy de Dôme. I present an overview of the UNESCO project, and show

  17. Approach to Managing MEaSUREs Data at the GSFC Earth Science Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Vollmer, Bruce; Kempler, Steven J.; Ramapriyan, Hampapuram K.

    2009-01-01

    A major need stated by the NASA Earth science research strategy is to develop long-term, consistent, and calibrated data and products that are valid across multiple missions and satellite sensors (NASA Solicitation for Making Earth System data records for Use in Research Environments (MEaSUREs) 2006-2010). Selected projects create long-term records of a given parameter, called Earth Science Data Records (ESDRs), based on mature algorithms that bring together continuous multi-sensor data. ESDRs and their associated algorithms, vetted by the appropriate community, are placed at a NASA-affiliated data center for archiving, stewardship, and distribution. See http://measures-projects.gsfc.nasa.gov/ for more details. This presentation describes the NASA GSFC Earth Science Data and Information Services Center (GES DISC) approach to managing the MEaSUREs ESDR datasets assigned to GES DISC (energy/water-cycle-related and atmospheric-composition ESDRs). GES DISC will utilize its experience to integrate existing and proven reusable data management components to accommodate the new ESDRs. Components include a data archive system (S4PA), a data discovery and access system (Mirador), and various web services for data access. In addition, if determined to be useful to the user community, the Giovanni data exploration tool will be made available for the ESDRs. The GES DISC data integration methodology to be used for the MEaSUREs datasets is presented. The goals of this presentation are to share an approach to ESDR integration and to initiate discussions amongst the data centers, data managers and data providers for the purpose of gaining efficiencies in data management for MEaSUREs projects.

  18. Teleseismic body waves from dynamically rupturing shallow thrust faults: Are they opaque for surface-reflected phases?

    USGS Publications Warehouse

    Smith, D.E.; Aagaard, Brad T.; Heaton, T.H.

    2005-01-01

    We investigate whether a shallow-dipping thrust fault is prone to wave-slip interactions via surface-reflected waves affecting the dynamic slip. If so, can these interactions create faults that are opaque to radiated energy? Furthermore, in this case of a shallow-dipping thrust fault, can incorrectly assuming a transparent fault while using dislocation theory lead to underestimates of seismic moment? Slip time histories are generated in three-dimensional dynamic rupture simulations while allowing for varying degrees of wave-slip interaction controlled by fault-friction models. Based on the slip time histories, P and SH seismograms are calculated for stations at teleseismic distances. The overburdening pressure caused by gravity eliminates mode I opening except at the tip of the fault near the surface; hence, mode I opening has no effect on the teleseismic signal. Normalizing by a Haskell-like traditional kinematic rupture, we find teleseismic peak-to-peak displacement amplitudes are approximately 1.0 for both P and SH waves, except for the unrealistic case of zero sliding friction. Zero sliding friction has peak-to-peak amplitudes of 1.6 for P and 2.0 for SH waves; the fault slip oscillates about its equilibrium value, resulting in a large nonzero (0.08 Hz) spectral peak not seen in other ruptures. These results indicate wave-slip interactions associated with surface-reflected phases in real earthquakes should have little to no effect on teleseismic motions. Thus, Haskell-like kinematic dislocation theory (transparent fault conditions) can be safely used to simulate teleseismic waveforms in the Earth.

  19. Map and database of Quaternary faults in Venezuela and its offshore regions

    USGS Publications Warehouse

    Audemard, F.A.; Machette, M.N.; Cox, J.W.; Dart, R.L.; Haller, K.M.

    2000-01-01

    As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Disaster Reduction. The project is sponsored by the International Lithosphere Program and funded by the USGS’s National Earthquake Hazards Reduction Program. The primary elements of the project are general supervision and interpretation of geologic/tectonic information, data compilation and entry for the fault catalog, database design and management, and digitization and manipulation of data in ARC/INFO. For the compilation of data, we engaged experts in Quaternary faulting, neotectonics, paleoseismology, and seismology.

  20. Earthquake precursory events around epicenters and local active faults; the cases of two inland earthquakes in Iran

    NASA Astrophysics Data System (ADS)

    Valizadeh Alvan, H.; Mansor, S.; Haydari Azad, F.

    2012-12-01

    The possibility of earthquake prediction on time scales of several days to a few minutes before the event has recently stirred interest among researchers. Scientists believe that new theories and explanations of the mechanism of this natural phenomenon are trustworthy and can be the basis of future prediction efforts. During the last thirty years, experimental research has identified pre-earthquake events that are now recognized as confirmed warning signs (precursors) of past known earthquakes. With advances in in-situ measurement devices and data analysis capabilities, and the emergence of satellite-based data collectors, monitoring the earth's surface is now routine. Data providers are supplying researchers from all over the world with high-quality, validated imagery and non-imagery data. Surface Latent Heat Flux (SLHF), the amount of energy exchanged in the form of water vapor between the earth's surface and the atmosphere, has been frequently reported as an earthquake precursor during the past years. The stress accumulated in the earth's crust during the preparation phase of earthquakes is said to be the main cause of temperature anomalies weeks to days before the main event and subsequent shocks. Chemical and physical interactions in the presence of underground water lead to higher water evaporation prior to inland earthquakes. On the other hand, the leakage of radon gas that occurs as rocks break during earthquake preparation causes the formation of airborne ions and higher Air Temperature (AT) prior to the main event. Although co-analysis of direct and indirect observations of precursory events is considered a promising method for future successful earthquake prediction, without proper and thorough knowledge of the geological setting, atmospheric factors and geodynamics of earthquake-prone regions we will not be able to identify anomalies due to seismic activity in the earth's crust. Active faulting is a key factor in identification of the

  1. Pulverization provides a mechanism for the nucleation of earthquakes at low stress on strong faults

    USGS Publications Warehouse

    Felzer, Karen R.

    2014-01-01

    An earthquake occurs when rock that has been deformed under stress rebounds elastically along a fault plane (Gilbert, 1884; Reid, 1911), radiating seismic waves through the surrounding earth. Rupture along the entire fault surface does not occur spontaneously at the same time, however. Rather, the rupture starts in one tiny area, the rupture nucleation zone, and spreads sequentially along the fault. Like a row of dominoes, one bit of rebounding fault triggers the next. This triggering is understood to occur because of the large dynamic stresses at the tip of an active seismic rupture. The importance of these crack tip stresses is a central question in earthquake physics. The crack tip stresses are minimally important, for example, in the time-predictable earthquake model (Shimazaki and Nakata, 1980), which holds that prior to rupture stresses are comparable to fault strength in many locations on the future rupture plane, with bits of variation. The stress/strength ratio is highest at some point, which is where the earthquake nucleates. This model does not require any special conditions or processes at the nucleation site; the whole fault is essentially ready for rupture at the same time. The fault tip stresses ensure that the rupture occurs as a single rapid earthquake, but the fact that fault tip stresses are high is not particularly relevant since the stress at most points does not need to be raised by much. Under this model it should technically be possible to forecast earthquakes based on the stress-renewal concept, or estimates of when the fault as a whole will reach the critical stress level, a practice used in official hazard mapping (Field, 2008). This model also indicates that physical precursors may be present and detectable, since stresses are unusually high over a significant area before a large earthquake.

  2. Frictional heterogeneities on carbonate-bearing normal faults: Insights from the Monte Maggio Fault, Italy

    NASA Astrophysics Data System (ADS)

    Carpenter, B. M.; Scuderi, M. M.; Collettini, C.; Marone, C.

    2014-12-01

    Observations of heterogeneous and complex fault slip are often attributed to the complexity of fault structure and/or spatial heterogeneity of fault frictional behavior. Such complex slip patterns have been observed for earthquakes on normal faults throughout central Italy, where many of the Mw 6 to 7 earthquakes in the Apennines nucleate at depths where the lithology is dominated by carbonate rocks. To explore the relationship between fault structure and heterogeneous frictional properties, we studied the exhumed Monte Maggio Fault, located in the northern Apennines. We collected intact specimens of the fault zone, including the principal slip surface and hanging wall cataclasite, and performed experiments at a normal stress of 10 MPa under saturated conditions. Experiments designed to reactivate slip between the cemented principal slip surface and cataclasite show a 3 MPa stress drop as the fault surface fails, then velocity-neutral frictional behavior and significant frictional healing. Overall, our results suggest that (1) earthquakes may readily nucleate in areas of the fault where the slip surface separates massive limestone and are likely to propagate in areas where fault gouge is in contact with the slip surface; (2) postseismic slip is more likely to occur in areas of the fault where gouge is present; and (3) high rates of frictional healing and low creep relaxation observed between solid fault surfaces could lead to significant aftershocks in areas of low stress drop.

  3. NASA's EOSDIS Cumulus: Ingesting, Archiving, Managing, and Distributing Earth Science Data from the Commercial Cloud

    NASA Technical Reports Server (NTRS)

    Baynes, Katie; Ramachandran, Rahul; Pilone, Dan; Quinn, Patrick; Gilman, Jason; Schuler, Ian; Jazayeri, Alireza

    2017-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) has been working towards a vision of a cloud-based, highly flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk will outline the motivation for this work, present the achievements and hurdles of the past 18 months, and chart a course for the future expansion of Cumulus. We will explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud, and how we are rising to meet those challenges through open collaboration and intentional stakeholder engagement.

  4. Late Quaternary faulting along the Death Valley-Furnace Creek fault system, California and Nevada

    USGS Publications Warehouse

    Brogan, George E.; Kellogg, Karl; Slemmons, D. Burton; Terhune, Christina L.

    1991-01-01

    The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to the amount of horizontal offset of stream channels, and the relationships of both scarps and channels to the ages of different geomorphic surfaces, demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin. The largest late Quaternary scarps along the Furnace Creek fault zone, with vertical separation of late Pleistocene surfaces of as much as 64 m (meters), are in Fish Lake Valley. Despite the predominance of normal faulting along the Death Valley fault zone, vertical offset of late Pleistocene surfaces along that zone apparently does not exceed about 15 m. Evidence for four to six separate late Holocene faulting events along the Furnace Creek fault zone, and three or more late Holocene events along the Death Valley fault zone, is indicated by rupturing of Q1B (about 200-2,000 years old) geomorphic surfaces. Probably the youngest neotectonic features observed along the Death Valley-Furnace Creek fault system, possibly historic in age, are vegetation lineaments in southernmost Fish Lake Valley. Near-historic faulting in Death Valley, within several kilometers south of Furnace Creek Ranch, is represented by (1) a 2,000-year-old lake shoreline that is cut by sinuous scarps, and (2) a system of young scarps with free-faceted faces (representing several faulting

  5. Audio-frequency magnetotelluric imaging of the Hijima fault, Yamasaki fault system, southwest Japan

    NASA Astrophysics Data System (ADS)

    Yamaguchi, S.; Ogawa, Y.; Fuji-Ta, K.; Ujihara, N.; Inokuchi, H.; Oshiman, N.

    2010-04-01

    An audio-frequency magnetotelluric (AMT) survey was undertaken at ten sites along a transect across the Hijima fault, a major segment of the Yamasaki fault system, Japan. The data were subjected to dimensionality analysis, following which two-dimensional inversions for the TE and TM modes were carried out. The resulting model is characterized by (1) a clear resistivity boundary that coincides with the downward projection of the surface trace of the Hijima fault, (2) a resistive zone (>500 Ω m) that corresponds to Mesozoic sediment, and (3) two highly conductive zones (30-40 Ω m) along the fault, one shallow and one deep. The shallow conductive zone is a common feature of the Yamasaki fault system, whereas the deep conductor is a newly discovered feature at depths of 800-1,800 m to the southwest of the fault. The deep conductor is truncated by the Hijima fault to the northeast, and its upper boundary is the resistive zone. Both conductors are interpreted to represent a combination of clay minerals and a fluid network within a fault-related fracture zone. In terms of the development of the fluid networks, the fault core of the Hijima fault and the highly resistive zone may play important roles as barriers to fluid flow on the northeast and upper sides of the conductive zones, respectively.

  6. Microstructures imply cataclasis and authigenic mineral formation control geomechanical properties of New Zealand's Alpine Fault

    NASA Astrophysics Data System (ADS)

    Schuck, B.; Janssen, C.; Schleicher, A. M.; Toy, V. G.; Dresen, G.

    2018-05-01

    The Alpine Fault is capable of generating large (Mw > 8) earthquakes, is the main geohazard on South Island, NZ, and is late in its 250-291-year seismic cycle. To minimize its hazard potential, it is indispensable to identify and understand the processes influencing the geomechanical behavior and strength evolution of the fault. High-resolution microstructural, mineralogical and geochemical analyses of the Alpine Fault's core demonstrate wall-rock fragmentation, assisted by mineral dissolution, and cementation resulting in the formation of a fine-grained principal slip zone (PSZ). A complex network of anastomosing and mutually cross-cutting calcite veins implies that faulting occurred during episodes of dilation, slip and sealing. Fluid-assisted dilatancy leads to a significant volume increase accommodated by vein formation in the fault core. Undeformed euhedral chlorite crystals and calcite veins that cut footwall gravels demonstrate that these processes occurred very close to the Earth's surface. Microstructural evidence indicates that cataclastic processes dominate the deformation, and we suggest that powder lubrication and grain rolling, particularly influenced by abundant nanoparticles, play a key role in the fault core's velocity-weakening behavior rather than frictional sliding. This is further supported by the absence of smectite, which is reasonable given recently measured geothermal gradients of more than 120 °C km⁻¹ and the impermeable nature of the PSZ, which both limit the growth of this phase and restrict its stability to shallow depths. Our observations demonstrate that high-temperature fluids can influence authigenic mineral formation and thus control the fault's geomechanical behavior and the cyclic evolution of its strength.

  7. Finite element models of earthquake cycles in mature strike-slip fault zones

    NASA Astrophysics Data System (ADS)

    Lynch, John Charles

    The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, there are two main questions posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great earthquake producing faults separated by a freely-slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10¹⁸ Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a

  8. Quantifying Vertical Exhumation in Intracontinental Strike-Slip Faults: the Garlock fault zone, southern California

    NASA Astrophysics Data System (ADS)

    Chinn, L.; Blythe, A. E.; Fendick, A.

    2012-12-01

    New apatite fission-track ages show varying rates of vertical exhumation at the eastern terminus of the Garlock fault zone. The Garlock fault zone is a 260 km long, east-northeast-striking strike-slip fault with as much as 64 km of sinistral offset. It terminates in the east in the Avawatz Mountains, at the intersection with the dextral Southern Death Valley fault zone. Although motion along the Garlock fault west of the Avawatz Mountains is considered purely strike-slip, uplift and exhumation of bedrock in the Avawatz Mountains south of the Garlock fault, as recently as 5 Ma, indicate that transpression plays an important role at this location, perhaps related to a restraining bend as the fault wraps around and terminates southeastward along the Avawatz Mountains. In this study we complement extant thermochronometric ages from within the Avawatz core with new low-temperature fission-track ages from samples collected within the adjacent Garlock and Southern Death Valley fault zones. These thermochronometric data indicate that vertical exhumation rates vary within the fault zone. Two Miocene ages (10.2 (+5.0/-3.4) Ma, 9.0 (+2.2/-1.8) Ma) indicate at least ~3.3 km of vertical exhumation at ~0.35 mm/yr, assuming a 30°C/km geothermal gradient, along a 2 km transect parallel and adjacent to the Mule Spring fault. An older Eocene age (42.9 (+8.7/-7.3) Ma) indicates ~3.3 km of vertical exhumation at ~0.08 mm/yr. These results are consistent with published exhumation rates of 0.35 mm/yr between ~7 and ~4 Ma and 0.13 mm/yr between ~15 and ~9 Ma, as determined by apatite fission-track and U-Th/He thermochronometry in the hanging wall of the Mule Spring fault. Similar exhumation rates on both sides of the Mule Spring fault support three separate models: 1) thrusting is no longer active along the Mule Spring fault; 2) faulting is dominantly strike-slip at the sample locations; or 3) Miocene-present uplift and exhumation is below detection levels
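
    The rates quoted above follow from simple closure-depth arithmetic, reproduced below as a minimal worked check in Python. The closure temperature (~110 °C for apatite fission track) and surface temperature (~10 °C) are commonly used illustrative values assumed here, not stated in the abstract; only the 30°C/km gradient is from the text, and 9.5 Ma stands in for the pair of Miocene ages.

      # Worked check of the exhumation arithmetic (illustrative values).
      closure_temp_c = 110.0       # assumed apatite fission-track closure temperature
      surface_temp_c = 10.0        # assumed mean surface temperature
      gradient_c_per_km = 30.0     # geothermal gradient stated in the abstract

      closure_depth_km = (closure_temp_c - surface_temp_c) / gradient_c_per_km
      print(f"closure depth ~ {closure_depth_km:.1f} km")        # ~3.3 km

      # 1 km/Ma equals 1 mm/yr, so rate is simply depth divided by age.
      for age_ma in (9.5, 42.9):   # ~mean Miocene age, and the Eocene age
          print(f"{age_ma} Ma -> ~{closure_depth_km / age_ma:.2f} mm/yr")  # 0.35, 0.08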

  9. Aftershocks of the 2014 South Napa, California, Earthquake: Complex faulting on secondary faults

    USGS Publications Warehouse

    Hardebeck, Jeanne L.; Shelly, David R.

    2016-01-01

    We investigate the aftershock sequence of the 2014 MW6.0 South Napa, California, earthquake. Low-magnitude aftershocks missing from the network catalog are detected by applying a matched-filter approach to continuous seismic data, with the catalog earthquakes serving as the waveform templates. We measure precise differential arrival times between events, which we use for double-difference event relocation in a 3D seismic velocity model. Most aftershocks are deeper than the mainshock slip, and most occur west of the mapped surface rupture. While the mainshock coseismic and postseismic slip appears to have occurred on the near-vertical, strike-slip West Napa fault, many of the aftershocks occur in a complex zone of secondary faulting. Earthquake locations in the main aftershock zone, near the mainshock hypocenter, delineate multiple dipping secondary faults. Composite focal mechanisms indicate strike-slip and oblique-reverse faulting on the secondary features. The secondary faults were moved towards failure by Coulomb stress changes from the mainshock slip. Clusters of aftershocks north and south of the main aftershock zone exhibit vertical strike-slip faulting more consistent with the West Napa Fault. The northern aftershocks correspond to the area of largest mainshock coseismic slip, while the main aftershock zone is adjacent to the fault area that has primarily slipped postseismically. Unlike most creeping faults, the zone of postseismic slip does not appear to contain embedded stick-slip patches that would have produced on-fault aftershocks. The lack of stick-slip patches along this portion of the fault may contribute to the low productivity of the South Napa aftershock sequence.

  10. Fault linkage and continental breakup

    NASA Astrophysics Data System (ADS)

    Cresswell, Derren; Lymer, Gaël; Reston, Tim; Stevenson, Carl; Bull, Jonathan; Sawyer, Dale; Morgan, Julia

    2017-04-01

    The magma-poor rifted margin off the west coast of Galicia (NW Spain) has provided some of the key observations in the development of models describing the final stages of rifting and continental breakup. In 2013, we collected a 68 x 20 km 3D seismic survey across the Galicia margin, NE Atlantic. Processing through to 3D pre-stack time migration (12.5 m bin size) and 3D depth conversion reveals the key structures, including an underlying detachment fault (the S detachment) and the intra-block and inter-block faults. These data reveal multiple phases of faulting that overlap spatially and temporally and have thinned the crust to between zero and a few kilometers thickness, producing 'basement windows' where the crustal basement has been completely pulled apart and sediments lie directly on the mantle. Two approximately N-S trending fault systems are observed: 1) a margin-proximal system of two linked faults that are the upward extension (breakaway faults) of the S; in the south they form one surface that splays northward to form two faults with an intervening fault block. These faults were thus demonstrably active at one time rather than sequentially. 2) An oceanward relay structure that shows clear along-strike linkage. Faults within the relay trend NE-SW and heavily dissect the basement. The main block-bounding faults can be traced from the S detachment through the basement into, and heavily deforming, the syn-rift sediments, where they die out, suggesting that the faults propagated up from the S detachment surface. Analysis of the fault heaves and associated maps at different structural levels shows complementary fault systems. The pattern of faulting suggests a variation in the main tectonic transport direction moving oceanward. This might be interpreted as a temporal change during sequential faulting; however, the transfer of extension between faults and the lateral variability of fault blocks suggest that many of the faults across the 3D volume were active at least in part

  11. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it efficiently combines evidence from different sensors. However, in situations where the evidence is highly conflicting, it may produce counterintuitive results. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to the evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
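
    The fusion recipe described above (discount each report by a reliability weight, then combine with Dempster's rule) can be sketched compactly. The Python toy below is an illustration under stated assumptions, not the authors' code: the reliability weights are hard-coded rather than derived from the evidence distance and belief entropy, and the mass functions are invented.

      # Weighted averaging of evidence followed by Dempster's rule (toy example).
      from itertools import product

      def dempster(m1, m2):
          """Combine two mass functions (dicts of frozenset -> mass)."""
          combined, conflict = {}, 0.0
          for (a, x), (b, y) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + x * y
              else:
                  conflict += x * y
          return {h: v / (1.0 - conflict) for h, v in combined.items()}

      def weighted_average(masses, weights):
          """Discount conflicting reports by sensor reliability before combining."""
          hyps = {h for m in masses for h in m}
          return {h: sum(w * m.get(h, 0.0) for m, w in zip(masses, weights))
                     / sum(weights) for h in hyps}

      F1, F2 = frozenset({"F1"}), frozenset({"F2"})
      reports = [{F1: 0.9, F2: 0.1},   # sensor A points to fault F1
                 {F1: 0.8, F2: 0.2},   # sensor B agrees
                 {F1: 0.1, F2: 0.9}]   # sensor C conflicts with A and B
      weights = [1.0, 1.0, 0.4]        # low weight for the unreliable report
      avg = weighted_average(reports, weights)
      fused = dempster(dempster(avg, avg), avg)  # n-1 combinations for n sensors
      print(max(fused, key=fused.get))           # frozenset({'F1'})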

  12. Probabilistic fault tree analysis of a radiation treatment system.

    PubMed

    Ekaette, Edidiong; Lee, Robert C; Cooke, David L; Iftody, Sandra; Craighead, Peter

    2007-12-01

    Inappropriate administration of radiation for cancer treatment can result in severe consequences such as premature death or appreciably impaired quality of life. There has been little study of vulnerable treatment process components and their contribution to the risk of radiation treatment (RT). In this article, we describe the application of probabilistic fault tree methods to assess the probability of radiation misadministration to patients at a large cancer treatment center. We conducted a systematic analysis of the RT process that identified four process domains: Assessment, Preparation, Treatment, and Follow-up. For the Preparation domain, we analyzed possible incident scenarios via fault trees. For each task, we also identified existing quality control measures. To populate the fault trees we used subjective probabilities from experts and compared results with incident report data. Both the fault tree and the incident report analysis revealed simulation tasks to be most prone to incidents, and the treatment prescription task to be least prone to incidents. The probability of a Preparation domain incident was estimated to be in the range of 0.1-0.7% based on incident reports, which is comparable to the mean value of 0.4% from the fault tree analysis using probabilities from the expert elicitation exercise. In conclusion, an analysis of part of the RT system using a fault tree populated with subjective probabilities from experts was useful in identifying vulnerable components of the system, and provided quantitative data for risk management.
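
    For readers unfamiliar with quantitative fault trees, the gate arithmetic reduces to a few lines under an independence assumption. The Python sketch below uses an invented two-branch subtree loosely patterned on the Preparation domain; every probability in it is illustrative, and only the 0.1-0.7% incident range comes from the abstract.

      # Minimal quantitative fault-tree evaluation, assuming independent events.
      from functools import reduce

      def or_gate(probs):
          """P(at least one event) for independent inputs."""
          return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

      def and_gate(probs):
          """P(all events) for independent inputs."""
          return reduce(lambda acc, p: acc * p, probs, 1.0)

      # Hypothetical subtree: an incident needs a task error AND a missed QC check.
      p_simulation_error = 0.02     # illustrative basic-event probabilities
      p_prescription_error = 0.001
      p_qc_miss = 0.15

      p_incident = or_gate([and_gate([p_simulation_error, p_qc_miss]),
                            and_gate([p_prescription_error, p_qc_miss])])
      print(f"P(incident) ~ {p_incident:.4f}")   # ~0.003, inside the 0.1-0.7% range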

  13. Off-fault tip splay networks: a genetic and generic property of faults indicative of their long-term propagation, and a major component of off-fault damage

    NASA Astrophysics Data System (ADS)

    Perrin, C.; Manighetti, I.; Gaudemer, Y.

    2015-12-01

    Faults grow over the long term by accumulating displacement and lengthening, i.e., propagating laterally. We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimeters to thousands of kilometers and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of parent fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (~30 and ~10% of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). Tip splays more commonly develop on one side only of the parent fault. We infer that tip splay networks are a genetic and generic property of faults, indicative of their long-term propagation. We suggest that they represent the most recent damage off the parent fault, formed during the most recent phase of fault lengthening. The scaling relation between parent fault length and width of the tip splay network implies that damage zones enlarge as parent fault length increases. Elastic properties of host rocks might thus be modified at large distances away from a fault, up to 10% of its length. During an earthquake, a significant fraction of coseismic slip and stress is dissipated into the permanent damage zone that surrounds the causative fault. We infer that coseismic dissipation might occur away from a rupture zone as far as a distance of 10% of the length of its causative fault. Coseismic deformation and stress transfer might thus be significant in broad regions around principal rupture traces. This work has been published in Comptes Rendus Geoscience under doi:10.1016/j.crte.2015.05.002 (http://www.sciencedirect.com/science/article/pii/S1631071315000528).

  14. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    PubMed

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss-of-effectiveness faults. The upper bounds of stuck faults, bias faults and loss-of-effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of online estimations of the bounds and a state-dependent function. The estimations are adjusted online to automatically compensate for the actuator faults. The state-dependent function, solved by using real system data, helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
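
    The four actuator fault classes listed above have a standard parametric form in the FTC literature: the applied input is u_f = ρ·u + b, with outage (ρ = 0, b = 0), stuck (output frozen at a constant), loss of effectiveness (0 < ρ < 1), and bias (ρ = 1, b ≠ 0). The Python sketch below illustrates only this generic fault model, not the paper's controller; all parameter values are arbitrary.

      # Generic actuator fault model u_f = rho * u + b (values are illustrative).
      def actuator_output(u, fault="none", bias=0.5, effectiveness=0.6, stuck_at=0.2):
          if fault == "none":
              return u                      # healthy actuator
          if fault == "outage":
              return 0.0                    # actuator produces nothing
          if fault == "stuck":
              return stuck_at               # output frozen, command ignored
          if fault == "bias":
              return u + bias               # additive offset on the command
          if fault == "loss":
              return effectiveness * u      # partial loss of effectiveness
          raise ValueError(f"unknown fault type: {fault}")

      for f in ("none", "outage", "stuck", "bias", "loss"):
          print(f"{f:7s} -> {actuator_output(1.0, f)}")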

  15. A New Perspective on Fault Geometry and Slip Distribution of the 2009 Dachaidan Mw 6.3 Earthquake from InSAR Observations.

    PubMed

    Liu, Yang; Xu, Caijun; Wen, Yangmao; Fok, Hok Sum

    2015-07-10

    On 28 August 2009, the northern margin of the Qaidam basin in the Tibet Plateau was ruptured by an Mw 6.3 earthquake. This study utilizes Envisat ASAR images from descending Track 319 and ascending Track 455 to capture the coseismic deformation resulting from this event, indicating that the earthquake fault rupture does not reach the earth's surface. We then propose a four-segment fault model to investigate the coseismic deformation by determining the fault parameters, followed by inversion of the slip distribution. The preferred fault model shows that the rupture depths for all four fault planes mainly range from 2.0 km to 7.5 km, comparatively shallower than previous results of up to ~13 km, and that the slip distribution on the fault plane is complex, exhibiting three slip peaks with a maximum of 2.44 m at a depth between 4.1 km and 4.9 km. The inverted geodetic moment is 3.85 × 10¹⁸ N·m (Mw 6.36). The 2009 event may have ruptured unilaterally from the northwest to the southeast, reaching its maximum at the central segment.
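
    As a quick consistency check, the quoted magnitude follows from the geodetic moment via the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.05); the constant varies slightly by convention, and 9.05 is assumed here.

      # Moment magnitude from the geodetic moment reported in the abstract.
      import math

      m0 = 3.85e18                                # geodetic moment, N*m
      mw = (2.0 / 3.0) * (math.log10(m0) - 9.05)  # Hanks-Kanamori relation
      print(f"Mw ~ {mw:.2f}")                     # ~6.36, matching the stated value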

  16. Nonlinear waves in earth crust faults: application to regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Gershenzon, Naum; Bambakidis, Gust

    2015-04-01

    The genesis, development and cessation of regular earthquakes continue to be major problems of modern geophysics. How are earthquakes initiated? What factors determine the rupture velocity, slip velocity, rise time and geometry of rupture? How do accumulated stresses relax after the main shock? These and other questions still need to be answered. In addition, slow slip events have attracted much attention as an additional source for monitoring fault dynamics. Recently discovered phenomena such as deep non-volcanic tremor (NVT), low frequency earthquakes (LFE), very low frequency earthquakes (VLF), and episodic tremor and slip (ETS) have enhanced and complemented our knowledge of fault dynamics. At the same time, these phenomena give rise to new questions about their genesis, properties and relation to regular earthquakes. We have developed a model of macroscopic dry friction which efficiently describes laboratory frictional experiments [1], basic properties of regular earthquakes including post-seismic stress relaxation [3], the occurrence of ambient and triggered NVT [4], and ETS events [5, 6]. Here we will discuss the basics of the model and its geophysical applications. References [1] Gershenzon N.I. & G. Bambakidis (2013) Tribology International, 61, 11-18, http://dx.doi.org/10.1016/j.triboint.2012.11.025 [2] Gershenzon, N.I., G. Bambakidis and T. Skinner (2014) Lubricants 2014, 2, 1-x manuscripts; doi:10.3390/lubricants20x000x; arXiv:1411.1030v2 [3] Gershenzon N.I., Bykov V. G. and Bambakidis G., (2009) Physical Review E 79, 056601 [4] Gershenzon, N. I, G. Bambakidis, (2014a), Bull. Seismol. Soc. Am., 104, 4, doi: 10.1785/0120130234 [5] Gershenzon, N. I., G. Bambakidis, E. Hauser, A. Ghosh, and K. C. Creager (2011), Geophys. Res. Lett., 38, L01309, doi:10.1029/2010GL045225. [6] Gershenzon, N.I. and G. Bambakidis (2014) Bull. Seismol. Soc. Am., (in press); arXiv:1411.1020

  17. A general law of fault wear and its implication to gouge zone evolution

    NASA Astrophysics Data System (ADS)

    Boneh, Yuval; Reches, Ze'ev

    2017-04-01

    -velocity, steady-state sliding. Earth and Planetary Science Letters 381, 127-137. Boneh, Y., Chang, J.C., Lockner, D.A., Reches, Z., 2014. Evolution of Wear and Friction Along Experimental Faults. Pure and Applied Geophysics, 1-17.

  18. On Identifiability of Bias-Type Actuator-Sensor Faults in Multiple-Model-Based Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    This paper explores a class of multiple-model-based fault detection and identification (FDI) methods for bias-type faults in actuators and sensors. These methods employ banks of Kalman-Bucy filters to detect the faults, determine the fault pattern, and estimate the fault values, wherein each Kalman-Bucy filter is tuned to a different failure pattern. Necessary and sufficient conditions are presented for identifiability of actuator faults, sensor faults, and simultaneous actuator and sensor faults. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have biases.
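
    A toy version of such a filter bank is sketched below, with a discrete-time scalar system standing in for the paper's continuous Kalman-Bucy setting. Each filter subtracts a different hypothesized constant sensor bias from the measurements, and the hypothesis that best whitens the innovations earns the smallest normalized-innovation score; all dynamics and noise values are invented for illustration.

      # Bank of scalar Kalman filters, each tuned to a hypothesized sensor bias.
      import math, random
      random.seed(7)

      def innovation_score(zs, us, bias_h, a=0.8, q=0.01, r=0.01):
          x, p, score = 0.0, 1.0, 0.0
          for z, u in zip(zs, us):
              x, p = a * x + u, a * a * p + q        # predict with known input
              innov = (z - bias_h) - x               # residual under this hypothesis
              s = p + r                              # innovation variance
              k = p / s                              # Kalman gain
              x, p = x + k * innov, (1.0 - k) * p    # update
              score += innov * innov / s             # normalized innovation statistic
          return score

      true_bias, x = 0.5, 0.0                        # simulate a +0.5 bias fault
      us = [math.sin(k / 10.0) for k in range(300)]  # known control input
      zs = []
      for u in us:
          x = 0.8 * x + u                            # true state propagation
          zs.append(x + true_bias + random.gauss(0.0, 0.1))

      hypotheses = [0.0, 0.25, 0.5, 0.75]            # one filter per fault pattern
      print(min(hypotheses, key=lambda b: innovation_score(zs, us, b)))  # expect 0.5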

  19. Runtime Verification in Context: Can Optimizing Error Detection Improve Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Dwyer, Matthew B.; Purandare, Rahul; Person, Suzette

    2010-01-01

    Runtime verification has primarily been developed and evaluated as a means of enriching the software testing process. While many researchers have pointed to its potential applicability in online approaches to software fault tolerance, there has been a dearth of work exploring the details of how that might be accomplished. In this paper, we describe how a component-oriented approach to software health management exposes the connections between program execution, error detection, fault diagnosis, and recovery. We identify both research challenges and opportunities in exploiting those connections. Specifically, we describe how recent approaches to reducing the overhead of runtime monitoring aimed at error detection might be adapted to reduce the overhead and improve the effectiveness of fault diagnosis.

  20. Fault Injection Campaign for a Fault Tolerant Duplex Framework

    NASA Technical Reports Server (NTRS)

    Sacco, Gian Franco; Ferraro, Robert D.; von Allmen, Paul; Rennels, Dave A.

    2007-01-01

    Fault tolerance is an efficient approach adopted to avoid or reduce the damage from a system failure. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is software developed by the UCLA group [1, 2] that uses a fault-tolerant approach, running two replicas of the same process on two different nodes of a commercial off-the-shelf (COTS) computer cluster. A third process, running on a different node, constantly monitors the results computed by the two replicas and restarts them if an inconsistency in their computations is detected. This approach is very cost-efficient and can be adopted to control processes on spacecraft, where the fault rate produced by cosmic rays is not very high.
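
    The control flow described above (two replicas plus a comparing monitor that restarts them on disagreement) reduces to a small loop. The Python sketch below is a single-process caricature for illustration only; the real DF distributes the replicas and the monitor across cluster nodes, and the bit-flip fault injection and all names here are invented.

      # Toy duplex scheme: run two replicas, compare, restart on mismatch.
      import random
      random.seed(3)

      def replica(n, flip_chance):
          result = sum(i * i for i in range(n))
          if random.random() < flip_chance:          # emulate a cosmic-ray bit flip
              result ^= 1 << random.randrange(16)
          return result

      def monitored_run(n, max_restarts=5):
          for attempt in range(max_restarts):
              a = replica(n, flip_chance=0.1)
              b = replica(n, flip_chance=0.1)
              if a == b:                             # monitor: replicas agree
                  return a, attempt
          raise RuntimeError("replicas kept disagreeing")

      value, restarts = monitored_run(10_000)
      print(value, "after", restarts, "restart(s)")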

  1. The Denali EarthScope Education Partnership: Creating Opportunities for Learning About Solid Earth Processes in Alaska and Beyond.

    NASA Astrophysics Data System (ADS)

    Roush, J. J.; Hansen, R. A.

    2003-12-01

    The Geophysical Institute of the University of Alaska Fairbanks, in partnership with Denali National Park and Preserve, has begun an education outreach program that will create learning opportunities in solid earth geophysics for a wide sector of the public. We will capitalize upon a unique coincidence of heightened public interest in earthquakes (due to the M 7.9 Denali Fault event of Nov. 3rd, 2002), the startup of the EarthScope experiment, and the construction of the Denali Science & Learning Center, a premier facility for science education located just 43 miles from the epicenter of the Denali Fault earthquake. Real-time data and current research results from EarthScope installations and science projects in Alaska will be used to engage students and teachers, national park visitors, and the general public in a discovery process that will enhance public understanding of tectonics, seismicity and volcanism along the boundary between the Pacific and North American plates. Activities will take place in five program areas, which are: 1) museum displays and exhibits, 2) outreach via print publications and electronic media, 3) curriculum development to enhance K-12 earth science education, 4) teacher training to develop earth science expertise among K-12 educators, and 5) interaction between scientists and the public. In order to engage the over 1 million annual visitors to Denali, as well as people throughout Alaska, project activities will correspond with the opening of the Denali Science and Learning Center in 2004. An electronic interactive kiosk is being constructed to provide public access to real-time data from seismic and geodetic monitoring networks in Alaska, as well as cutting edge visualizations of solid earth processes. A series of print publications and a website providing access to real-time seismic and geodetic data will be developed for park visitors and the general public, highlighting EarthScope science in Alaska. A suite of curriculum modules

  2. Evidences of Shear Deformations and Faulting on Comet 67P/ Churyumov-Gerasimenko: a Driving Force for the Mechanical Erosion of the Nucleus?

    NASA Astrophysics Data System (ADS)

    Matonti, C.; Auger, A. T.; Groussin, O.; Jorda, L.; Attree, N.; Viseur, S.; El Maarry, M. R.

    2016-12-01

    Fractures and faults are widespread and pervasive in Earth's crustal and sedimentary rocks. They result from deviatoric stresses applied to brittle materials. In various contexts, their geometry often allows one to infer the direction, and sometimes the magnitude, of the stress that led to their formation. The Rosetta spacecraft orbited comet 67P for two years and acquired images of the nucleus surface with an unprecedented spatial resolution, down to 20 cm/px. These data open the way for entirely new geological interpretations of the structures observed at the surface of cometary nuclei. In this work, we focus on the structural interpretation of the meter- to hectometer-scale lineaments observed on the surface in the OSIRIS-NAC images. To improve interpretations, we digitized the lineaments in selected zones. In brittle-material regions (essentially Atum and Khonsu), we observed structures that closely match fault splays, duplex blocks and anastomosing or "en-échelon" patterns. Such structures strongly suggest the occurrence of sheared zones and "strike-slip fault" arrays, observed here for the first time at the surface of a comet nucleus. Despite the large differences in gravity magnitude and nucleus material strength compared to Earth, the observation of such structures seems to confirm a comparable gravity-to-strength ratio between 67P and the Earth (Groussin et al., 2015). Most of these shear structures are sub-parallel and located inside or near the nucleus neck regions (Hapi, Sobek and Wosret), which is consistent with an increased relative shear stress at the boundary of the two lobes (Hirabayashi et al., 2016). These results emphasize mechanisms that may have important implications for the nucleus strength estimation and how the nucleus is eroded. Indeed, considering fault propagation laws along with multiple-angle views of the structures, the observed faults likely propagate inside the nucleus over several tens to hundreds of

  3. Complex Paleotopography and Faulting near the Elsinore Fault, Coyote Mountains, southern California

    NASA Astrophysics Data System (ADS)

    Brenneman, M. J.; Bykerk-Kauffman, A.

    2012-12-01

    The Coyote Mountains of southern California are bounded on the southwest by the Elsinore Fault, an active dextral fault within the San Andreas Fault zone. According to Axen and Fletcher (1998) and Dorsey and others (2011), rocks exposed in these mountains comprise a portion of the hanging wall of the east-vergent Salton Detachment Fault, which was active from the late Miocene-early Pliocene to ca. 1.1-1.3 Ma. Detachment faulting was accompanied by subsidence, resulting in deposition of a thick sequence of marine and nonmarine sedimentary rocks. Regional detachment faulting and subsidence ceased with the inception of the Elsinore Fault, which has induced uplift of the Coyote Mountains. Detailed geologic mapping in the central Coyote Mountains supports the above interpretation and adds some intriguing details. New discoveries include a buttress unconformity at the base of the Miocene/Pliocene section that locally cuts across strata at an angle so high that it could be misinterpreted as a fault. We thus conclude that the syn-extension strata were deposited on a surface with very rugged topography. We also discovered that locally derived nonmarine gravel deposits exposed near the crest of the range, previously interpreted as part of the Miocene Split Mountain Group by Winker and Kidwell (1996), unconformably overlie units of the marine Miocene/Pliocene Imperial Group and must therefore be Pliocene or younger. The presence of such young gravel deposits on the crest of the range provides evidence for its rapid uplift. Additional new discoveries flesh out details of the structural history of the range. We mapped just two normal faults, both of which were relatively minor, thus supporting Axen and Fletcher's assertion that the hanging-wall block of the Salton Detachment Fault had not undergone significant internal deformation during extension. We found abundant complex synthetic and antithetic strike-slip faults throughout the area, some of which offset Quaternary alluvial

  4. Fault Interaction and Stress Accumulation in Chaman Fault System, Balouchistan, Pakistan, Since 1892

    NASA Astrophysics Data System (ADS)

    Riaz, M. S.; Shan, B.; Xiong, X.; Xie, Z.

    2017-12-01

    The curved, left-lateral Chaman fault, approximately 1000 km long, forms the western boundary of the Indian plate. The Chaman fault is an active fault and the locus of many catastrophic earthquakes. Since the inception of strike-slip movement at 20-25 Ma along the western collision boundary between the Indian and Eurasian plates, the average geologically constrained slip rate of 24 to 35 mm/yr accounts for a total displacement of 460±10 km along the Chaman fault system (Beun et al., 1979; Lawrence et al., 1992). According to earthquake triggering theory, the change in Coulomb Failure Stress (DCFS) either delays (stress shadow) or advances (positive stress) the occurrence of subsequent earthquakes. Several major earthquakes have occurred in the Chaman fault system, yet the region is poorly studied with respect to earthquake/fault interaction and hazard assessment. To address this, we analyzed the earthquake catalog and selected significant earthquakes with M ≥ 6.2 since 1892. We then computed the evolution of DCFS in the Chaman fault system by integrating coseismic static and postseismic viscoelastic relaxation stress transfer since 1892, using the code PSGRN/PSCMP (Wang et al., 2006). For the postseismic stress transfer simulation, we adopted a linear Maxwell rheology to calculate the viscoelastic effects. Our results indicate that three out of four earthquakes were triggered by preceding earthquakes. The 1892 earthquake (Mw 6.8), which occurred on the northern segment of the Chaman fault, did not influence the 1935 earthquake, which occurred on the Ghazaband fault, a parallel fault 20 km east of the Chaman fault. The 1935 earthquake (Mw 7.7) significantly loaded both ends of its rupture with positive stress (CFS ≥ 0.01 MPa), which later triggered the 1975 earthquake on the Chaman fault, with 23% of its rupture length at CFS ≥ 0.01 MPa, and the 1990 earthquake, with 58% of its rupture length at CFS ≥ 0

  5. Fluid involvement in normal faulting

    NASA Astrophysics Data System (ADS)

    Sibson, Richard H.

    2000-04-01

    Evidence of fluid interaction with normal faults comes from their varied role as flow barriers or conduits in hydrocarbon basins and as hosting structures for hydrothermal mineralisation, and from fault-rock assemblages in exhumed footwalls of steep active normal faults and metamorphic core complexes. These last suggest involvement of predominantly aqueous fluids over a broad depth range, with implications for fault shear resistance and the mechanics of normal fault reactivation. A general downwards progression in fault rock assemblages (high-level breccia-gouge (often clay-rich) → cataclasites → phyllonites → mylonite → mylonitic gneiss with the onset of greenschist phyllonites occurring near the base of the seismogenic crust) is inferred for normal fault zones developed in quartzo-feldspathic continental crust. Fluid inclusion studies in hydrothermal veining from some footwall assemblages suggest a transition from hydrostatic to suprahydrostatic fluid pressures over the depth range 3-5 km, with some evidence for near-lithostatic to hydrostatic pressure cycling towards the base of the seismogenic zone in the phyllonitic assemblages. Development of fault-fracture meshes through mixed-mode brittle failure in rock-masses with strong competence layering is promoted by low effective stress in the absence of thoroughgoing cohesionless faults that are favourably oriented for reactivation. Meshes may develop around normal faults in the near-surface under hydrostatic fluid pressures to depths determined by rock tensile strength, and at greater depths in overpressured portions of normal fault zones and at stress heterogeneities, especially dilational jogs. Overpressures localised within developing normal fault zones also determine the extent to which they may reutilise existing discontinuities (for example, low-angle thrust faults). Brittle failure mode plots demonstrate that reactivation of existing low-angle faults under vertical σ1 trajectories is only likely if

  6. Mobile laser scanning applied to the earth sciences

    USGS Publications Warehouse

    Brooks, Benjamin A.; Glennie, Craig; Hudnut, Kenneth W.; Ericksen, Todd; Hauser, Darren

    2013-01-01

    Lidar (light detection and ranging), a method by which the precise time of flight of emitted pulses of laser energy is measured and converted to distance for reflective targets, has helped scientists make topographic maps of Earth's surface at scales as fine as centimeters. These maps have allowed the discovery and analysis of myriad otherwise unstudied features, such as fault scarps, river channels, and even ancient ruins [Glennie et al., 2013b].

  7. Fault zone structure from topography: signatures of en echelon fault slip at Mustang Ridge on the San Andreas Fault, Monterey County, California

    USGS Publications Warehouse

    DeLong, Stephen B.; Hilley, George E.; Rymer, Michael J.; Prentice, Carol

    2010-01-01

    We used high-resolution topography to quantify the spatial distribution of scarps, linear valleys, topographic sinks, and oversteepened stream channels formed along an extensional step over on the San Andreas Fault (SAF) at Mustang Ridge, California. This location provides detail of both creeping fault landform development and complex fault zone kinematics. Here, the SAF creeps 10–14 mm/yr slower than at locations ∼20 km along the fault in either direction. This spatial change in creep rate is coincident with a series of en echelon oblique-normal faults that strike obliquely to the SAF and may accommodate the missing deformation. This study presents a suite of analyses that are helpful for proper mapping of faults in locations where high-resolution topographic data are available. Furthermore, our analyses indicate that two large subsidiary faults near the center of the step over zone appear to carry significant distributed deformation based on their large apparent vertical offsets, the presence of associated sag ponds and fluvial knickpoints, and the observation that they are rotating a segment of the main SAF. Several subsidiary faults in the southeastern portion of Mustang Ridge are likely less active; they have few associated sag ponds and have older scarp morphologic ages and subdued channel knickpoints. Several faults in the northwestern part of Mustang Ridge, though relatively small, are likely also actively accommodating active fault slip based on their young morphologic ages and the presence of associated sag ponds.

  8. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

    Power transformers are important equipment in power plants and substations, and a key hub in the transmission and distribution links of a power system. Their performance directly affects the quality, reliability and stability of the power system. This paper first classifies power transformer faults into five types, then divides the development of a transformer fault into three stages along the time dimension, and uses routine dissolved-gas analysis (DGA) and infrared diagnostic criteria to establish the transformer's running state. Finally, according to the needs of power transformer fault diagnosis, a power transformer fault tree is constructed in dendritic form by stepwise refinement from the general to the specific

  9. Global rates of mantle serpentinization and H2 release at oceanic transform faults

    NASA Astrophysics Data System (ADS)

    Ruepke, Lars; Hasenclever, Joerg

    2017-04-01

    The cycling of seawater through the ocean floor is the dominant mechanism of biogeochemical exchange between the solid earth and the global ocean. Crustal fluid flow appears to be typically associated with major seafloor structures, and oceanic transform faults (OTF) are one of the most striking yet poorly understood features of the global mid-ocean ridge system. Fracture zones and transform faults have long been hypothesized to be sites of substantial biogeochemical exchange between the solid Earth and the global ocean. This is particularly interesting with regard to the ocean biome: deep-ocean ecosystems constitute 60% of it, but their role in global ocean biogeochemical cycles remains largely overlooked. There is growing evidence that life is supported by chemosynthesis at hydrothermal vents but also in the crust, and this may therefore be a more abundant process than previously thought. In this context, the serpentine-forming interaction between seawater and cold lithospheric mantle rocks is particularly interesting, as it is also a mechanism of abiotic hydrogen and methane formation. A quantitative global assessment of mantle serpentinization at oceanic transform faults, in the context of the biogeochemical exchange between the seafloor and the global ocean, is still largely missing. Here we present the results of a set of 3-D thermo-mechanical model calculations that investigate mantle serpentinization at OTFs for the entire range of globally observed slip rates and fault lengths. These visco-plastic models predict the OTF thermal structure and the location of crustal-scale brittle deformation, which is a prerequisite for mantle serpentinization to occur. The results of these simulations are integrated with information on the global distribution of OTF lengths and slip rates, yielding global estimates of mantle serpentinization and associated H2 release. We find that OTFs are potentially sites of intense crustal fluid flow and are, in terms of H2 release

  10. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    PubMed

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are among the most critical components of the satellite attitude control system, so correct diagnosis of their faults is essential for efficient operation of these spacecraft. Known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this approach fails when a new or unknown fault occurs, and an unsupervised learning algorithm then becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, though it has its own limitations. In this paper a novel conditioning of the KFCM method (C-KFCM) is proposed so that it can be used effectively for fault diagnosis of both known and unknown faults in satellite reaction wheels. The C-KFCM approach first determines exact class centers from the data of known faults, fixing a discrete number of fault classes at the start. Similarity parameters are then derived for each fault data point, and each point is assigned a class label according to a similarity threshold: high-similarity points fall into one of the 'known-fault' classes, while low-similarity points are labeled as 'unknown faults'. Simulation results show that, compared with a supervised algorithm such as a neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose faults with an accuracy of more than 91%.
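
    The abstract gives no implementation detail beyond the similarity-threshold idea, so the following minimal Python sketch illustrates only that labeling step: kernel similarity to known class centers, with low-similarity points declared unknown. The kernel width, threshold, and centers are placeholder assumptions, not values from the paper.

      import numpy as np

      def gaussian_kernel(x, c, sigma=1.0):
          # RBF kernel used as the similarity measure in kernel fuzzy c-means
          return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

      def label_faults(points, known_centers, threshold=0.6, sigma=1.0):
          # Assign each point to its most similar known-fault class, or -1 (unknown)
          labels = []
          for x in points:
              sims = [gaussian_kernel(x, c, sigma) for c in known_centers]
              best = int(np.argmax(sims))
              labels.append(best if sims[best] >= threshold else -1)
          return labels

      # Toy usage: two known fault classes, one far-away (unknown) observation
      centers = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
      data = [np.array([0.1, -0.2]), np.array([2.9, 3.1]), np.array([10.0, 10.0])]
      print(label_faults(data, centers))  # -> [0, 1, -1]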

  11. Three-dimensional cellular automata as a model of a seismic fault

    NASA Astrophysics Data System (ADS)

    Gálvez, G.; Muñoz, A.

    2017-01-01

    The Earth's crust is broken into a series of plates whose borders, the seismic fault lines, are where most earthquakes occur. This plate system can in principle be described by a set of nonlinear coupled equations for the motion of the plates and their stresses, strains, and other characteristics. Such a system of equations is very difficult to solve, and its nonlinear parts lead to chaotic, unpredictable behavior. In 1989, Bak and Tang presented an earthquake model based on the sandpile cellular automaton. The model, though simple, yields results similar to those observed in actual earthquakes. In this work a three-dimensional cellular automaton is proposed as a better model of a seismic fault. The three-dimensional model reproduces properties similar to those observed in real seismicity, in particular the Gutenberg-Richter law.
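
    Since the abstract rests on the Bak-Tang sandpile analogy, a compact sketch of a three-dimensional sandpile automaton is given below for concreteness; the grid size, critical height, and step count are arbitrary illustrative choices, not parameters from the study.

      import numpy as np

      def sandpile_3d(n=12, steps=20000, zc=6, seed=0):
          # 3-D Bak-Tang-style sandpile: drop grains at random sites; a site
          # topples when its height reaches zc, sending one grain to each of
          # its 6 neighbors (grains crossing the boundary are lost).
          rng = np.random.default_rng(seed)
          z = np.zeros((n, n, n), dtype=int)
          sizes = []  # avalanche sizes play the role of earthquake magnitudes
          for _ in range(steps):
              i, j, k = rng.integers(0, n, size=3)
              z[i, j, k] += 1
              size, unstable = 0, [(i, j, k)]
              while unstable:
                  x, y, w = unstable.pop()
                  if z[x, y, w] < zc:
                      continue
                  z[x, y, w] -= zc
                  size += 1
                  for dx, dy, dw in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                     (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                      a, b, c = x + dx, y + dy, w + dw
                      if 0 <= a < n and 0 <= b < n and 0 <= c < n:
                          z[a, b, c] += 1
                          if z[a, b, c] >= zc:
                              unstable.append((a, b, c))
              sizes.append(size)
          return sizes  # a histogram of sizes approximates a Gutenberg-Richter-like power law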

  12. The SCEC 3D Community Fault Model (CFM-v5): An updated and expanded fault set of oblique crustal deformation and complex fault interaction for southern California

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Plesch, A.; Sorlien, C. C.; Shaw, J. H.; Hauksson, E.

    2014-12-01

    Southern California represents an ideal natural laboratory to investigate oblique deformation in 3D owing to its comprehensive datasets, complex tectonic history, evolving components of oblique slip, and continued crustal rotations about horizontal and vertical axes. As the SCEC Community Fault Model (CFM) aims to accurately reflect this 3D deformation, we present the results of an extensive update to the model by using primarily detailed fault trace, seismic reflection, relocated hypocenter and focal mechanism nodal plane data to generate improved, more realistic digital 3D fault surfaces. The results document a wide variety of oblique strain accommodation, including various aspects of strain partitioning and fault-related folding, sets of both high-angle and low-angle faults that mutually interact, significant non-planar, multi-stranded faults with variable dip along strike and with depth, and active mid-crustal detachments. In places, closely-spaced fault strands or fault systems can remain surprisingly subparallel to seismogenic depths, while in other areas, major strike-slip to oblique-slip faults can merge, such as the S-dipping Arroyo Parida-Mission Ridge and Santa Ynez faults with the N-dipping North Channel-Pitas Point-Red Mountain fault system, or diverge with depth. Examples of the latter include the steep-to-west-dipping Laguna Salada-Indiviso faults with the steep-to-east-dipping Sierra Cucapah faults, and the steep southern San Andreas fault with the adjacent NE-dipping Mecca Hills-Hidden Springs fault system. In addition, overprinting by steep predominantly strike-slip faulting can segment which parts of intersecting inherited low-angle faults are reactivated, or result in mutual cross-cutting relationships. The updated CFM 3D fault surfaces thus help characterize a more complex pattern of fault interactions at depth between various fault sets and linked fault systems, and a more complex fault geometry than typically inferred or expected from

  13. Nonlinear softening of unconsolidated granular earth materials

    NASA Astrophysics Data System (ADS)

    Lieou, Charles K. C.; Daub, Eric G.; Guyer, Robert A.; Johnson, Paul A.

    2017-09-01

    Unconsolidated granular earth materials exhibit softening due to external perturbations such as seismic waves: the wave speed and elastic modulus decrease upon increasing the strain amplitude above dynamic strains of about 10⁻⁶ under near-surface conditions. In this letter, we describe a theoretical model for such behavior. The model is based on the idea that shear transformation zones—clusters of grains that are loose and susceptible to contact changes, particle displacement, and rearrangement—are responsible for plastic deformation and softening of the material. We apply the theory to experiments on simulated fault gouge composed of glass beads and demonstrate that the theory predicts nonlinear resonance shifts, reduction of the P-wave modulus, and attenuation, in agreement with experiments. The theory thus offers insight into the nature of the nonlinear elastic properties of a granular medium and potentially into phenomena such as triggering on earthquake faults.

  14. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    NASA Astrophysics Data System (ADS)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow smaller faults to be lost in a resolution gap, which could mean potential pathways are left unseen. Here the focus is upon PFS within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and it approaches the seafloor within the Outer Thames Estuary. This allows direct analysis of PFS surface expressions through high-resolution 1m bathymetric imaging in combination with high-resolution seismic imaging. Using these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12m and the largest 612m in length. The displacements over these faults, established from both bathymetric and seismic imaging, range from 30cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it lies approximately 1km below the seafloor. These have typical PFS attributes, with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns in both locations indicate that the smaller faults exist within a typical PFS structure but are

  15. Role of reservoir simulation in development and management of complexly-faulted, multiple-reservoir Dulang field, offshore Malaysia: Holistic strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonrexa, K.; Aziz, A.; Solomon, G.J.

    1995-10-01

    The Dulang field, discovered in 1981, is a major oil field located offshore Malaysia in the Malay Basin. The Dulang Unit Area constitutes the central part of this exceedingly heterogeneous field. The Unit Area consists of 19 stacked shaly sandstone reservoirs which are divided into about 90 compartments with multiple fluid contacts owing to severe faulting. Current estimates put the Original-Oil-In-Place (OOIP) in the neighborhood of 700 million stock tank barrels (MMSTB). Production commenced in March 1991 and current production is more than 50,000 barrels of oil per day (BOPD). In addition to other more conventional means, reservoir simulation has been employed from the very start as a vital component of the overall strategy to develop and manage this challenging field. More than 10 modeling studies have been completed by Petronas Carigali Sdn. Bhd. (Carigali) at various times during the short life of this field thus far. In addition, Esso Production Malaysia Inc. (EPMI) has simultaneously conducted a number of independent studies. These studies have dealt with undersaturated compartments as well as those with small and large gas caps. They have paved the way for improved reservoir characterization, optimum development planning, and prudent production practices. This paper discusses the modeling approaches and highlights the crucial role these studies have played on an ongoing basis in the development and management of the complexly-faulted, multi-reservoir Dulang Unit Area.

  16. Influence of fault steps on rupture termination of strike-slip earthquake faults

    NASA Astrophysics Data System (ADS)

    Li, Zhengfang; Zhou, Bengang

    2018-03-01

    A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations between the type and length of steps and seismic rupture, and a poor correlation between the number of steps and seismic rupture. For different magnitude intervals, the smallest width of fault step (Lt) that can terminate rupture propagation varies: Lt = 3 km for Ms 6.5–6.9, Lt = 4 km for Ms 7.0–7.5, Lt = 6 km for Ms 7.5–8.0, and Lt = 8 km for Ms 8.0–8.5. A dilational fault step is easier to rupture through than a compressional fault step. The smallest fault-step width for rupture arrest can be used as an indicator to judge the scale of rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as for estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.
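
    Read as a rule of thumb, these thresholds amount to a simple lookup; the sketch below encodes them directly, treating the interval edges as half-open, which is our assumption since the abstract leaves the boundaries ambiguous.

      def smallest_terminating_step_width_km(ms):
          # Map surface-wave magnitude Ms to the smallest fault-step width Lt (km)
          # reported to arrest rupture, per the intervals quoted in the abstract.
          if 6.5 <= ms < 7.0:
              return 3.0
          if 7.0 <= ms < 7.5:
              return 4.0
          if 7.5 <= ms < 8.0:
              return 6.0
          if 8.0 <= ms <= 8.5:
              return 8.0
          raise ValueError("Ms outside the studied range 6.5-8.5")

      # e.g. an Ms 7.8 rupture would be expected to stop at a step wider than ~6 km
      print(smallest_terminating_step_width_km(7.8))  # 6.0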

  17. Borehole Strainmeters and the monitoring of the North Anatolian Fault in the Marmara Sea.

    NASA Astrophysics Data System (ADS)

    Johnson, W.; Mencin, D.; Bilham, R. G.; Gottlieb, M. H.; Van Boskirk, E.; Hodgkinson, K. M.; Mattioli, G. S.; Acarel, D.; Bulut, F.; Bohnhoff, M.; Ergintav, S.; Bal, O.; Ozener, H.

    2016-12-01

    Twice in the past 1000 years a sequence of large earthquakes has propagated from east to west along the North Anatolian fault (NAF) in Turkey towards Istanbul, with the final earthquake in the sequence destroying the city. This occurred most recently in 1509. The population of greater Istanbul is 20 million, and the next large earthquake of the current sequence is considered imminent. The most likely location for a major earthquake on the NAF is considered to be the Marmara-Sea/Princes-Island segment south and southeast of Istanbul [Bohnhoff et al., 2013]. Insights into the nucleation and future behavior of this segment of the NAF are anticipated from measuring deformation near the fault, in particular possible aseismic slip processes on the fault that may precede as well as accompany any future rupture. Aseismic slip near the western end of the Izmit rupture, where it passes offshore beneath the Sea of Marmara, has been successfully monitored using InSAR, GPS, and creepmeters. A 1mm amplitude, 24h creep event was recorded by our creepmeter near Izmit in 2015. These instruments and methods are of limited utility in monitoring the submarine portion of the NAF. Data from numerous borehole strainmeters (BSM) along the San Andreas Fault, including those that were installed and maintained as part of the EarthScope Plate Boundary Observatory (PBO), demonstrate that the characteristics of creep propagation events with sub-cm slip amplitudes can be quantified at 10 km source-to-sensor distances. Such distances are comparable to those between the mainland and the submarine NAF, with some islands allowing installations within 3 km of the fault. In a collaborative program (GeoGONAF) between the National Science Foundation, GeoForschungsZentrum, the Turkish Disaster and Emergency Management Authority, and the Kandilli Observatory, we installed an array of six PBO-type BSM systems, which include strainmeters and seismometers, around the eastern

  18. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm into an overall estimate of the identified fault type and magnitude. Identifying the fault type and magnitude enabled an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
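
    The task's detectors (multiple-hypothesis tests and neural networks) operate on Extended Kalman Filter residuals. As a generic stand-in for that residual-monitoring idea, not the task's actual algorithms, a chi-square test on a window of scalar residuals might look like this; the variance, window length, and confidence level are illustrative assumptions.

      import numpy as np
      from scipy.stats import chi2

      def residual_fault_test(residuals, variance, alpha=0.99):
          # Normalized innovation energy of a residual window; under a healthy
          # model it is ~ chi-square with len(residuals) degrees of freedom.
          r = np.asarray(residuals, dtype=float)
          stat = float(np.sum(r ** 2) / variance)
          return stat > chi2.ppf(alpha, df=len(r)), stat

      # A sensor bias shifts the residuals away from zero-mean noise
      rng = np.random.default_rng(1)
      healthy = rng.normal(0.0, 1.0, 20)
      faulty = healthy + 1.5  # bias injected by a hypothetical sensor fault
      print(residual_fault_test(healthy, 1.0)[0])  # expected: False
      print(residual_fault_test(faulty, 1.0)[0])   # expected: True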

  19. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  20. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for diagnosing multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently proposed concept for fault detection in complex structures, is applied; it consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive thresholds are adopted for fault detection because prior knowledge of normal operational conditions and fault conditions is limited. To isolate multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. Extensive experimental results validate that the proposed method can localize both single and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  1. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover, such as cities, deserts, or vegetation, or to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture, and many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by the accumulation of coseismic rupture. For better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution using unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between faults reconstructed by the deterministic assignment in K-means and the probabilistic assignment in the EM algorithm. The method is conceptually identical to the methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, clustering density, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: while Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and constrain the fault
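
    As a minimal illustration of the clustering step described here (not the authors' full pipeline, which also filters isolated events and compares against an EM variant), one could cluster hypocenters with K-means and fit a plane to each cluster by SVD; all names and parameters below are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans

      def cluster_hypocenters(xyz, n_faults=2, seed=0):
          # Partition 3-D hypocenters into candidate fault clusters, then fit
          # a plane to each cluster by SVD; the last right-singular vector is
          # the plane normal.
          labels = KMeans(n_clusters=n_faults, n_init=10,
                          random_state=seed).fit_predict(xyz)
          planes = []
          for k in range(n_faults):
              pts = xyz[labels == k]
              centroid = pts.mean(axis=0)
              _, _, vt = np.linalg.svd(pts - centroid)
              planes.append((centroid, vt[-1]))  # (point on plane, unit normal)
          return labels, planes

      # Synthetic usage: one horizontal and one vertical noisy "fault plane"
      rng = np.random.default_rng(0)
      a = np.c_[rng.uniform(0, 10, 200), rng.uniform(0, 10, 200), rng.normal(5, 0.1, 200)]
      b = np.c_[rng.normal(5, 0.1, 200), rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)]
      labels, planes = cluster_hypocenters(np.vstack([a, b]), n_faults=2)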

  2. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way of performing both upstream root cause analysis (RCA) and downstream impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
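
    A causal-directed-graph fault model of the kind described supports both analyses with simple graph traversals: walk the edges backwards for root causes, forwards for impacts. The sketch below is a generic illustration, and the event names are hypothetical rather than drawn from the RETS implementation.

      def upstream(graph, event):
          # Root-cause analysis: all events that can causally lead to `event`
          # in a cause->effect digraph (dict: node -> list of downstream effects).
          rev = {}
          for cause, effects in graph.items():
              for e in effects:
                  rev.setdefault(e, []).append(cause)
          seen, stack = set(), [event]
          while stack:
              for p in rev.get(stack.pop(), []):
                  if p not in seen:
                      seen.add(p)
                      stack.append(p)
          return seen

      def downstream(graph, event):
          # Impact analysis: all effects reachable from `event`.
          seen, stack = set(), [event]
          while stack:
              for e in graph.get(stack.pop(), []):
                  if e not in seen:
                      seen.add(e)
                      stack.append(e)
          return seen

      # Toy model of a test-stand feed system (event names are illustrative only)
      g = {"valve_stuck": ["low_flow"],
           "low_flow": ["pump_cavitation", "low_thrust"],
           "sensor_bias": ["low_flow_reading"]}
      print(upstream(g, "low_thrust"))     # {'low_flow', 'valve_stuck'}
      print(downstream(g, "valve_stuck"))  # {'low_flow', 'pump_cavitation', 'low_thrust'}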

  3. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    NASA Astrophysics Data System (ADS)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system that includes many young faults (<2 Ma). The initiation of these young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  4. What major faults look like, and why this matters for lithospheric dynamics

    NASA Astrophysics Data System (ADS)

    Fagereng, Ake

    2016-04-01

    first event. The subduction thrust interface provides an example of fault evolution in underthrust sediments as they deform and dewater. At shallow levels, distributed shear leads to the development of scaly cleavage, which in places provides weak clay surfaces on which earthquakes can propagate to the sea floor. With further deformation, a melange is progressively developed, with increasingly dismembered, sheared lenses of higher-viscosity sedimentary rock and slivers of oceanic crust in a low-viscosity, cleaved matrix. The range of examples presented here illustrates how long-term deformation results in weak structures that likely control future deformation. Yet the rheology of these structures is modulated by strength fluctuations during the earthquake cycle, as illustrated by common evidence of episodic fault healing. The take-home message from these field studies of fault zones is therefore the heterogeneity of the Earth's crust, the importance of long-term weak zones as a first-order control on crustal deformation, and short-term strength fluctuations within these zones as a consequence of, and reason for, the earthquake cycle.

  5. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis), and injects transient errors in run-time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.

  6. The engine fuel system fault analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Song, Hanqiang; Yang, Changsheng; Zhao, Wei

    2017-05-01

    To improve the reliability of the engine fuel system, the typical fault factors of the engine fuel system were analyzed from the points of view of structure and function. The fault characteristics were obtained by building the fuel system fault tree. By applying the failure mode and effects analysis (FMEA) method, several factors for the key component, the fuel regulator, were obtained, including the fault modes, the fault causes, and the fault influences. All of this lays the foundation for the subsequent development of a fault diagnosis system.

  7. Where's the Hayward Fault? A Green Guide to the Fault

    USGS Publications Warehouse

    Stoffer, Philip W.

    2008-01-01

    This report describes self-guided field trips to one of North America's most dangerous earthquake faults, the Hayward Fault. Locations were chosen because of their easy access using mass transit and/or their significance relating to the natural and cultural history of the East Bay landscape. This field-trip guidebook was compiled to help commemorate the 140th anniversary of an estimated M 7.0 earthquake that occurred on the Hayward Fault at approximately 7:50 AM on October 21st, 1868. Although many reports and on-line resources have been compiled about the science and engineering associated with earthquakes on the Hayward Fault, this report has been prepared to serve as an outdoor guide to the fault for the interested public and for educators. The first chapter is a general overview of the geologic setting of the fault. This is followed by ten chapters of field trips to selected areas along the fault, or in its vicinity, where landscape, geologic, and man-made features relevant to understanding the nature of the fault and its earthquake history can be found. A glossary is provided to define and illustrate scientific terms used throughout this guide. A 'green' theme helps conserve resources and promotes the use of public transportation where possible. Although access to all locations described in this guide is possible by car, alternative suggestions are provided. To help conserve paper, this guidebook is available on-line only; however, select pages or chapters (field trips) within this guide can be printed separately to take along on an excursion. The discussions in this paper highlight transportation alternatives for visiting selected field-trip locations. In some cases, combinations, such as a ride on BART and a bus, can be used instead of automobile transportation. For other locales, bicycles can be an alternative means of transportation. Transportation descriptions on selected pages are intended to help guide field-trip planners or participants choose trip

  8. "HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    NASA Astrophysics Data System (ADS)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) to the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with the largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible, in a conceptually comparable way, for the development, evolution, and maintenance of younger fault structures and of the primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy, whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and the associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation and more orthodox viewpoints such as the criticality concept. The discussion is flanked by numerical simulations of a

  9. Artificial neural network application for space station power system fault diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Oliver, Walter E.; Dias, Lakshman G.

    1995-01-01

    This study presents a methodology for fault diagnosis using a Two-Stage Artificial Neural Network Clustering Algorithm. Previously, SPICE models of a 5-bus DC power distribution system, with assumed constant output power from the DDCU during contingencies, were used to evaluate the ANN's fault diagnosis capabilities. This on-going study uses EMTP models of the components (distribution lines, SPDU, TPDU, loads) and power sources (DDCU) of Space Station Alpha's electrical Power Distribution System as the basis for the ANN fault diagnostic tool. The results from the two studies are contrasted. In the event of a major fault, ground controllers need the ability to identify the type of fault, isolate the fault to the orbital replaceable unit level, and provide the necessary information for the power management expert system to optimally determine a degraded-mode load schedule. To accomplish these goals, the electrical power distribution system's architecture can be subdivided into three major classes: DC-DC converter to loads, DC Switching Unit (DCSU) to Main bus Switching Unit (MBSU), and Power Sources to DCSU. Each class, which has its own electrical characteristics and operations, requires a unique fault analysis philosophy. This study identifies these philosophies as Riddles 1, 2, and 3, respectively. The results of the on-going study address Riddle 1. It is concluded in this study that the combination of the EMTP models of the DDCU, distribution cables, and electrical loads yields a more accurate model of system behavior and, in addition, more accurate fault diagnosis using the ANN than the results obtained with the SPICE models.

  10. Fault Analysis in Solar Photovoltaic Arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency, and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components, but these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. This research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on the fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature: a fault that occurs under low-irradiance conditions, and a fault that evolves in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear a fault under "low irradiance" or "night-to-day transition", although they may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might remain hidden in the PV array and become a potential hazard for system efficiency and reliability.

  11. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicholson), with the non-linearity (e.g. Newton-Raphson), and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems. We show some results of large-scale, parallel simulations on an SGI Altix
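
    To make the Data/linearPDE concepts concrete, here is a small Poisson-type problem written in the escript style, closely following the canonical example in the escript documentation; exact module paths and argument names may differ between escript versions, so treat this as a sketch rather than a guaranteed-working script.

      # Solve -div(A*grad(u)) = Y on a unit square, u = 0 on the x0 = 0 edge
      from esys.escript import kronecker, whereZero
      from esys.escript.linearPDEs import LinearPDE
      from esys.finley import Rectangle

      domain = Rectangle(n0=40, n1=40, l0=1.0, l1=1.0)  # 2-D finite element mesh
      x = domain.getX()                                 # a Data object of node coordinates
      pde = LinearPDE(domain)                           # a linearPDE object
      pde.setValue(A=kronecker(domain),                 # unit "conductivity" tensor
                   Y=1.0,                               # constant source term
                   q=whereZero(x[0]),                   # Dirichlet mask on the x0 = 0 edge
                   r=0.0)                               # prescribed boundary value there
      u = pde.getSolution()                             # solved by the finley FEM back end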

  12. Fault creep rates of the Chaman fault (Afghanistan and Pakistan) inferred from InSAR

    NASA Astrophysics Data System (ADS)

    Barnhart, William D.

    2017-01-01

    The Chaman fault is the major strike-slip structural boundary between the India and Eurasia plates. Despite sinistral slip rates similar to those of the North America-Pacific plate boundary, no major (>M7) earthquakes have been documented along the Chaman fault, indicating that the fault either creeps aseismically or is at a late stage in its seismic cycle. Recent work with remotely sensed interferometric synthetic aperture radar (InSAR) time series documented a heterogeneous distribution of fault creep and interseismic coupling along the entire length of the Chaman fault, including a 125 km long creeping segment and a 95 km long locked segment within the region documented in this study. Here I present additional InSAR time series results from the Envisat and ALOS radar missions spanning the southern and central Chaman fault in an effort to constrain the locking depth, dip, and slip direction of the Chaman fault. I find that the fault deviates little from a vertical geometry and accommodates little to no fault-normal displacement. Peak documented creep rates on the fault are 9-12 mm/yr, accounting for 25-33% of the total motion between India and Eurasia, and locking depths in creeping segments are commonly shallower than 500 m. The magnitude of the 1892 Chaman earthquake is well predicted by the total area of the 95 km long coupled segment. To a first order, the heterogeneous distribution of aseismic creep combined with consistently shallow locking depths suggests that the southern and central Chaman fault may only produce small to moderate earthquakes (

  13. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    USGS Publications Warehouse

    Horton, J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  14. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    USGS Publications Warehouse

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  15. A diagnosis system using object-oriented fault tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, F. A.

    1990-01-01

    Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
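
    The forward-propagation part of such a diagnosis can be pictured with a tiny three-valued fault-tree evaluator, where events known to have occurred are True, events known not to have occurred are False, and everything else is undetermined. This Python sketch (the original system was Common Lisp) uses hypothetical event names and omits the temporal reasoning and backward chaining.

      class Gate:
          # Minimal fault-tree node: value() returns True, False, or
          # None (undetermined) given the known truth of basic events.
          def __init__(self, kind, children):
              self.kind, self.children = kind, children  # kind: 'AND' or 'OR'

          def value(self, facts):
              vals = [c.value(facts) if isinstance(c, Gate) else facts.get(c)
                      for c in self.children]
              if self.kind == 'OR':
                  if True in vals:
                      return True
                  if all(v is False for v in vals):
                      return False
              else:  # 'AND'
                  if False in vals:
                      return False
                  if all(v is True for v in vals):
                      return True
              return None  # undetermined given current evidence

      # Top event: power_loss = bus_short OR (converter_fail AND backup_fail)
      tree = Gate('OR', ['bus_short', Gate('AND', ['converter_fail', 'backup_fail'])])
      print(tree.value({'bus_short': False, 'converter_fail': True}))  # None
      print(tree.value({'bus_short': False, 'converter_fail': True,
                        'backup_fail': True}))                         # True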

  16. Discover Earth

    NASA Technical Reports Server (NTRS)

    Steele, Colleen

    1998-01-01

    Discover Earth is a NASA-sponsored project for teachers of grades 5-12, designed to: (1) enhance understanding of the Earth as an integrated system; (2) enhance the interdisciplinary approach to science instruction; and (3) provide classroom materials that focus on those goals. Discover Earth is conducted by the Institute for Global Environmental Strategies in collaboration with Dr. Eric Barron, Director, Earth System Science Center, The Pennsylvania State University, and Dr. Robert Hudson, Chair, Department of Meteorology, University of Maryland at College Park. The enclosed materials: (1) represent only part of the Discover Earth materials; (2) were developed by classroom teachers who are participating in the Discover Earth project; (3) utilize an investigative approach and on-line data; and (4) can be effectively adjusted to classrooms with or without technology access. The Discover Earth classroom materials focus on the Earth system and key issues of global climate change, including topics such as the greenhouse effect, clouds and Earth's radiation balance, surface hydrology and land cover, and volcanoes and climate change. All the materials developed to date are available online at http://www.strategies.org. You are encouraged to submit comments and recommendations about these materials to the Discover Earth project manager; contact information is listed below. You are welcome to duplicate all these materials.

  17. Faulting processes in active faults - Evidences from TCDP and SAFOD drill core samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janssen, C.; Wirth, R.; Wenk, H. -R.

    The microstructures, mineralogy and chemistry of representative samples collected from the cores of the San Andreas Fault drill hole (SAFOD) and the Taiwan Chelungpu-Fault Drilling project (TCDP) have been studied using optical microscopy, TEM, SEM, XRD and XRF analyses. SAFOD samples provide a transect across undeformed host rock, the fault damage zone and currently active deforming zones of the San Andreas Fault. TCDP samples are retrieved from the principal slip zone (PSZ) and from the surrounding damage zone of the Chelungpu Fault. Substantial differences exist in the clay mineralogy of SAFOD and TCDP fault gouge samples. Amorphous material has been observed in SAFOD as well as TCDP samples. In line with previous publications, we propose that melt, observed in TCDP black gouge samples, was produced by seismic slip (melt origin) whereas amorphous material in SAFOD samples was formed by comminution of grains (crush origin) rather than by melting. Dauphiné twins in quartz grains of SAFOD and TCDP samples may indicate high seismic stress. The differences in the crystallographic preferred orientation of calcite between SAFOD and TCDP samples are significant. Microstructures resulting from dissolution–precipitation processes were observed in both faults but are more frequently found in SAFOD samples than in TCDP fault rocks. As already described for many other fault zones, clay-gouge fabrics are quite weak in SAFOD and TCDP samples. Clay-clast aggregates (CCAs), proposed to indicate frictional heating and thermal pressurization, occur in material taken from the PSZ of the Chelungpu Fault, as well as within and outside of the SAFOD deforming zones, indicating that these microstructures were formed over a wide range of slip rates.

  18. Misbheaving Faults: The Expanding Role of Geodetic Imaging in Unraveling Unexpected Fault Slip Behavior

    NASA Astrophysics Data System (ADS)

    Barnhart, W. D.; Briggs, R.

    2015-12-01

    Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures, where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike-slip earthquake in Pakistan illustrates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Analysis of high-resolution optical imagery and DEMs shows that a single fault may slip as both a strike-slip and a dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly spatially variable, a complicating factor for comparisons

  19. Risk management of PPP project in the preparation stage based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Xing, Yuanzhi; Guan, Qiuling

    2017-03-01

    The risk management of PPP (Public Private Partnership) projects can improve the level of risk control between government departments and private investors, leading to more beneficial decisions, reduced investment losses, and mutual benefit. This paper therefore takes the risks of the PPP project preparation stage as its research object, identifying and confirming four types of risk. Fault tree analysis (FTA) is used to evaluate the risk factors belonging to different parts and to quantify the degree of risk impact on the basis of risk identification. In addition, the importance order of the risk factors for the PPP project preparation stage is determined by calculating the unit structure importance. The results show that the accuracy of government decision-making, the rationality of private investors' fund allocation, and the instability of market returns are the main factors generating shared risk in the project.
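
    Unit (Birnbaum-type) structure importance can be computed by exhaustive enumeration for small trees: an event's importance is the fraction of states of the other events in which flipping that event flips the top event. The sketch below uses a hypothetical three-event top structure, not the paper's actual tree.

      from itertools import product

      def structural_importance(structure, n):
          # For each basic event i, count the state vectors of the other events
          # in which event i is critical (flipping it flips the top event).
          imp = [0] * n
          for state in product([0, 1], repeat=n):
              for i in range(n):
                  hi = structure(tuple(1 if j == i else s for j, s in enumerate(state)))
                  lo = structure(tuple(0 if j == i else s for j, s in enumerate(state)))
                  imp[i] += (hi != lo)
          # each critical vector is visited twice, so dividing by 2^n
          # yields the fraction of the 2^(n-1) states of the other events
          return [c / 2 ** n for c in imp]

      # Hypothetical top event: risk = decision_error OR (funding_risk AND market_risk)
      top = lambda x: x[0] or (x[1] and x[2])
      print(structural_importance(top, 3))  # [0.75, 0.25, 0.25]: decision error dominates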

  20. Analysing fault growth at the continental break up zone in Afar, Ethiopia

    NASA Astrophysics Data System (ADS)

    Hofmann, Barbara; Wright, Tim; Rowland, Julie; Hautot, Sophie; Paton, Douglas; Kidane, Tesfaye; Abebe, Bekele

    2010-05-01

    Continental break-up, the formation of new oceans, still holds many unanswered questions. The continental rift of Afar, Ethiopia is the only place on Earth today where the final stages of continental rupture and the beginning of seafloor spreading are occurring above sea level. In September 2005 a new rifting episode started at the Dabbahu segment with the intrusion of about 2–2.5 km³ of magma into a 60-km-long dyke (Wright et al., 2006; Grandin et al., 2009), causing horizontal opening of up to 8m. Faults within the research area show fresh slip of up to 3m along fault segments of about 10km (Rowland et al., 2007). Since then, 13 further dyke intrusions showing surface deformation have been detected and analysed using InSAR data. However, how faults grow remains a key question. To establish fault growth models, the distribution of displacement along surface traces, as well as scaling relationships for faults of different orders of magnitude within a similar lithological setting, are essential (e.g. Walsh and Watterson, 1988; Cowie and Scholz, 1992). Set in Pliocene flood basalts, the highly faulted Dabbahu segment forms an ideal study case. We used 6 pairs of SPOT5 images with a pixel size of 2.5m to create a relative DEM of 6m resolution covering the whole of the 60km x 30km Dabbahu segment. By tying the relative DEM to the georeferenced 90m resolution DEM from SRTM data and applying linear and bi-quadratic polynomial transformations we were able to georeference the DEM. During October 2009 a LiDAR survey took place over the central rift segment with additional cross profiles. The additional data has enhanced the DEM spatial resolution to 1m in the centre. Using this large, precise dataset we have developed an automated method to systematically derive the distribution of displacement along the surface expression of the faults. This enables us to determine whether scaling relationships derived in other areas are valid for magmatically-driven faults. Here we present

  1. Fault tolerant multi-sensor fusion based on the information gain

    NASA Astrophysics Data System (ADS)

    Hage, Joelle Al; El Najjar, Maan E.; Pomorski, Denis

    2017-01-01

    In the last decade, multi-robot systems have been used in several applications, for example the army, intervention in areas presenting danger to human life, the management of natural disasters, environmental monitoring, exploration, and agriculture. The integrity of the localization of the robots must be ensured so that they can achieve their mission in the best conditions. Robots are equipped with proprioceptive sensors (encoders, gyroscope) and exteroceptive sensors (Kinect). However, these sensors can be affected by various fault types that manifest as erroneous measurements, biases, outliers, drifts, etc. In the absence of a sensor fault diagnosis step, the integrity and continuity of the localization are affected. In this work, we present a multi-sensor fusion approach with Fault Detection and Exclusion (FDE) based on information theory. In this context, we are interested in the information gain given by an observation, which may be relevant when dealing with fault tolerance. Moreover, threshold optimization based on the quantity of information given by a decision on the true hypothesis is highlighted.
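
    The information-gain test at the heart of such an FDE step can be sketched as an entropy-reduction check: an observation that barely sharpens the state estimate is a candidate for exclusion. The probabilities and threshold below are illustrative placeholders (the paper optimizes the threshold rather than fixing it).

      import numpy as np

      def entropy(p):
          # Shannon entropy (bits) of a discrete belief over states
          p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
          return float(-np.sum(p * np.log2(p)))

      def information_gain(prior, posterior):
          # Gain of an observation = reduction in entropy of the estimate
          return entropy(prior) - entropy(posterior)

      # Toy usage: sensor B barely sharpens the belief -> candidate for exclusion
      prior = [0.25, 0.25, 0.25, 0.25]
      post_A = [0.70, 0.10, 0.10, 0.10]
      post_B = [0.26, 0.25, 0.25, 0.24]
      threshold = 0.05  # placeholder; would be optimized in practice
      print(information_gain(prior, post_A) > threshold)  # True: keep sensor A
      print(information_gain(prior, post_B) > threshold)  # False: exclude sensor B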

  2. Fault strength in the Marmara region inferred from the geometry of the principal stress axes and fault orientations: A case study for the Prince's Islands fault segment

    NASA Astrophysics Data System (ADS)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data is that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate-sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment transfers the motion between the two E-W trending branches of the North Anatolian fault zone: one extending from the Gulf of Izmit towards the Çınarcık basin, and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter, where a fault-normal or fault-parallel maximum compressive stress might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In this presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
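
    The "basic relation" invoked here is the standard Mohr construction: for a plane whose normal makes angle theta with the sigma-1 axis, sigma_n = (sigma1 + sigma3)/2 + (sigma1 - sigma3)/2 * cos(2*theta) and tau = (sigma1 - sigma3)/2 * sin(2*theta), so the shear stress vanishes as the fault strike approaches the sigma-1 trend. A short numerical check (the stress magnitudes are arbitrary illustrative values):

      import numpy as np

      def stress_on_plane(sigma1, sigma3, theta_deg):
          # 2-D Mohr relations for a plane whose normal makes angle theta
          # with the sigma-1 axis; returns (normal stress, shear stress, ratio).
          t = np.radians(theta_deg)
          sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * np.cos(2 * t)
          tau = 0.5 * (sigma1 - sigma3) * np.sin(2 * t)
          return sn, tau, tau / sn

      # As theta -> 90 deg (fault strike parallel to sigma-1), shear vanishes:
      for theta in (30, 60, 85):
          sn, tau, ratio = stress_on_plane(100.0, 40.0, theta)
          print(f"theta={theta:2d}: sigma_n={sn:6.1f}, tau={tau:5.1f}, tau/sigma_n={ratio:.2f}")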

  3. Comparison of the quench and fault current limiting characteristics of the flux-coupling type SFCL with single and three-phase transformer

    NASA Astrophysics Data System (ADS)

    Jung, Byung Ik; Cho, Yong Sun; Park, Hyoung Min; Chung, Dong Chul; Choi, Hyo Sang

    2013-01-01

    The South Korean power grid has a network structure for flexible operation of the system. Continuously increasing power demand has necessitated an increase in power facilities, which decreased the impedance of the power system and thereby increased the fault current in the event of a system fault. As this increased fault current threatens the breaking capacity of the circuit breaker, the main protective device, a solution to this problem is needed, and the superconducting fault current limiter (SFCL) has been designed to address it. The SFCL supports stable operation of the circuit breaker through its excellent fault-current-limiting operation [1-5]. In this paper, the quench and fault-current-limiting characteristics of the flux-coupling-type SFCL with one three-phase transformer are compared with those of the same SFCL type with three single-phase transformers. In the case of the three-phase transformer, the superconducting elements of both the faulted and the sound phases were quenched, whereas in the case of the single-phase transformers, only that of the faulted phase was quenched. For the fault-current-limiting rate, both cases showed similar rates for a single line-to-ground fault, but for a three-wire earth fault the limiting rate with single-phase transformers was over 90%, whereas that with the three-phase transformer was about 60%. It appears that when the three-phase transformer was used, the limiting rate decreased because the fluxes driven by the fault currents of each phase were linked in one core. When the power loads of the superconducting elements were compared by fault type, the initial (half-cycle) load was greater when single-phase transformers were applied, whereas for the three-phase transformer the load was slightly lower at the initial stage but became greater after the first half cycle of the fault.

  4. The U.S. National Plan for Civil Earth Observations

    NASA Astrophysics Data System (ADS)

    Stryker, T.; Clavin, C.; Gallo, J.

    2014-12-01

    Globally, the United States Government is one of the largest providers of environmental and Earth-system data. As the nation's Earth observation capacity has grown, so have the complexity and challenges associated with managing Earth observation systems and related data holdings. In July 2014, the White House Office of Science and Technology Policy released the first-ever National Plan for Civil Earth Observations to address these challenges. The Plan provides a portfolio-management-based framework for maximizing the value of Federal Earth observations. The Plan identifies Federal priorities for Earth observations and improved management of their data. Through routine assessments, expanding data management efforts, interagency planning, and international collaboration, OSTP and its partner agencies will seek to ensure the continued provision of and access to key Earth observation data, which support a broad range of public services and research programs. The presenters will provide a detailed review of the components of the National Plan, its impacts across the Federal agencies involved in Earth observations, and associated efforts to enable interagency coordination.

  5. Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea

    USGS Publications Warehouse

    Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham

    2011-01-01

    The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.
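
    A minimal sketch (one common convention, not the authors' code) of the Coulomb failure stress change such models evaluate; positive values move the receiver fault toward failure.

    def coulomb_stress_change(d_tau_mpa: float, d_sigma_n_mpa: float,
                              mu_eff: float = 0.4) -> float:
        """dCFS = d_tau + mu' * d_sigma_n, with the shear stress change
        resolved in the slip direction, the normal stress change positive
        for unclamping, and an effective friction mu' that folds in pore
        pressure effects."""
        return d_tau_mpa + mu_eff * d_sigma_n_mpa

    # Hypothetical inputs (MPa): lake loading plus stepover slip that raise
    # shear stress and unclamp the fault give a positive, failure-promoting
    # stress change on the southern San Andreas.
    print(coulomb_stress_change(0.05, 0.02))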

  6. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of time of insertion and system workload. For the fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation.
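
    A minimal sketch (illustrative only, unrelated to the FTMP implementation) of the core mechanism of software-implemented fault insertion: corrupting a stored value by flipping a single bit, which immediately exercises the error detection mechanisms.

    def flip_bit(word: int, bit: int) -> int:
        """Return word with the given bit inverted (single-bit fault model)."""
        return word ^ (1 << bit)

    memory = [0x0F, 0xA5, 0x3C]         # hypothetical memory image
    memory[1] = flip_bit(memory[1], 7)  # inject a fault into word 1, bit 7
    assert memory[1] == 0x25            # 0xA5 -> 0x25 after the flip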

  7. Development, Interaction and Linkage of Normal Fault Segments along the 100-km Bilila-Mtakataka Fault, Malawi

    NASA Astrophysics Data System (ADS)

    Fagereng, A.; Hodge, M.; Biggs, J.; Mdala, H. S.; Goda, K.

    2016-12-01

    Faults grow through the interaction and linkage of isolated fault segments. Continuous fault systems are those where segments interact, link and may slip synchronously, whereas non-continuous fault systems comprise isolated faults. As seismic moment is related to fault length (Wells and Coppersmith, 1994), understanding whether a fault system is continuous or not is critical in evaluating seismic hazard. Maturity may be a control on fault continuity: immature, low-displacement faults are typically assumed to be non-continuous. Here, we study two overlapping, 20 km long, normal fault segments of the N-S striking Bilila-Mtakataka fault, Malawi, in the southern section of the East African Rift System. Despite its relative immaturity, previous studies concluded that the Bilila-Mtakataka fault is continuous for its entire 100 km length, with the most recent event equating to an Mw 8.0 earthquake (Jackson and Blenkinsop, 1997). We explore whether segment geometry and the relationship to pre-existing high-grade metamorphic foliation have influenced segment interaction and fault development. Fault geometry and scarp height are constrained by DEMs derived from SRTM, Pleiades and 'Structure from Motion' photogrammetry using a UAV, alongside direct field observations. The segment strikes differ on average by 10°, but by up to 55° at their adjacent tips. The southern segment is sub-parallel to the foliation, whereas the northern segment is highly oblique to the foliation. Geometrical surface discontinuities suggest two isolated faults; however, displacement-length profiles and Coulomb stress change models suggest segment interaction, with potential for linkage at depth. Further work must be undertaken on other segments to assess the continuity of the entire fault, and to conclude whether an earthquake greater than the maximum instrumentally recorded (the 1910 M 7.4 Rukwa earthquake) is possible.
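
    A minimal sketch of the magnitude-length scaling invoked above, Mw = a + b*log10(L); the coefficients below are the commonly quoted Wells and Coppersmith (1994) all-slip-type values for surface rupture length and should be treated as illustrative.

    import math

    def mw_from_rupture_length(length_km: float, a: float = 5.08,
                               b: float = 1.16) -> float:
        """Empirical moment magnitude from surface rupture length (km)."""
        return a + b * math.log10(length_km)

    print(mw_from_rupture_length(20.0))   # one ~20 km segment: ~Mw 6.6
    print(mw_from_rupture_length(100.0))  # linked 100 km rupture: ~Mw 7.4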

  8. Global strike-slip fault distribution on Enceladus reveals mostly left-lateral faults

    NASA Astrophysics Data System (ADS)

    Martin, E. S.; Kattenhorn, S. A.

    2013-12-01

    Within the outer solar system, normal faults are a dominant tectonic feature; however, strike-slip faults have played a role in modifying the surfaces of many icy bodies, including Europa, Ganymede, and Enceladus. Large-scale tectonic deformation in icy shells develops in response to stresses caused by a range of mechanisms including polar wander, despinning, volume changes, orbital recession/decay, diurnal tides, and nonsynchronous rotation (NSR). Icy shells often preserve this record of tectonic deformation as patterns of fractures that can be used to identify the source of stress responsible for creating the patterns. Previously published work on Jupiter's moon Europa found that right-lateral strike-slip faults predominantly formed in the southern hemisphere and left-lateral strike-slip faults in the northern hemisphere. This pattern suggested they were formed in the past by stresses induced by diurnal tidal forcing, and were then rotated into their current longitudinal positions by NSR. We mapped the distribution of strike-slip faults on Enceladus and used kinematic indicators, including tailcracks and en echelon fractures, to determine their sense of slip. Tailcracks are secondary fractures that form as a result of concentrations of stress at the tips of slipping faults with geometric patterns dictated by the slip sense. A total of 31 strike-slip faults were identified, nine of which were right-lateral faults, all distributed in a seemingly random pattern across Enceladus's surface, in contrast to Europa. Additionally, there is a dearth of strike-slip faults within the tectonized terrains centered at 90°W and within the polar regions north and south of 60°N and 60°S, respectively. The lack of strike-slip faults in the north polar region may be explained, in part, by limited data coverage. The south polar terrain (SPT), characterized by the prominent tiger stripes and south polar dichotomy, yielded no discrete strike-slip faults. This does not suggest that

  9. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    NASA Astrophysics Data System (ADS)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new

  10. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  11. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  12. Illite authigenesis during faulting and fluid flow - a microstructural study of fault rocks

    NASA Astrophysics Data System (ADS)

    Scheiber, Thomas; Viola, Giulio; van der Lelij, Roelant; Margreth, Annina

    2017-04-01

    Authigenic illite can form synkinematically during slip events along brittle faults. In addition, it can crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, is indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest (<0.1 µm) fraction in the early to middle Jurassic. At site one, high-resolution thin-section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro

  13. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M < ~3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (those with small interhypocentral distances) have very similar focal mechanisms, implying that although faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2-50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.
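
    A minimal sketch (synthetic inputs, simplified metric) of the pairwise comparison described above: angular difference between mechanisms binned by interhypocentral distance. Each mechanism is reduced here to a P-axis unit vector as a crude proxy; the full analysis would use a proper mechanism rotation angle.

    import numpy as np

    rng = np.random.default_rng(0)
    hypocenters = rng.uniform(0, 50, size=(200, 3))    # km, synthetic catalog
    p_axes = rng.normal(size=(200, 3))
    p_axes /= np.linalg.norm(p_axes, axis=1, keepdims=True)

    i, j = np.triu_indices(len(hypocenters), k=1)      # every event pair
    dist = np.linalg.norm(hypocenters[i] - hypocenters[j], axis=1)
    cosang = np.abs(np.sum(p_axes[i] * p_axes[j], axis=1))  # axes are unsigned
    ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Mean angular difference per 5 km distance bin: short-range homogeneity
    # would appear as small angles in the first bins.
    for lo in range(0, 50, 5):
        sel = (dist >= lo) & (dist < lo + 5)
        if sel.any():
            print(f"{lo:2d}-{lo + 5:2d} km: {ang[sel].mean():5.1f} deg")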

  14. Spatiotemporal patterns of fault slip rates across the Central Sierra Nevada frontal fault zone

    NASA Astrophysics Data System (ADS)

    Rood, Dylan H.; Burbank, Douglas W.; Finkel, Robert C.

    2011-01-01

    Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at the Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection
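
    A minimal sketch (hypothetical numbers) of the calculation behind such estimates: marker offset divided by 10Be exposure age, with fractional uncertainties combined in quadrature. Note that 1 m/ka equals 1 mm/yr.

    def slip_rate(offset_m, offset_err_m, age_ka, age_err_ka):
        """Mean slip rate (mm/yr) and propagated 1-sigma uncertainty."""
        rate = offset_m / age_ka
        frac = ((offset_err_m / offset_m) ** 2 + (age_err_ka / age_ka) ** 2) ** 0.5
        return rate, rate * frac

    rate, err = slip_rate(6.0, 1.0, 20.0, 2.0)  # e.g. a ~20 ka moraine offset
    print(f"{rate:.2f} +/- {err:.2f} mm/yr")    # ~0.30 +/- 0.06 mm/yr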

  15. Spatiotemporal Patterns of Fault Slip Rates Across the Central Sierra Nevada Frontal Fault Zone

    NASA Astrophysics Data System (ADS)

    Rood, D. H.; Burbank, D.; Finkel, R. C.

    2010-12-01

    We examine patterns in fault slip rates through time and space across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38-39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and Be-10 surface exposure dating, we define mean fault slip rates, and by utilizing markers of different ages (generally, ~20 ka and ~150 ka), we examine rates through time and interactions among multiple faults over 10-100 ky timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm/yr (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm/yr along the West Fork of the Carson River at Woodfords. Our data permit that rates are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm/yr at Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm/yr). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin reflects a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection, extension is accommodated within a diffuse zone of

  16. Late Quaternary Faulting in Southeastern Louisiana: A Natural Laboratory for Understanding Shallow Faulting in Deltaic Materials

    NASA Astrophysics Data System (ADS)

    Dawers, N. H.; McLindon, C.

    2017-12-01

    A synthesis of late Quaternary faults within the Mississippi River deltaic plain aims to provide a more accurate assessment of regional and local fault architecture, and of the interactions between faulting, sediment loading, salt withdrawal and compaction. This effort was initiated by the New Orleans Geological Society and has resulted in access to industry 3D seismic reflection data, as well as fault trace maps and various types of well data and biostratigraphy. An unexpected outgrowth of this project is a hypothesis that gravity-driven normal faults in deltaic settings may be good candidates for shallow aseismic and slow-slip phenomena. The late Quaternary fault population is characterized by several large, highly segmented normal fault arrays: the Baton Rouge-Tepetate fault zone, the Lake Pontchartrain-Lake Borgne fault zone, the Golden Meadow fault zone (GMFZ), and a major counter-regional salt withdrawal structure (the Bay Marchand-Timbalier Bay-Caillou Island salt complex and West Delta fault zone) that lies just offshore of southeastern Louisiana. In comparison to the other, more northerly fault zones, the GMFZ is still significantly salt-involved. Salt structures segment the GMFZ with fault tips ending near or within salt, resulting in highly localized fault- and compaction-related subsidence separated by shallow salt structures, which are inherently buoyant and virtually incompressible. At least several segments within the GMFZ are characterized by marsh breaks that formed aseismically over timescales of days to months, such as near Adams Bay and Lake Enfermer. One well-documented surface rupture adjacent to a salt dome propagated over a 3-day period in 1943. We suggest that Louisiana's coastal faults make excellent analogues for deltaic faults in general, and propose that a series of positive feedbacks keep them active in the near surface. These include differential sediment loading and compaction, weak fault zone materials, high fluid pressure, low elastic

  17. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that effective implementation of fault-tolerant software design techniques will impact system requirements, and suggest that retrofitting fault-tolerant software onto existing designs will be inefficient and may require system modification.
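
    A minimal sketch (illustrative, not the SIFT code) of the N-version programming idea tested in the experiment: run independently developed versions of a routine and accept the majority result, treating the absence of a majority as a detected fault.

    from collections import Counter

    def n_version_vote(versions, *args):
        """Run every version and return the majority answer."""
        results = [version(*args) for version in versions]
        value, count = Counter(results).most_common(1)[0]
        if count <= len(versions) // 2:
            raise RuntimeError("no majority: treat as a detected fault")
        return value

    # Three hypothetical implementations of the same function, one faulty.
    v1 = lambda x: x * x
    v2 = lambda x: x ** 2
    v3 = lambda x: x * x + 1   # seeded bug
    print(n_version_vote([v1, v2, v3], 4))  # majority answer: 16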

  18. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    NASA Astrophysics Data System (ADS)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures are controlled by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on its history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds above ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterland of the crest-to-forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  19. Fault pattern at the northern end of the Death Valley - Furnace Creek fault zone, California and Nevada

    NASA Technical Reports Server (NTRS)

    Liggett, M. A. (Principal Investigator); Childs, J. F.

    1974-01-01

    The author has identified the following significant results. The pattern of faulting associated with the termination of the Death Valley-Furnace Creek Fault Zone in northern Fish Lake Valley, Nevada was studied in ERTS-1 MSS color composite imagery and color IR U-2 photography. Imagery analysis was supported by field reconnaissance and low altitude aerial photography. The northwest-trending right-lateral Death Valley-Furnace Creek Fault Zone changes northward to a complex pattern of discontinuous dip slip and strike slip faults. This fault pattern terminates to the north against an east-northeast trending zone herein called the Montgomery Fault Zone. No evidence for continuation of the Death Valley-Furnace Creek Fault Zone is recognized north of the Montgomery Fault Zone. Penecontemporaneous displacement in the Death Valley-Furnace Creek Fault Zone, the complex transitional zone, and the Montgomery Fault Zone suggests that the systems are genetically related. Mercury mineralization appears to have been localized along faults recognizable in ERTS-1 imagery within the transitional zone and the Montgomery Fault Zone.

  20. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Principal fault displacements -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Tonagi, M.

    2016-12-01

    The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. There are two types of fault displacement related to an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates both principal and distributed fault displacement. For estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. We constructed a slip-distance relation for principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes in order to apply it to Japan, a subduction-dominated region. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types together (reverse and strike-slip faults). Since Takao et al. (2013), several inland earthquakes have occurred in Japan, so here we estimate distance-displacement functions separately for the strike-slip and reverse fault types, adding the new fault displacement data. To normalize the slip function data, several criteria have been proposed by different researchers. We normalized the principal fault displacement data by several methods and compared the resulting slip-distance functions. Data normalized by the total length of Japanese reverse faults did not show a particular trend in the slip-distance relation. In the case of segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We will also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), for strike-slip faults the normalized displacement decreases toward the edges of the fault. However, the Japanese strike-slip fault data do not decrease as markedly toward the end of the fault
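
    A minimal sketch (hypothetical profile) of the normalization step used to build such slip-distance functions: position along strike is scaled by the total rupture length and displacement by the maximum displacement, so profiles from different earthquakes can be stacked and compared.

    import numpy as np

    x_km = np.array([0.0, 5.0, 12.0, 20.0, 28.0, 35.0, 40.0])  # site positions
    d_m = np.array([0.1, 0.8, 1.9, 2.4, 1.6, 0.7, 0.05])       # principal slip

    x_norm = x_km / x_km.max()   # l/L in [0, 1]
    d_norm = d_m / d_m.max()     # D/Dmax in [0, 1]
    for xn, dn in zip(x_norm, d_norm):
        print(f"l/L={xn:.2f}  D/Dmax={dn:.2f}")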

  1. EarthChem and SESAR: Data Resources and Interoperability for EarthScope Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Walker, D.; Block, K.; Vinay, S.; Ash, J.

    2008-12-01

    Data management within the EarthScope Cyberinfrastructure needs to pursue two goals in order to advance and maximize the broad scientific application and impact of the large volumes of observational data acquired by EarthScope facilities: (a) to provide access to all data acquired by EarthScope facilities, and to promote their use by broad audiences, and (b) to facilitate discovery of, access to, and integration of multi-disciplinary data sets that complement EarthScope data in support of EarthScope science. EarthChem and SESAR, the System for Earth Sample Registration, are two projects within the Geoinformatics for Geochemistry program that offer resources for EarthScope CI. EarthChem operates a data portal that currently provides access to >13 million analytical values for >600,000 samples, more than half of which are from North America, including data from the USGS and all data from the NAVDAT database, a web-accessible repository for age, chemical and isotopic data from Mesozoic and younger igneous rocks in western North America. The new EarthChem GEOCHRON database will house data collected in association with GeoEarthScope, storing and serving geochronological data submitted by participating facilities. The EarthChem Deep Lithosphere Dataset is a compilation of petrological data for mantle xenoliths, initiated in collaboration with GeoFrame to complement geophysical endeavors within EarthScope science. The EarthChem Geochemical Resource Library provides a home for geochemical and petrological data products and data sets. Parts of the digital data in EarthScope CI refer to physical samples such as drill cores, igneous rocks, or water and gas samples, collected, for example, by SAFOD or by EarthScope science projects and acquired through lab-based analysis. Management of sample-based data requires the use of global unique identifiers for samples, so that distributed data for individual samples generated in different labs and published in different papers can be

  2. Satellite Detection of the Convection Generated Stresses in Earth

    NASA Technical Reports Server (NTRS)

    Liu, Han-Shou; Kolenkiewicz, Ronald; Li, Jin-Ling; Chen, Jiz-Hong

    2003-01-01

    We review research developments on satellite detection of the convection generated stresses in the Earth for seismic hazard assessment and Earth resource survey. Particular emphasis is laid upon recent progress and results of stress calculations from which the origin and evolution of the tectonic features on Earth's surface can be scientifically addressed. An important aspect of the recent research development in tectonic stresses relative to earthquakes is the implications for earthquake forecasting and prediction. We have demonstrated that earthquakes occur on the ring of fire around the Pacific in response to the tectonic stresses induced by mantle convection. We propose a systematic global assessment of the seismic hazard based on variations of tectonic stresses in the Earth as observed by satellites. This space geodynamic approach for assessing the seismic hazard is unique in that it can pinpoint the triggering stresses for large earthquakes without ambiguities of geological structures, fault geometries, and other tectonic properties. Also, it is distinct from the probabilistic seismic hazard assessment models in the literature, which are based only on extrapolations of available earthquake data.

  3. On-line early fault detection and diagnosis of municipal solid waste incinerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Jinsong; Huang Jianchao; Sun Wei

    A fault detection and diagnosis framework is proposed in this paper for early fault detection and diagnosis (FDD) of municipal solid waste incinerators (MSWIs) in order to improve the safety and continuity of production. In this framework, principal component analysis (PCA), one of the multivariate statistical technologies, is used for detecting abnormal events, while rule-based reasoning performs the fault diagnosis and consequence prediction, and also generates recommendations for fault mitigation once an abnormal event is detected. A software package, SWIFT, is developed based on the proposed framework, and has been applied in an actual industrial MSWI. The application shows that automated real-time abnormal situation management (ASM) of the MSWI can be achieved by using SWIFT, with an industrially acceptable low rate of wrong diagnosis, resulting in improved process continuity and environmental performance of the MSWI.
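
    A minimal sketch (synthetic data, the standard technique rather than the SWIFT internals) of the PCA detection step: fit PCA on normal-operation data, then flag a sample whose squared prediction error (the Q or SPE statistic) exceeds an empirical threshold.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(500, 8))      # normal-operation samples

    pca = PCA(n_components=3).fit(X_train)
    resid = X_train - pca.inverse_transform(pca.transform(X_train))
    q_train = (resid ** 2).sum(axis=1)
    q_limit = np.percentile(q_train, 99)     # simple empirical 99% limit

    x_new = rng.normal(size=(1, 8))
    x_new[0, 2] += 6.0                       # simulated sensor fault
    q_new = ((x_new - pca.inverse_transform(pca.transform(x_new))) ** 2).sum()
    print("abnormal event" if q_new > q_limit else "normal")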

  4. Building Thematic and Integrated Services for European Solid Earth Sciences: the EPOS Integrated Approach

    NASA Astrophysics Data System (ADS)

    Harrison, M.; Cocco, M.

    2017-12-01

    EPOS (European Plate Observing System) has been designed with the vision of creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the EPOS mission is to integrate the diverse and advanced European Research Infrastructures for solid Earth science relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth System. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth's surface dynamics. To accomplish its mission, EPOS is engaging different stakeholders, to allow the Earth sciences to open new horizons in our understanding of the planet. EPOS also aims at contributing to prepare society for geo-hazards and to responsibly manage the exploitation of geo-resources. Through integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and human welfare. The research infrastructures (RIs) that EPOS is coordinating include: i) distributed geophysical observing systems (seismological and geodetic networks); ii) local observatories (including geomagnetic, near-fault and volcano observatories); iii) analytical and experimental laboratories; iv) integrated satellite data and geological information services; v) new services for natural and anthropogenic hazards; vi) access to geo-energy test beds. Here we present the activities planned for the implementation phase focusing on the TCS, the ICS and on their interoperability. We will discuss the data, data-products, software and services (DDSS) presently under

  5. Stress sensitivity of fault seismicity: A comparison between limited-offset oblique and major strike-slip faults

    USGS Publications Warehouse

    Parsons, T.; Stein, R.S.; Simpson, R.W.; Reasenberg, P.A.

    1999-01-01

    We present a new three-dimensional inventory of the southern San Francisco Bay area faults and use it to calculate stress applied principally by the 1989 M = 7.1 Loma Prieta earthquake and to compare fault seismicity rates before and after 1989. The major high-angle right-lateral faults exhibit a different response to the stress change than do minor oblique (right-lateral/thrust) faults. Seismicity on oblique-slip faults in the southern Santa Clara Valley thrust belt increased where the faults were unclamped. The strong dependence of seismicity change on normal stress change implies a high coefficient of static friction. In contrast, we observe that faults with significant offset (>50-100 km) behave differently; microseismicity on the Hayward fault diminished where right-lateral shear stress was reduced and where it was unclamped by the Loma Prieta earthquake. We observe a similar response on the San Andreas fault zone in southern California after the Landers earthquake sequence. Additionally, the offshore San Gregorio fault shows a seismicity rate increase where right-lateral/oblique shear stress was increased by the Loma Prieta earthquake despite also being clamped by it. These responses are consistent with either a low coefficient of static friction or high pore fluid pressures within the fault zones. We can explain the different behavior of the two styles of faults if those with large cumulative offset become impermeable through gouge buildup; coseismically pressurized pore fluids could be trapped and negate imposed normal stress changes, whereas in more limited offset faults, fluids could rapidly escape. The difference in behavior between minor and major faults may explain why frictional failure criteria that apply intermediate coefficients of static friction can be effective in describing the broad distributions of aftershocks that follow large earthquakes, since many of these events occur both inside and outside major fault zones.

  6. Deformation associated with continental normal faults

    NASA Astrophysics Data System (ADS)

    Resor, Phillip G.

    Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  7. Active fault databases: building a bridge between earthquake geologists and seismic hazard practitioners, the case of the QAFI v.3 database

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João

    2017-08-01

    Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to maintain a standard of quality and uniform criteria. Desirably, active fault databases should somehow indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of the maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2, either for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.

  8. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    USGS Publications Warehouse

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip.

  9. Intelligent fault management for the Space Station active thermal control system

    NASA Technical Reports Server (NTRS)

    Hill, Tim; Faltisco, Robert M.

    1992-01-01

    The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) are compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware-related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
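
    A minimal sketch (hypothetical sensor and threshold) of the hybrid scheme described above: a model-based residual check wrapped in a simple recovery rule, the basic pattern behind combining rules with model-based reasoning for FDIR.

    def fdir_check(measured_temp_c: float, predicted_temp_c: float,
                   limit_c: float = 5.0) -> str:
        """Compare a measurement against a model prediction and apply a rule."""
        residual = abs(measured_temp_c - predicted_temp_c)
        if residual > limit_c:
            return "FAULT: isolate loop and reconfigure to backup"  # recovery rule
        return "OK"

    print(fdir_check(21.0, 20.2))  # OK
    print(fdir_check(33.0, 20.2))  # FAULT: isolate loop and reconfigure to backup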

  10. Relationship between displacement and gravity change of Uemachi faults and surrounding faults of Osaka basin, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Inoue, N.; Kitada, N.; Kusumoto, S.; Itoh, Y.; Takemura, K.

    2011-12-01

    The Osaka basin surrounded by the Rokko and Ikoma Ranges is one of the typical Quaternary sedimentary basins in Japan. The Osaka basin has been filled by the Pleistocene Osaka group and the later sediments. Several large cities and metropolitan areas, such as Osaka and Kobe are located in the Osaka basin. The basin is surrounded by E-W trending strike slip faults and N-S trending reverse faults. The N-S trending 42-km-long Uemachi faults traverse in the central part of the Osaka city. The Uemachi faults have been investigated for countermeasures against earthquake disaster. It is important to reveal the detailed fault parameters, such as length, dip and recurrence interval, so on for strong ground motion simulation and disaster prevention. For strong ground motion simulation, the fault model of the Uemachi faults consist of the two parts, the north and south parts, because of the no basement displacement in the central part of the faults. The Ministry of Education, Culture, Sports, Science and Technology started the project to survey of the Uemachi faults. The Disaster Prevention Institute of Kyoto University is carried out various surveys from 2009 to 2012 for 3 years. The result of the last year revealed the higher fault activity of the branch fault than main faults in the central part (see poster of "Subsurface Flexure of Uemachi Fault, Japan" by Kitada et al., in this meeting). Kusumoto et al. (2001) reported that surrounding faults enable to form the similar basement relief without the Uemachi faults model based on a dislocation model. We performed various parameter studies for dislocation model and gravity changes based on simplified faults model, which were designed based on the distribution of the real faults. The model was consisted 7 faults including the Uemachi faults. The dislocation and gravity change were calculated based on the Okada et al. (1985) and Okubo et al. (1993) respectively. The results show the similar basement displacement pattern to the

  11. Experimental investigation into the fault response of superconducting hybrid electric propulsion electrical power system to a DC rail to rail fault

    NASA Astrophysics Data System (ADS)

    Nolan, S.; Jones, C. E.; Munro, R.; Norman, P.; Galloway, S.; Venturumilli, S.; Sheng, J.; Yuan, W.

    2017-12-01

    Hybrid electric propulsion aircraft are proposed to improve overall aircraft efficiency, enabling future rising demands for air travel to be met. The development of appropriate electrical power systems to provide thrust for the aircraft is a significant challenge due to the much higher required power generation capacity levels and the complexity of the aero-electrical power systems (AEPS). The efficiency and weight of the AEPS are critical to ensure that the benefits of hybrid propulsion are not mitigated by the electrical power train. Hence it is proposed that for larger aircraft (~200 passengers) superconducting power systems are used to meet target power densities. Central to the design of the hybrid propulsion AEPS is a robust and reliable electrical protection and fault management system. It is known from previous studies that the choice of protection system may have a significant impact on the overall efficiency of the AEPS. Hence an informed design process which considers the key trades between choice of cable and protection requirements is needed. To date, the response of a superconducting power system to a rail-to-rail fault on a voltage source converter interfaced DC link has only been investigated using simulation models validated against theoretical values from the literature. This paper presents the experimentally obtained fault response of a variety of different types of superconducting tape for a rail-to-rail DC fault. The paper then uses these results as a platform to identify key trades between protection requirements and cable design, providing guidelines to enable future informed decisions to optimise hybrid propulsion electrical power system and protection design.
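
    A minimal sketch (lumped-parameter assumption, not the reported experiment) of why a quenching superconductor limits a DC rail-to-rail fault: once the current exceeds the critical value the tape adds resistance, and the R-L fault loop settles far below the prospective current.

    V, L, R_line = 270.0, 1e-3, 0.01  # hypothetical bus volts, henries, ohms
    I_c, R_quench = 2000.0, 0.5      # critical current (A), quenched resistance
    dt, i, quenched = 1e-5, 0.0, False

    for _ in range(2000):            # 20 ms of forward-Euler integration
        if i > I_c:
            quenched = True          # latched quench (simplified thermal model)
        R = R_line + (R_quench if quenched else 0.0)
        i += dt * (V - R * i) / L
    print(f"limited current ~{i:.0f} A vs prospective {V / R_line:.0f} A")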

  12. 3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography

    NASA Astrophysics Data System (ADS)

    Withers, Kyle; Moschetti, Morgan

    2017-04-01

    Studies have found that the Wasatch Fault has experienced successive large magnitude (>Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies (< 1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies (~5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation-by-parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and resulting ground motion. We utilize Waveqlab3D's computational efficiency to model wave propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and the inter- and intra-event variability.
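
    A minimal sketch (standard spectral synthesis, not the actual Waveqlab3D input) of generating one realization of self-similar rough-fault topography: random phases with a power-law amplitude spectrum yield roughness across all modeled length scales.

    import numpy as np

    rng = np.random.default_rng(2)
    n, dx = 4096, 25.0                   # samples and grid spacing (m)
    k = np.fft.rfftfreq(n, d=dx)         # wavenumbers (1/m)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** -1.5              # 1D profile PSD ~ k^-3 (self-similar)
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, size=k.size))
    h = np.fft.irfft(spec, n=n)
    h *= 10.0 / h.std()                  # scale to a hypothetical 10 m RMS height
    print(h.min(), h.max())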

  13. Stacking faults density driven collapse of magnetic energy in hcp-cobalt nano-magnets

    NASA Astrophysics Data System (ADS)

    Nong, H. T. T.; Mrad, K.; Schoenstein, F.; Piquemal, J.-Y.; Jouini, N.; Leridon, B.; Mercone, S.

    2017-06-01

    Cobalt nanowires with different shape parameters were synthesized via the polyol process. By calculating the magnetic energy product, (BH)max, both for dried nano-powder and for nanowires in their synthesis solution, we observed that the (BH)max values were unexpectedly independent of nanowire shape. A good alignment of the nanowires leads to a higher (BH)max value. Our results show that the key parameter driving the magnetic energy product of the cobalt nanowires is the stacking fault density. An exponential collapse of the magnetic energy is observed at very low percentages of structural faults. Cobalt nanowires with almost perfect hcp crystalline structures should present high magnetic energy, which is promising for application in rare-earth-free permanent magnets. Oral talk at the 8th International Workshop on Advanced Materials Science and Nanotechnology (IWAMSN2016), 8-12 November 2016, Ha Long City, Vietnam.

  14. Seismicity of the Earth 1900-2007

    USGS Publications Warehouse

    Tarr, Arthur C.; Villaseñor, Antonio; Furlong, Kevin P.; Rhea, Susan; Benz, Harley M.

    2010-01-01

    This map illustrates more than one century of global seismicity in the context of global plate tectonics and the Earth's physiography. Primarily designed for use by earth scientists and engineers interested in earthquake hazards of the 20th and early 21st centuries, this map provides a comprehensive overview of strong earthquakes since 1900. The map clearly identifies the location of the 'great' earthquakes (M8.0 and larger) and the rupture area, if known, of the M8.3 or larger earthquakes. The earthquake symbols are scaled proportional to the moment magnitude and therefore to the area of faulting, thus providing a better understanding of the relative sizes and distribution of earthquakes in the magnitude range 5.5 to 9.5. Plotting the known rupture area of the largest earthquakes also provides a better appreciation of the extent of some of the most famous and damaging earthquakes in modern history. All earthquakes shown on the map were carefully relocated using a standard earth reference model and standardized location procedures, thereby eliminating gross errors and biases in locations of historically important earthquakes that are often found in numerous seismicity catalogs.

  15. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, the Fault-Tree Compiler program, is a reliability-analysis software tool used to calculate the probability of the top event of a fault tree. Five different types of gates are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language of FTC is easy to understand and use. The program supports a hierarchical fault-tree-definition feature that simplifies description of the tree and reduces execution time. The solution technique is implemented in FORTRAN, and the user interface in Pascal. FTC was written to run on a DEC VAX computer under the VMS operating system.
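    The gate semantics listed in the abstract are easy to sketch outside of FTC itself. The fragment below is an illustration in Python rather than FTC's FORTRAN solution technique; the tree encoding is invented, and basic events are assumed independent and not repeated across the tree.

    ```python
    import math

    def m_of_n(m, ps):
        """P(at least m of the independent input events occur):
        Poisson-binomial tail via dynamic programming over the count."""
        dist = [1.0]                          # dist[k] = P(k inputs occurred so far)
        for p in ps:
            dist = [(dist[k] if k < len(dist) else 0.0) * (1.0 - p)
                    + (dist[k - 1] * p if k > 0 else 0.0)
                    for k in range(len(dist) + 1)]
        return sum(dist[m:])

    def top_event(node):
        """node is ('BASIC', p), ('M_OF_N', m, children) or (gate, children)."""
        if node[0] == "BASIC":
            return node[1]
        if node[0] == "M_OF_N":
            return m_of_n(node[1], [top_event(c) for c in node[2]])
        gate, children = node
        ps = [top_event(c) for c in children]
        if gate == "AND":
            return math.prod(ps)
        if gate == "OR":
            return 1.0 - math.prod(1.0 - p for p in ps)
        if gate == "XOR":                     # exactly one of two inputs
            return ps[0] + ps[1] - 2.0 * ps[0] * ps[1]
        if gate == "INVERT":
            return 1.0 - ps[0]
        raise ValueError(gate)

    # invented example: pump fails AND (valve fails OR 2-of-3 sensors fail)
    tree = ("AND", [("BASIC", 1e-3),
                    ("OR", [("BASIC", 5e-4),
                            ("M_OF_N", 2, [("BASIC", 1e-2)] * 3)])])
    print(top_event(tree))                    # ~7.98e-07
    ```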

  16. Structural Analysis of the Pärvie Fault in Northern Scandinavia

    NASA Astrophysics Data System (ADS)

    Baeckstroem, A.; Rantakokko, N.; Ask, M. V.

    2011-12-01

    governs the destabilization of a structure, such as the Pärvie fault, rather than the stresses induced by the weight of the ice sheet (Lund, 2005). This is a presentation of the first part of the structural analysis of the brittle structures around the Pärvie fault, undertaken to evaluate its brittle deformation history and to attempt to constrain the paleostress fields causing these deformations. References: Kukkonen, I.T., Olesen, O., Ask, M.V.S., and the PFDP Working Group, 2010. Postglacial faults in Fennoscandia: targets for scientific drilling. GFF, 132:71-81. Kukkonen, I.T., Ask, M.V.S., Olesen, O., 2011. Postglacial Fault Drilling in Northern Europe: Workshop in Skokloster, Sweden. Scientific Drilling, 11, doi:10.2204/iodp.sd.11.08.2011. Lagerbäck, R. & Sundh, M., 2008. Early Holocene faulting and paleoseismicity in northern Sweden. Geological Survey of Sweden, Research Paper C 836, 80 p. Lund, B., Schmidt, P., Hieronymus, C., 2009. Stress evolution and fault stability during the Weichselian glacial cycle. Swedish Nuclear Fuel and Waste Management Co., Stockholm, TR-09-15, 106 p. Riad, L., 1990. The Pärvie fault, Northern Sweden. Uppsala University, Research Report 63, 48 p.

  17. Surface fault rupture during the Mw 7.8 Kaikoura earthquake, New Zealand, with specific comment on the Kekerengu Fault - one of the country's fastest slipping onland active faults

    NASA Astrophysics Data System (ADS)

    Van Dissen, Russ; Little, Tim

    2017-04-01

    The Mw 7.8 Kaikoura earthquake of 14 November 2016 (NZDT) was a complex event. It involved ground-surface (or seafloor) fault rupture on at least a dozen onland or offshore faults, and subsurface rupture on a handful of additional faults. Most of the surface ruptures involved previously known (or suspected) active faults, but at least two of the ruptured faults were hitherto unrecognised as active. The southwest to northeast extent of surface fault rupture, as generalised by two straight-line segments, is approximately 180 km, though this is a minimum for the collective length of surface rupture due to multiple overlapping faults with various orientations. Surface rupture displacements on specific faults involved in the Kaikoura earthquake span approximately two orders of magnitude. For example, maximum surface displacement on the Heaver's Creek Fault is cm- to dm-scale in size, whereas maximum surface displacement on the nearby Kekerengu Fault is approximately 10-12 m (predominantly in a dextral sense). The Kekerengu Fault has a Late Pleistocene slip rate of 20-26 mm/yr and is possibly the second fastest slipping onland fault in New Zealand, behind the Alpine Fault. Located in the northeastern South Island of New Zealand, the Kekerengu Fault - along with the Hope Fault to the southwest and the Needles Fault offshore to the northeast - comprises the fastest slipping elements of the Pacific-Australian plate boundary in this part of the country. In January 2016 (about ten months prior to the Kaikoura earthquake) three paleo-earthquake investigation trenches were excavated across pronounced traces of the Kekerengu Fault at two locations. These were the first such trenches dug and evaluated across the fault. All three trenches displayed abundant evidence of past surface fault ruptures (three surface ruptures in the last approximately 1,200 years, four now including the 2016 rupture). An interesting aspect of the 2016 rupture is that two of the trenches

  18. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to detect and locate faults in PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used to calculate a fault weight factor. A fault threshold is defined to determine faults, and shading is taken into account to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checks, statistics, real-time prediction, and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis is accurate and the system works well, confirming the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
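    A hedged reading of the block-level scheme can be sketched in a few lines. In the following Python fragment the function name, thresholds, and the exact form of the fault weight factor are assumptions made for illustration; only the overall idea (per-block mean currents compared against the array mean, with a separate shading band to reduce misjudgments) is taken from the abstract.

    ```python
    import numpy as np

    def diagnose_blocks(I, k_fault=0.6, k_shade=0.85):
        """Flag faulty PV blocks from per-block mean currents I (16x16).
        w = I / mean(I) plays the role of the fault weight factor; blocks
        below k_fault are declared faulty, while the band between the two
        thresholds is attributed to shading. Thresholds are invented."""
        w = I / I.mean()
        status = np.full(I.shape, "ok", dtype=object)
        status[w < k_shade] = "possible shading"
        status[w < k_fault] = "fault"
        return status

    I = np.random.default_rng(1).normal(8.0, 0.2, (16, 16))   # synthetic amps
    I[3, 7] = 2.0                                             # seeded fault
    print(np.argwhere(diagnose_blocks(I) == "fault"))         # -> [[3 7]]
    ```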

  19. Mechanics of slip and fracture along small faults and simple strike-slip fault zones in granitic rock

    NASA Astrophysics Data System (ADS)

    Martel, Stephen J.; Pollard, David D.

    1989-07-01

    We exploit quasi-static fracture mechanics models for slip along pre-existing faults to account for the fracture structure observed along small exhumed faults and small segmented fault zones in the Mount Abbot quadrangle of California and to estimate stress drop and shear fracture energy from geological field measurements. Along small strike-slip faults, cracks that splay from the faults are common only near fault ends. In contrast, many cracks splay from the boundary faults at the edges of a simple fault zone. Except near segment ends, the cracks preferentially splay into a zone. We infer that shear displacement discontinuities (slip patches) along a small fault propagated to near the fault ends and caused fracturing there. Based on elastic stress analyses, we suggest that slip on one boundary fault triggered slip on the adjacent boundary fault, and that the subsequent interaction of the slip patches preferentially led to the generation of fractures that splayed into the zones away from segment ends and out of the zones near segment ends. We estimate the average stress drops for slip events along the fault zones as ~1 MPa and the shear fracture energy release rate during slip as 5 × 10² to 2 × 10⁴ J/m². This estimate is similar to those obtained from shear fracture of laboratory samples, but orders of magnitude less than those for large fault zones. These results suggest that the shear fracture energy release rate increases as the structural complexity of fault zones increases.

  20. TU-AB-BRD-03: Fault Tree Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunscombe, P.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: Learn how to design a process map for a radiotherapy process Learn

  1. Late Holocene earthquakes on the Toe Jam Hill fault, Seattle fault zone, Bainbridge Island, Washington

    USGS Publications Warehouse

    Nelson, A.R.; Johnson, S.Y.; Kelsey, H.M.; Wells, R.E.; Sherrod, B.L.; Pezzopane, S.K.; Bradley, L.A.; Koehler, R. D.; Bucknam, R.C.

    2003-01-01

    Five trenches across a Holocene fault scarp yield the first radiocarbon-measured earthquake recurrence intervals for a crustal fault in western Washington. The scarp, the first to be revealed by laser imagery, marks the Toe Jam Hill fault, a north-dipping backthrust to the Seattle fault. Folded and faulted strata, liquefaction features, and forest soil A horizons buried by hanging-wall-collapse colluvium record three, or possibly four, earthquakes between 2500 and 1000 yr ago. The most recent earthquake is probably the 1050-1020 cal. (calibrated) yr B.P. (A.D. 900-930) earthquake that raised marine terraces and triggered a tsunami in Puget Sound. Vertical deformation estimated from stratigraphic and surface offsets at trench sites suggests late Holocene earthquake magnitudes near M7, corresponding to surface ruptures >36 km long. Deformation features recording poorly understood latest Pleistocene earthquakes suggest that they were smaller than late Holocene earthquakes. Postglacial earthquake recurrence intervals based on 97 radiocarbon ages, most on detrital charcoal, range from ~12,000 yr to as little as a century or less; corresponding fault-slip rates are 0.2 mm/yr for the past 16,000 yr and 2 mm/yr for the past 2500 yr. Because the Toe Jam Hill fault is a backthrust to the Seattle fault, it may not have ruptured during every earthquake on the Seattle fault. But the earthquake history of the Toe Jam Hill fault is at least a partial proxy for the history of the rest of the Seattle fault zone.

  2. Sensor Webs in Digital Earth

    NASA Astrophysics Data System (ADS)

    Heavner, M. J.; Fatland, D. R.; Moeller, H.; Hood, E.; Schultz, M.

    2007-12-01

    The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). From power systems and instrumentation through data management, visualization, education, and public outreach, SEAMONSTER is designed with modularity in mind. We are utilizing virtual earth infrastructures to enhance both sensor web management and data access. We will describe how the design philosophy of using open, modular components contributes to the exploration of different virtual earth environments. We will also describe the sensor web physical implementation and how the many components have corresponding virtual earth representations. This presentation will provide an example of the integration of sensor webs into a virtual earth. We suggest that IPY sensor networks and sensor webs may integrate into virtual earth systems and provide an IPY legacy easily accessible to both scientists and the public. SEAMONSTER utilizes geobrowsers for education and public outreach, sensor web management, data dissemination, and enabling collaboration. We generate near-real-time auto-updating geobrowser files of the data. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers have made this project possible.

  3. The July 11, 1995 Myanmar-China earthquake: A representative event in the bookshelf faulting system of southeastern Asia observed from JERS-1 SAR images

    NASA Astrophysics Data System (ADS)

    Ji, Lingyun; Wang, Qingliang; Xu, Jing; Ji, Cunwei

    2017-03-01

    On July 11, 1995, an Mw 6.8 earthquake struck eastern Myanmar near the Chinese border; it is hereafter referred to as the 1995 Myanmar-China earthquake. Coseismic surface displacements associated with this event are identified from JERS-1 (Japanese Earth Resources Satellite-1) SAR (Synthetic Aperture Radar) images. The largest relative displacement reached 60 cm in the line-of-sight direction. We speculate that a previously unrecognized dextral strike-slip subvertical fault striking NW-SE was responsible for this event. The coseismic slip distribution on the fault planes is inverted based on the InSAR-derived deformation. The results indicate that the fault slip was confined to two lobes. The maximum slip reached approximately 2.5 m at a depth of 5 km in the northwestern part of the focal region. The inverted geodetic moment corresponds to a magnitude of approximately Mw 6.69, which is consistent with seismological results. The 1995 Myanmar-China earthquake is one of the largest recorded earthquakes to have occurred within the "bookshelf faulting" system between the Sagaing fault in Myanmar and the Red River fault in southwestern China.

  4. The influence of topographic stresses on faulting, emphasizing the 2008 Wenchuan, China earthquake rupture

    NASA Astrophysics Data System (ADS)

    Styron, R. H.; Hetland, E. A.; Zhang, G.

    2013-12-01

    The weight of large mountains produces stresses in the crust that locally may be on the order of tectonic stresses (10-100 MPa). These stresses have a significant and spatially variable deviatoric component that may be resolved as strong normal and shear stresses on range-bounding faults. In areas of high relief, the shear stress on faults can be comparable to inferred earthquake stress drops, and fault-normal stresses may be greater than 50 MPa; both may therefore influence fault rupture. Additionally, these stresses may be used to make inferences about the orientation and magnitude of tectonic stresses, for example by indicating a minimum stress that tectonic stress must overcome. We are studying these effects in several tectonic environments, such as the Longmen Shan (China), the Denali fault (Alaska, USA) and the Wasatch Fault Zone (Utah, USA). We calculate the full topographic stress tensor field in the crust in a study region by convolution of topography with Green's functions approximating stresses from a point load on the surface of an elastic halfspace, using the solution proposed by Liu and Zoback [1992]. The Green's functions are constructed from Boussinesq's solutions for a vertical point load on an elastic halfspace, as well as Cerruti's solutions for a horizontal surface point load, accounting for the irregular surface boundary and topographic spreading forces. The stress tensor field is then projected onto points embedded in the halfspace representing the faults, and the fault-normal and shear stresses at each point are calculated. Our primary focus has been on the 2008 Wenchuan earthquake, as this event occurred at the base of one of Earth's highest and steepest topographic fronts and had a complex and well-studied coseismic slip distribution, making it an ideal case study for evaluating topographic influence on faulting. We calculate the topographic stresses on the Beichuan and Pengguan faults, and compare the results to the coseismic slip
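    The convolution step can be illustrated with the Boussinesq vertical-stress term alone. The sketch below superposes sigma_zz = 3 P z^3 / (2 pi R^5) over a gridded topographic load; the grid, density, and query point are assumptions, and the study's full method also includes Cerruti's horizontal-load solutions and the remaining tensor components.

    ```python
    import numpy as np

    RHO, G = 2700.0, 9.81                      # rock density (kg/m^3), gravity (m/s^2)

    def sigma_zz(topo, dx, xq, yq, zq):
        """Vertical normal stress at depth zq beneath (xq, yq) due to the
        weight of the topography, superposing the Boussinesq point-load
        solution sigma_zz = 3 P z^3 / (2 pi R^5) over every grid cell."""
        ny, nx = topo.shape
        X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)
        P = RHO * G * topo * dx * dx           # point load per cell, N
        R = np.sqrt((X - xq) ** 2 + (Y - yq) ** 2 + zq ** 2)
        return np.sum(3.0 * P * zq ** 3 / (2.0 * np.pi * R ** 5))

    # Gaussian mountain, 3 km high and ~15 km wide, on a 100 km x 100 km grid
    n, dx = 200, 500.0
    i = (np.arange(n) - n // 2) * dx
    topo = 3000.0 * np.exp(-np.add.outer(i ** 2, i ** 2) / (2 * 15000.0 ** 2))
    print(sigma_zz(topo, dx, xq=(n // 2) * dx, yq=(n // 2) * dx, zq=10000.0) / 1e6, "MPa")
    ```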

  5. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

    We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform-slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to balance the reciprocal requirements of model resolution and estimation error in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of observed data to smoothness constraints is objectively determined. In this study, by also using ABIC to determine the optimal dip angle, we resolved the non-linearity of the inverse problem. We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained
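    The structure of the method, an outer search over the single non-linear parameter (dip) wrapped around a linear ABIC-tuned inversion, can be sketched as follows. This is a schematic reading, not the authors' code: build_G stands in for an elastic dislocation forward model, and the ABIC expression assumes a square, full-rank smoothing matrix so that its determinant term is an additive constant.

    ```python
    import numpy as np

    def abic(G, d, L, alpha2):
        """ABIC for d = G m + e with smoothness prior ||L m||^2 (after
        Yabuki & Matsu'ura, 1992), assuming L square and full-rank so
        log|L^T L| only adds a constant (dropped here)."""
        N, P = G.shape
        A = G.T @ G + alpha2 * (L.T @ L)
        m = np.linalg.solve(A, G.T @ d)
        s = np.sum((d - G @ m) ** 2) + alpha2 * np.sum((L @ m) ** 2)
        return N * np.log(s) - P * np.log(alpha2) + np.linalg.slogdet(A)[1]

    def best_dip(dips, alpha2_grid, build_G, d, L):
        """Brute-force outer search over dip; the inner search tunes the
        smoothness weight alpha2 by minimum ABIC for each candidate dip."""
        score = {dip: min(abic(build_G(dip), d, L, a2) for a2 in alpha2_grid)
                 for dip in dips}
        return min(score, key=score.get)

    # build_G(dip) must return the Green's function matrix for that dip;
    # it is a hypothetical stand-in for the user's dislocation forward model.
    ```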

  6. Rupture Dynamics and Ground Motion from Earthquakes on Rough Faults in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Bydlon, S. A.; Kozdon, J. E.; Duru, K.; Dunham, E. M.

    2013-12-01

    Heterogeneities in the material properties of Earth's crust scatter propagating seismic waves. The effects of scattered waves are reflected in the seismic coda and depend on the amplitude of the heterogeneities, their spatial arrangement, and the distance from source to receiver. In the vicinity of the fault, scattered waves influence the rupture process by introducing fluctuations in the stresses driving propagating ruptures. Further variability in the rupture process is introduced by the naturally occurring geometric complexity of fault surfaces, and the stress changes that accompany slip on rough surfaces. Our goal is to better understand the origin of complexity in the earthquake source process, and to quantify the relative importance of source complexity and scattering along the propagation path in causing incoherence of high-frequency ground motion. Using a 2D high-order finite difference rupture dynamics code, we nucleate ruptures on either flat or rough faults that obey strongly rate-weakening friction laws. These faults are embedded in domains with spatially varying material properties characterized by Von Karman autocorrelation functions and their associated power spectral density functions, with variations in wave speed of approximately 5 to 10%. Flat-fault simulations demonstrate that off-fault material heterogeneity, at least with this particular form and amplitude, has only a minor influence on the rupture process (i.e., fluctuations in slip and rupture velocity). In contrast, rupture histories on rough faults in both homogeneous and heterogeneous media include much larger short-wavelength fluctuations in slip and rupture velocity. We therefore conclude that source complexity is dominantly influenced by fault geometric complexity. To examine the contributions of scattering versus fault geometry on ground motions, we compute spatially averaged root-mean-square (RMS) acceleration values as a function of fault-perpendicular distance for a homogeneous medium and several

  7. Constraints on Fault Permeability from Helium and Heat Flow in the Los Angeles Basin

    NASA Astrophysics Data System (ADS)

    Garven, G.; Boles, J. R.

    2016-12-01

    Faults have profound controls on fluid flow in the Earth's crust. Faults affect the diagenesis of sediments, the migration of brines and petroleum, and the dynamics of hydrothermal mineralization. In southern California, the migration of petroleum and noble gases can be used to constrain fault permeability at both the formation and crustal scale. In the Los Angeles Basin, mantle-derived helium is a significant component of casing gas from deep production wells along the Newport-Inglewood Fault zone (NIFZ). Helium isotope ratios are as high as 5.3 Ra, indicating up to 66% mantle contribution along parts of this strike-slip fault zone (Boles et al., 2015). The 3He inversely correlates with CO2, a potential magmatic-derived carrier gas, and the δ13C of the CO2 in the 3He-rich samples is between 0 and -10 per mil, suggesting a mantle influence. The strong mantle-helium signal along the NIFZ is surprising, considering that the fault is currently in a transpressional state of stress (rather than extensional), has no history of recent magma emplacement, and lacks high geothermal gradients. Structurally it has been modeled as being truncated by a "potentially seismically active" décollement beneath the LA basin. But the geochemical data demonstrate that the NIFZ is a deep-seated fault connected with the mantle. Assuming that the helium migration is linked to the bulk fluid transport in the crust, we have used 1-D reactive mass transport theory to calculate a maximum inter-seismic Darcy flow rate of 2.2 cm yr⁻¹ and an intrinsic permeability of 160 microdarcys (1.6 × 10⁻¹⁶ m²), vertically averaged across the crust. Based on thermal Peclet numbers and numerical models for the basin, we show that fault-focused fluid flow is too slow to elevate heat flow around the NIFZ. Although heat flow data are sparse, there is generally no clear pattern of anomalous heat flow associated with the large strike-slip faults of southern California, suggesting that neither bulk fluid flow

  8. Fault Model Development for Fault Tolerant VLSI Design

    DTIC Science & Technology

    1988-05-01

    BRIDGING FAULTS: A bridging fault in a digital circuit connects two or more conducting paths of the circuit. The resistance... References: Melvin Breuer and Arthur Friedman, "Diagnosis and Reliable Design of Digital Systems", Computer Science Press, Inc., 1976; [Chandramouli, 1983].

  9. Heterogeneity in the Fault Damage Zone: a Field Study on the Borrego Fault, B.C., Mexico

    NASA Astrophysics Data System (ADS)

    Ostermeijer, G.; Mitchell, T. M.; Dorsey, M. T.; Browning, J.; Rockwell, T. K.; Aben, F. M.; Fletcher, J. M.; Brantut, N.

    2017-12-01

    The nature and distribution of damage around faults, and its impact on fault zone properties, has been a hot topic of research over the past decade. Understanding the mechanisms that control the formation of off-fault damage can shed light on the processes operating during the seismic cycle and on the nature of fault zone development. Recently published work has identified three broad zones of damage around most faults based on the type, intensity, and extent of fracturing: Tip, Wall, and Linking damage. Although these zones adequately characterise the general distribution of damage, little has been done to identify the nature of damage heterogeneity within those zones, and the distribution is often simplified to fit log-normal linear decay trends. Here, we attempt to characterise the distribution of fractures that make up the wall damage around seismogenic faults. To do so, we investigate an extensive two-dimensional fracture network exposed on a river-cut platform along the Borrego Fault, B.C., Mexico, 5 m wide and extending 20 m from the fault core into the damage zone. High-resolution fracture mapping of the outcrop, covering scales ranging over three orders of magnitude (cm to m), has allowed detailed observations of the 2D damage distribution within the fault damage zone. Damage profiles were obtained along several 1D transects perpendicular to the fault, and micro-damage was examined in thin sections at various locations around the outcrop for comparison. Analysis of the resulting fracture network indicates heterogeneities in damage intensity at decimetre scales, resulting from a patchy distribution of high- and low-intensity corridors and clusters. Such patchiness may contribute to inconsistencies in damage zone widths defined along 1D transects and to the observed variability of fracture densities around decay trends. How this distribution develops with fault maturity, and the scaling of heterogeneities above and below the observed range, will likely play a key role in

  10. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, an increase of ∼1 mm/yr over the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88 × 10¹⁹ N·m/yr, with fast-straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76 × 10¹⁹

  11. Strike-slip fault propagation and linkage via work optimization with application to the San Jacinto fault, California

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; McBeck, J.; Cooke, M. L.

    2013-12-01

    Over multiple earthquake cycles, strike-slip faults link to form through-going structures, as demonstrated by the continuous nature of the mature San Andreas fault system in California relative to the younger and more segmented San Jacinto fault system nearby. Despite its immaturity, the San Jacinto system accommodates between one third and one half of the slip along the boundary between the North American and Pacific plates. It therefore poses a significant seismic threat to southern California. Better understanding of how the San Jacinto system has evolved over geologic time and of current interactions between faults within the system is critical to assessing this seismic hazard accurately. Numerical models are well suited to simulating kilometer-scale processes, but models of fault system development are challenged by the multiple physical mechanisms involved. For example, laboratory experiments on brittle materials show that faults propagate and eventually join (hard-linkage) by both opening-mode and shear failure. In addition, faults interact prior to linkage through stress transfer (soft-linkage). The new algorithm GROW (GRowth by Optimization of Work) accounts for this complex array of behaviors by taking a global approach to fault propagation while adhering to the principles of linear elastic fracture mechanics. This makes GROW a powerful tool for studying fault interactions and fault system development over geologic time. In GROW, faults evolve to minimize the work (or energy) expended during deformation, thereby maximizing the mechanical efficiency of the entire system. Furthermore, the incorporation of both static and dynamic friction allows GROW models to capture fault slip and fault propagation in single earthquakes as well as over consecutive earthquake cycles. GROW models with idealized faults reveal that the initial fault spacing and the applied stress orientation control fault linkage propensity and linkage patterns. These models allow the gains in

  12. Active faulting on the Wallula fault zone within the Olympic-Wallowa lineament, Washington State, USA

    USGS Publications Warehouse

    Sherrod, Brian; Blakely, Richard J.; Lasher, John P.; Lamb, Andrew P.; Mahan, Shannon; Foit, Franklin F.; Barnett, Elizabeth

    2016-01-01

    The Wallula fault zone is an integral feature of the Olympic-Wallowa lineament, an ∼500-km-long topographic lineament oblique to the Cascadia plate boundary, extending from Vancouver Island, British Columbia, to Walla Walla, Washington. The structure and past earthquake activity of the Wallula fault zone are important because of nearby infrastructure, and also because the fault zone defines part of the Olympic-Wallowa lineament in south-central Washington and suggests that the Olympic-Wallowa lineament may have a structural origin. We used aeromagnetic and ground magnetic data to locate the trace of the Wallula fault zone in the subsurface and map a quarry exposure of the Wallula fault zone near Finley, Washington, to investigate past earthquakes along the fault. We mapped three main packages of rocks and unconsolidated sediments in an ∼10-m-high quarry exposure. Our mapping suggests at least three late Pleistocene earthquakes with surface rupture, and an episode of liquefaction in the Holocene along the Wallula fault zone. Faint striae on the master fault surface are subhorizontal and suggest reverse dextral oblique motion for these earthquakes, consistent with dextral offset on the Wallula fault zone inferred from offset aeromagnetic anomalies associated with ca. 8.5 Ma basalt dikes. Magnetic surveys show that the Wallula fault actually lies 350 m to the southwest of the trace shown on published maps, passes directly through deformed late Pleistocene or younger deposits exposed at Finley quarry, and extends uninterrupted over 120 km.

  13. Improving Multiple Fault Diagnosability using Possible Conflicts

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2012-01-01

    Multiple fault diagnosis is a difficult problem for dynamic systems. Due to fault masking, compensation, and relative time of fault occurrence, multiple faults can manifest in many different ways as observable fault signature sequences. This decreases diagnosability of multiple faults, and therefore leads to a loss in effectiveness of the fault isolation step. We develop a qualitative, event-based, multiple fault isolation framework, and derive several notions of multiple fault diagnosability. We show that using Possible Conflicts, a model decomposition technique that decouples faults from residuals, we can significantly improve the diagnosability of multiple faults compared to an approach using a single global model. We demonstrate these concepts and provide results using a multi-tank system as a case study.
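    A toy version of the isolation step conveys the idea. In this hypothetical three-tank sketch, residual names, fault names, and sensitivities are invented; each residual is sensitive only to the faults appearing in its Possible Conflict, and the candidate diagnoses are the smallest fault sets that explain all triggered residuals.

    ```python
    from itertools import combinations

    # Hypothetical sensitivities: for each residual (one per Possible
    # Conflict), the faults that can deviate it. All names are invented.
    SENSITIVITY = {
        "r_tank1": {"leak1", "clog12"},
        "r_tank2": {"leak2", "clog12", "clog23"},
        "r_tank3": {"leak3", "clog23"},
    }

    def diagnoses(triggered, max_size=2):
        """Smallest fault sets that explain every triggered residual (each
        triggered residual must involve at least one candidate fault):
        a minimal hitting-set computation over the PC sensitivities."""
        faults = sorted(set().union(*SENSITIVITY.values()))
        for size in range(1, max_size + 1):
            hits = [set(c) for c in combinations(faults, size)
                    if all(set(c) & SENSITIVITY[r] for r in triggered)]
            if hits:
                return hits
        return []

    print(diagnoses({"r_tank1", "r_tank3"}))   # only double-fault candidates fit
    ```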

  14. Fault structure and mechanics of the Hayward Fault, California from double-difference earthquake locations

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2002-01-01

    The relationship between small-magnitude seismicity and large-scale crustal faulting along the Hayward Fault, California, is investigated using a double-difference (DD) earthquake location algorithm. We used the DD method to determine high-resolution hypocenter locations of the seismicity that occurred between 1967 and 1998. The DD technique incorporates catalog travel time data and relative P and S wave arrival time measurements from waveform cross correlation to solve for the hypocentral separation between events. The relocated seismicity reveals a narrow, near-vertical fault zone at most locations. This zone follows the Hayward Fault along its northern half and then diverges from it to the east near San Leandro, forming the Mission trend. The relocated seismicity is consistent with the idea that slip from the Calaveras Fault is transferred over the Mission trend onto the northern Hayward Fault. The Mission trend is not clearly associated with any mapped active fault as it continues to the south and joins the Calaveras Fault at Calaveras Reservoir. In some locations, discrete structures adjacent to the main trace are seen, features that were previously hidden in the uncertainty of the network locations. The fine structure of the seismicity suggests that the fault surface on the northern Hayward Fault is curved or that the events occur on several substructures. Near San Leandro, where the more westerly striking trend of the Mission seismicity intersects the surface trace of the (aseismic) southern Hayward Fault, the seismicity remains diffuse after relocation, with strong variation in focal mechanisms between adjacent events indicating a highly fractured zone of deformation. The seismicity is highly organized in space, especially on the northern Hayward Fault, where it forms horizontal, slip-parallel streaks of hypocenters only a few tens of meters wide, bounded by areas almost devoid of seismic activity. During the interval from 1984 to 1998, when digital
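    The quantity at the core of the DD technique is compact enough to write down directly. The sketch below forms the double-difference residual for one event pair at one station; the uniform velocity and the synthetic picks are stand-ins for the layered velocity model and the catalog plus cross-correlation data used in the study.

    ```python
    import numpy as np

    V = 6.0    # assumed uniform P-wave speed, km/s (stand-in velocity model)

    def travel_time(src, sta):
        return np.linalg.norm(src - sta) / V   # straight-ray travel time, s

    def dd_residual(t_obs_i, t_obs_j, x_i, x_j, sta):
        """Double-difference for one event pair at one common station:
        (observed arrival-time difference) minus (predicted difference).
        Catalog picks and cross-correlation lags both take this form, and
        common-path and origin-time errors largely cancel in it."""
        return (t_obs_i - t_obs_j) - (travel_time(x_i, sta) - travel_time(x_j, sta))

    sta = np.array([10.0, 0.0, 0.0])                       # station, km
    x_i, x_j = np.array([0.0, 0.0, 8.0]), np.array([0.3, 0.0, 8.2])
    print(dd_residual(2.14, 2.12, x_i, x_j, sta))          # synthetic picks, s
    ```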

  15. Fault Tree Analysis as a Planning and Management Tool: A Case Study

    ERIC Educational Resources Information Center

    Witkin, Belle Ruth

    1977-01-01

    Fault Tree Analysis is an operations research technique used to analyse the most probable modes of failure in a system, so that the system can be redesigned or monitored more closely to increase its likelihood of success. (Author)

  16. Modeling the evolution of the lower crust with laboratory derived rheological laws under an intraplate strike slip fault

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Sagiya, T.

    2015-12-01

    The Earth's crust can be divided into the brittle upper crust and the ductile lower crust based on the deformation mechanism. Observations show that heterogeneities in the lower crust are associated with fault zones. One candidate mechanism of strain concentration is shear heating in the lower crust, which has been considered in theoretical studies of interplate faults [e.g. Thatcher & England 1998, Takeuchi & Fialko 2012]. On the other hand, almost no studies have been done for intraplate faults, which are generally much less mature than interplate faults and are characterized by their finite lengths and slow displacement rates. To understand the structural characteristics of the lower crust and its temporal evolution on a geological time scale, we conduct a 2-D numerical experiment on an intraplate strike-slip fault. The lower crust is modeled as a 20 km thick viscous layer overlain by a rigid upper crust that has a steady relative motion across a vertical strike-slip fault. Strain rate in the lower crust is assumed to be the sum of dislocation creep and diffusion creep components, each of which follows an experimentally derived flow law. The geothermal gradient is assumed to be 25 K/km. We have tested different total velocities in the model. For the intraplate fault, the total velocity is less than 1 mm/yr; for comparison, we use 30 mm/yr for interplate faults. Results show that at a low slip rate, dislocation creep dominates in the shear zone near the intraplate fault's deeper extension while diffusion creep dominates outside the shear zone. This result is different from the case of interplate faults, where dislocation creep dominates the whole region. Because of the power-law effect of dislocation creep, the effective viscosity in the shear zone under an intraplate fault is much higher than that under an interplate fault; therefore, the shear zone under an intraplate fault will have a much higher viscosity and lower shear stress than that under an interplate fault. Viscosity contrast between
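    The composite rheology can be illustrated numerically. In the sketch below the flow-law parameters are placeholders with plausible orders of magnitude, not the study's values; it demonstrates the diffusion-to-dislocation crossover with increasing stress and the resulting effective viscosity for the 25 K/km geotherm quoted above.

    ```python
    import numpy as np

    R = 8.314                                     # gas constant, J/(mol K)

    def strain_rate(sigma_mpa, T_kelvin,
                    A_dis=1.1e-4, n=3.0, Q_dis=3.3e5,    # placeholder values
                    A_dif=1.0e-9, Q_dif=2.0e5):
        """Total strain rate = dislocation + diffusion creep, each an
        Arrhenius flow law; returns (rate in 1/s, dislocation-dominant?)."""
        e_dis = A_dis * sigma_mpa ** n * np.exp(-Q_dis / (R * T_kelvin))
        e_dif = A_dif * sigma_mpa * np.exp(-Q_dif / (R * T_kelvin))
        return e_dis + e_dif, e_dis > e_dif

    T = 273.0 + 25.0 * 30.0                       # 25 K/km geotherm at 30 km depth
    for sigma in (1.0, 10.0, 100.0):              # shear stress, MPa
        rate, dis_dom = strain_rate(sigma, T)
        eta = sigma * 1e6 / (2.0 * rate)          # effective viscosity, Pa s
        print(f"{sigma:6.1f} MPa  rate={rate:.2e} 1/s  eta={eta:.2e} Pa s  "
              f"dislocation-dominated: {dis_dom}")
    ```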

  17. Solar system fault detection

    DOEpatents

    Farrington, R.B.; Pruett, J.C. Jr.

    1984-05-14

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.

  18. Solar system fault detection

    DOEpatents

    Farrington, Robert B.; Pruett, Jr., James C.

    1986-01-01

    A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
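    The patented scheme, sensors combined in a pre-established manner for each predetermined fault and routed to indicators, amounts to a small rule table. A minimal sketch follows, with sensor names and fault rules invented for illustration.

    ```python
    # Invented sensor readings and fault rules; only the combine-and-indicate
    # structure is taken from the patent abstracts above.
    SENSORS = {"pump_on": True, "flow_low": True,
               "collector_hot": True, "tank_rising": False}

    FAULT_RULES = {
        "blocked_loop":  lambda s: s["pump_on"] and s["flow_low"],
        "lost_transfer": lambda s: s["collector_hot"] and not s["tank_rising"],
    }

    for fault, rule in FAULT_RULES.items():
        if rule(SENSORS):
            print("indicator:", fault)    # prompts the user to take corrective action
    ```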

  19. Structural evolution of fault zones in sandstone by multiple deformation mechanisms: Moab fault, southeast Utah

    USGS Publications Warehouse

    Davatzes, N.C.; Eichhubl, P.; Aydin, A.

    2005-01-01

    Faults in sandstone are frequently composed of two classes of structures: (1) deformation bands and (2) joints and sheared joints. Whereas the former structures are associated with cataclastic deformation, the latter ones represent brittle fracturing, fragmentation, and brecciation. We investigated the distribution of these structures, their formation, and the underlying mechanical controls for their occurrence along the Moab normal fault in southeastern Utah through the use of structural mapping and numerical elastic boundary element modeling. We found that deformation bands occur everywhere along the fault, but with increased density in contractional relays. Joints and sheared joints only occur at intersections and extensional relays. In all locations, joints consistently overprint deformation bands. Localization of joints and sheared joints in extensional relays suggests that their distribution is controlled by local variations in stress state that are due to mechanical interaction between the fault segments. This interpretation is consistent with elastic boundary element models that predict a local reduction in mean stress and least compressive principal stress at intersections and extensional relays. The transition from deformation band to joint formation along these sections of the fault system likely resulted from the combined effects of changes in remote tectonic loading, burial depth, fluid pressure, and rock properties. In the case of the Moab fault, we conclude that the structural heterogeneity in the fault zone is systematically related to the geometric evolution of the fault, the local state of stress associated with fault slip, and the remote loading history. Because the type and distribution of structures affect fault permeability and strength, our results predict systematic variations in these parameters with fault evolution. © 2004 Geological Society of America.

  20. Development of direct dating methods of fault gouges: Deep drilling into Nojima Fault, Japan

    NASA Astrophysics Data System (ADS)

    Miyawaki, M.; Uchida, J. I.; Satsukawa, T.

    2017-12-01

    It is crucial to develop a direct dating method for fault gouges to assess recent fault activity in site evaluations for nuclear power plants. Such a method would be useful in regions without Late Pleistocene overlying sediments. In order to estimate the age of the latest fault slip event, it is necessary to use fault gouges which have experienced frictional heating sufficient for age resetting. Frictional heating is expected to be greater at depth, because the heat generated by fault movement depends on the shear stress. Therefore, we should determine a reliable depth of age resetting, as fault gouges from the ground surface are likely to be dated older than the actual age of the latest fault movement due to incomplete resetting. In this project, we target the Nojima fault, which triggered the 1995 Kobe earthquake in Japan. Samples are collected from various depths (300-1,500 m) by trenching and drilling to investigate age resetting conditions and depths using several methods, including electron spin resonance (ESR) and optically stimulated luminescence (OSL), which are applicable to ages from the Late Pleistocene onward. The preliminary results by the ESR method show approx. 1.1 Ma [1] at the ground surface and 0.15-0.28 Ma [2] at 388 m depth, respectively. These results indicate that samples from deeper depths preserve a younger age. In contrast, the OSL method dated approx. 2,200 yr [1] at the ground surface. Although further consideration is still needed as there is a large margin of error, this result indicates that the age resetting depth of OSL is relatively shallow due to the high thermosensitivity of OSL compared to ESR. In the future, we plan to carry out further investigations, dating fault gouges from various depths up to approx. 1,500 m to verify the use of these direct dating methods. [1] Kyoto University, 2017. FY27 Commissioned for the disaster prevention on nuclear facilities (Drilling

  1. The Najd Fault System of Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Stüwe, Kurt; Kadi, Khalid; Abu-Alam, Tamer; Hassan, Mahmoud

    2014-05-01

    The Najd Fault System of the Arabian-Nubian Shield is considered to be the largest Proterozoic shear zone system on Earth. The shear zone was active during the late stages of the Pan-African evolution and is known to be responsible for the exhumation of fragments of juvenile Proterozoic continental crust that form a series of basement domes across the shield areas of Egypt and Saudi Arabia. A three-year research project funded by the Austrian Science Fund (FWF) and supported by the Saudi Geological Survey (SGS) has focused on structural mapping, petrology and geochronology of the shear zone system in order to constrain the age and mechanisms of exhumation of the domes, with a focus on the Saudi Arabian side of the Red Sea. We recognise important differences in comparison with the basement domes in the Eastern Desert of Egypt. In particular, high-grade metamorphic rocks are not exclusively confined to basement domes surrounded by shear zones, but also occur within the shear zones themselves. Moreover, we recognise both extensional and transpressive regimes to be responsible for the exhumation of high-grade metamorphic rocks in different parts of the shield. We suggest that these apparent structural differences between sub-regions of the shield largely reflect different timing of activity of the various branches of the Najd Fault System. In order to tackle the ill-resolved timing of the Najd Fault System, zircon geochronology is performed on intrusive rocks with different cross-cutting relationships to the shear zone. We are able to constrain an age between 580 Ma and 605 Ma for one of the major branches of the shear zone, namely the Ajjaj shear zone. In our contribution we present a strain map for the shield as well as early geochronological data for selected shear zone branches.

  2. Evidence for Seismogenic Hydrogen Gas, a Potential Microbial Energy Source on Earth and Mars

    NASA Astrophysics Data System (ADS)

    McMahon, Sean; Parnell, John; Blamey, Nigel J. F.

    2016-09-01

    The oxidation of molecular hydrogen (H2) is thought to be a major source of metabolic energy for life in the deep subsurface on Earth, and it could likewise support any extant biosphere on Mars, where stable habitable environments are probably limited to the subsurface. Faulting and fracturing may stimulate the supply of H2 from several sources. We report the H2 content of fluids present in terrestrial rocks formed by brittle fracturing on fault planes (pseudotachylites and cataclasites), along with protolith control samples. The fluids are dominated by water and include H2 at abundances sufficient to support hydrogenotrophic microorganisms, with strong H2 enrichments in the pseudotachylites compared to the controls. Weaker and less consistent H2 enrichments are observed in the cataclasites, which represent less intense seismic friction than the pseudotachylites. The enrichments agree quantitatively with previous experimental measurements of frictionally driven H2 formation during rock fracturing. We find that conservative estimates of current martian global seismicity predict episodic H2 generation by Marsquakes in quantities useful to hydrogenotrophs over a range of scales and recurrence times. On both Earth and Mars, secondary release of H2 may also accompany the breakdown of ancient fault rocks, which are particularly abundant in the pervasively fractured martian crust. This study strengthens the case for the astrobiological investigation of ancient martian fracture systems.

  3. STS-4 earth observations from space

    NASA Technical Reports Server (NTRS)

    1982-01-01

    STS-4 earth observations from space. Views include both Florida coasts, with Cape Canaveral visible at the center of the frame. The photo was exposed through the aft window on the flight deck of the Columbia. The vertical tail and both orbital maneuvering system (OMS) pods are visible in the foreground. Other visible Earth features include Tampa Bay and several lakes, including Apopka, Tohopekaliga, East Tohopekaliga, Harris, Cypress and a number of small reservoirs (33223); This is a north-easterly looking view toward California's Pacific Coast. The coastal area covered includes San Diego northward to Pismo Beach. Los Angeles is near center. The arc of the Temblor-Tehachapi-Sierra Nevada surrounds the San Joaquin Valley at left. The Mojave Desert lies between the San Andreas and Garlock Faults (33224); Mexico's Baja California and Sonora state are visible in the STS-4 frame. The islands of Angel de la Guardia and Tiburon stand out above and right of center. Low clouds

  4. Late Quaternary Faulting along the San Juan de los Planes Fault Zone, Baja California Sur, Mexico

    NASA Astrophysics Data System (ADS)

    Busch, M. M.; Coyan, J. A.; Arrowsmith, J.; Maloney, S. J.; Gutierrez, G.; Umhoefer, P. J.

    2007-12-01

    As a result of continued distributed deformation in the Gulf Extensional Province along an oblique-divergent plate margin, active normal faulting is well manifest in southeastern Baja California. By characterizing normal-fault related deformation along the San Juan de los Planes fault zone (SJPFZ) southwest of La Paz, Baja California Sur, we contribute to understanding the patterns and rates of faulting along the southwest gulf-margin fault system. The geometry, history, and rate of faulting provide constraints on the relative significance of gulf-margin deformation as compared to axial system deformation. The SJPFZ is a major north-trending structure in the southern Baja margin along which we focused our field efforts. These investigations included: a detailed strip map of the active fault zone, including delineation of active scarp traces and geomorphic surfaces on the hanging wall and footwall; fault scarp profiles; analysis of bedrock structures to better understand how the pattern and rate of strain varied during the development of this fault zone; and a gravity survey across the San Juan de los Planes basin to determine basin geometry and fault behavior. The map covers a N-S swath from the Gulf of California in the north to San Antonio in the south, an area ~45 km long and ~1-4 km wide. Bedrock along the SJPFZ varies from the Cretaceous Las Cruces Granite in the north to the Cretaceous Buena Mujer Tonalite in the south and is scarred by shear zones and brittle faults. The active scarp-forming fault juxtaposes bedrock in the footwall against Late Quaternary sandstone-conglomerate. This ~20 m wide zone consists of highly fractured bedrock infused with carbonate. The northern ~12 km of the SJPFZ, trending 200°, preserves discontinuous scarps 1-2 km long and 1-3 m high in Quaternary units. The scarps are separated by stretches of bedrock embayed by hundreds-of-meters-wide tongues of Quaternary sandstone-conglomerate, implying a low Quaternary slip rate. Further south, ~2 km north of the

  5. Systematic survey of high-resolution b value imaging along Californian faults: Inference on asperities

    NASA Astrophysics Data System (ADS)

    Tormann, T.; Wiemer, S.; Mignan, A.

    2014-03-01

    Understanding and forecasting earthquake occurrence is presumably linked to understanding the stress distribution in the Earth's crust, which cannot be measured instrumentally with useful coverage. However, the size distribution of earthquakes, quantified by the Gutenberg-Richter b value, is a possible proxy for differential stress conditions and could thereby act as a crude stress meter wherever seismicity is observed. In this study, we improve the methodology of b value imaging for application to a high-resolution 3-D analysis of a complex fault network. In particular, we develop a distance-dependent sampling algorithm and introduce a linearity measure to restrict our output to those regions where the magnitude distribution strictly follows a power law. We assess the catalog completeness along the fault traces using the Bayesian Magnitude of Completeness method and systematically image b values for 243 major fault segments in California. We identify and report b value structures, revisiting previously published features, e.g., the Parkfield asperity, and documenting additional anomalies, e.g., along the San Andreas and Northridge faults. Combining local b values with local earthquake productivity rates, we derive probability maps for the annual potential of one or more M6 events as indicated by the microseismicity of the last three decades. We present a physical concept of how different stressing conditions along a fault surface may lead to b value variation and explain nonlinear frequency-magnitude distributions. Detailed spatial b value information and its physical interpretation can advance our understanding of earthquake occurrence and ideally lead to improved forecasting ability.
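    The two ingredients of the probability maps, a local b value and a local productivity rate, combine in a few lines. The sketch below uses the standard maximum-likelihood b estimate (Aki 1965, with Utsu's binning correction) and a Poisson extrapolation to M6; the synthetic catalog, completeness magnitude, and observation window are assumptions.

    ```python
    import numpy as np

    def b_value(mags, mc, dm=0.0):
        """Maximum-likelihood b (Aki 1965; Utsu's correction dm/2 applies
        when magnitudes are binned at interval dm)."""
        m = mags[mags >= mc]
        return np.log10(np.e) / (m.mean() - (mc - dm / 2.0)), m.size

    def p_m6_annual(mags, mc, years, dm=0.0):
        """Annual probability of >= 1 M6+ event implied by the local
        Gutenberg-Richter fit, assuming Poissonian occurrence."""
        b, n = b_value(mags, mc, dm)
        rate_m6 = (n / years) * 10.0 ** (-b * (6.0 - mc))
        return 1.0 - np.exp(-rate_m6)

    # synthetic 30-yr catalog, complete above Mc = 2.0, with b ~ 1
    rng = np.random.default_rng(2)
    mags = 2.0 + rng.exponential(scale=1.0 / np.log(10.0), size=5000)
    print(p_m6_annual(mags, mc=2.0, years=30.0))   # ~0.02
    ```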

  6. Graphical fault tree analysis for fatal falls in the construction industry.

    PubMed

    Chi, Chia-Fen; Lin, Syuan-Zih; Dewi, Ratna Sari

    2014-11-01

    The current study applied a fault tree analysis to represent the causal relationships among events and causes that contributed to fatal falls in the construction industry. Four hundred and eleven work-related fatalities in the Taiwanese construction industry were analyzed in terms of age, gender, experience, falling site, falling height, company size, and the causes for each fatality. Given that most fatal accidents involve multiple events, the current study coded up to a maximum of three causes for each fall fatality. After the Boolean algebra and minimal cut set analyses, accident causes associated with each falling site can be presented as a fault tree to provide an overview of the basic causes that could trigger fall fatalities in the construction industry. Graphical icons were designed for each falling site along with the associated accident causes to illustrate the fault tree in a graphical manner. A graphical fault tree can improve inter-disciplinary discussion of risk management and the communication of accident causation to first-line supervisors.
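    The Boolean-algebra and minimal-cut-set step can be reproduced on a toy tree. In the sketch below the event names and the tree itself are invented; the expansion is MOCUS-style for AND/OR gates, followed by absorption to keep only minimal cut sets.

    ```python
    from itertools import product

    # invented mini fault tree for a fall fatality at one site type
    TREE = ("AND", [("BASIC", "works_at_edge"),
                    ("OR", [("BASIC", "no_guardrail"),
                            ("BASIC", "guardrail_removed")]),
                    ("OR", [("BASIC", "no_harness"),
                            ("BASIC", "harness_unhooked")])])

    def cut_sets(node):
        """MOCUS-style expansion: OR unions the children's cut sets, AND
        combines one cut set from each child."""
        if node[0] == "BASIC":
            return [frozenset([node[1]])]
        child = [cut_sets(c) for c in node[1]]
        if node[0] == "OR":
            return [cs for group in child for cs in group]
        return [frozenset().union(*combo) for combo in product(*child)]

    def minimal(sets):
        """Absorption: drop any cut set that contains another as a subset."""
        return [s for s in sets if not any(t < s for t in sets)]

    for cs in minimal(cut_sets(TREE)):
        print(sorted(cs))
    ```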

  7. Hayward Fault, California Interferogram

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image of California's Hayward fault is an interferogram created using a pair of images taken by Synthetic Aperture Radar(SAR) combined to measure changes in the surface that may have occurred between the time the two images were taken.

    The images were collected by the European Space Agency's Remote Sensing satellites ERS-1 and ERS-2 in June 1992 and September 1997 over the central San Francisco Bay in California.

    The radar image data are shown as a gray-scale image, with the interferometric measurements that show the changes rendered in color. Only the urbanized area could be mapped with these data. The color changes from orange tones to blue tones across the Hayward fault (marked by a thin red line) show about 2-3 centimeters (0.8-1.2 inches) of gradual displacement or movement of the southwest side of the fault. The block west of the fault moved horizontally toward the northwest during the 63 months between the acquisition of the two SAR images. This fault movement is called aseismic creep because the fault moved slowly without generating an earthquake.

    Scientists are using the SAR interferometry along with other data collected on the ground to monitor this fault motion in an attempt to estimate the probability of earthquake on the Hayward fault, which last had a major earthquake of magnitude 7 in 1868. This analysis indicates that the northern part of the Hayward fault is creeping all the way from the surface to a depth of 12 kilometers (7.5 miles). This suggests that the potential for a large earthquake on the northern Hayward fault might be less than previously thought. The blue area to the west (lower left) of the fault near the center of the image seemed to move upward relative to the yellow and orange areas nearby by about 2 centimeters (0.8 inches). The cause of this apparent motion is not yet confirmed, but the rise of groundwater levels during the time between the images may have caused the reversal of a small portion of the subsidence that
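    The phase-to-displacement conversion behind the quoted 2-3 centimeters is a one-liner. The sketch below assumes the ERS C-band wavelength; one full fringe of unwrapped phase corresponds to half a wavelength of line-of-sight motion, and sign conventions vary between processors.

    ```python
    import numpy as np

    WAVELENGTH = 0.0566                     # ERS C-band radar wavelength, m

    def los_displacement(unwrapped_phase):
        """Line-of-sight motion from unwrapped interferometric phase:
        one fringe (2 pi) is half a wavelength of range change."""
        return unwrapped_phase * WAVELENGTH / (4.0 * np.pi)

    # one C-band fringe is ~2.8 cm, so the ~2-3 cm of creep across the
    # Hayward fault spans roughly a single fringe
    print(los_displacement(2.0 * np.pi) * 100.0, "cm")
    ```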

  8. Modeling right-lateral offset of a Late Pleistocene terrace riser along the Polaris fault using ground based LiDAR imagery

    NASA Astrophysics Data System (ADS)

    Howle, J. F.; Bawden, G. W.; Hunter, L. E.; Rose, R. S.

    2009-12-01

    High-resolution (centimeter-level) three-dimensional point-cloud imagery of offset glacial outwash deposits was collected using ground-based tripod LiDAR (T-LiDAR) to characterize the cumulative fault slip across the recently identified Polaris fault (Hunter et al., 2009) near Truckee, California. The type-section site for the Polaris fault is located 6.5 km east of Truckee, where progressive right-lateral displacement of middle to late Pleistocene deposits is evident. Glacial outwash deposits, aggraded during the Tioga glaciation, form a flat-lying ‘fill’ terrace on both the north and south sides of the modern Truckee River. During the Tioga deglaciation, meltwater incised into the terrace, producing fluvial scarps or terrace risers (Birkeland, 1964). Subsequently, the terrace risers on both banks have been right-laterally offset by the Polaris fault. Using T-LiDAR on an elevated tripod (4.25 m high), we collected 3D high-resolution (thousands of points per square meter; ±4 mm) point-cloud imagery of the offset terrace risers. Vegetation was removed from the data using commercial software, and large protruding boulders were manually deleted to generate a bare-earth point-cloud dataset with an average data density of over 240 points per square meter. From the bare-earth point cloud we mathematically reconstructed a pristine terrace/scarp morphology on both sides of the fault, defined coupled sets of piercing points, and extracted a corresponding displacement vector. First, the Polaris fault was approximated as a vertical plane that bisects the offset terrace risers, as well as linear swales and tectonic depressions in the outwash terrace. Then, piercing points on the vertical fault plane were constructed from the geometry of the geomorphic elements on either side of the fault. On each side of the fault, the best-fit modeled outwash plane is projected laterally and the best-fit modeled terrace riser projected upward to a virtual intersection in
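
    In map view the piercing-point construction reduces to intersecting best-fit feature trends with the fault trace; the sketch below, with invented coordinates, mirrors that geometry (the study itself works with best-fit planes in 3-D):

        import numpy as np

        def fit_trend(points):
            # Least-squares line y = m*x + c through map-view (x, y) points.
            m, c = np.polyfit(points[:, 0], points[:, 1], 1)
            return m, c

        def piercing_point(m, c, fm, fc):
            # Intersection of a feature trend with the fault trace y = fm*x + fc.
            x = (fc - c) / (m - fm)
            return np.array([x, m * x + c])

        fault = (1.0, 0.0)   # hypothetical fault trace y = x
        riser_ne = np.array([[6.0, 4.1], [7.0, 3.0], [8.0, 1.9]])  # NE side
        riser_sw = np.array([[0.0, 4.0], [1.0, 3.1], [2.0, 2.0]])  # SW side

        p1 = piercing_point(*fit_trend(riser_ne), *fault)
        p2 = piercing_point(*fit_trend(riser_sw), *fault)
        print(np.linalg.norm(p1 - p2))   # separation of piercing points along the fault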

  9. The Denali Earth Science Education Project

    NASA Astrophysics Data System (ADS)

    Hansen, R. A.; Stachnik, J. C.; Roush, J. J.; Siemann, K.; Nixon, I.

    2004-12-01

    In partnership with Denali National Park and Preserve and the Denali Institute, the Alaska Earthquake Information Center (AEIC) will capitalize upon an extraordinary opportunity to raise public interest in the earth sciences. A coincidence of events has made this an ideal time for outreach to raise awareness of the solid earth processes that affect all of our lives. On November 3, 2002, a M 7.9 earthquake occurred on the Denali Fault in central Alaska, raising public consciousness of seismic activity in this state to a level unmatched since the M 9.2 "Good Friday" earthquake of 1964. Shortly after the M 7.9 event, a new public facility for scientific research and education in Alaska's national parks, the Murie Science and Learning Center, was constructed at the entrance to Denali National Park and Preserve, only 43 miles from the epicenter of the Denali Fault Earthquake. The AEIC and its partners believe that these events can be combined to create unprecedented opportunities for learning about solid earth geophysics among all segments of the public. This cooperative project will undertake the planning and development of education outreach mechanisms and products for the Murie Science and Learning Center that will serve to educate Alaska's residents and visitors about seismology, tectonics, crustal deformation, and volcanism. Through partnerships with Denali National Park and Preserve, this cooperative project will include the Denali Institute (a non-profit organization that assists the National Park Service in operating the Murie Science and Learning Center) and Alaska's Denali Borough Public School District. The AEIC will also draw upon the resources of long-standing state partners: the Alaska Division of Geological & Geophysical Surveys and the Alaska Division of Homeland Security and Emergency Services. The objectives of this project are to increase public awareness and understanding of the solid earth processes that affect life in

  10. Identification of active fault using analysis of derivatives with vertical second based on gravity anomaly data (Case study: Seulimeum fault in Sumatera fault system)

    NASA Astrophysics Data System (ADS)

    Hududillah, Teuku Hafid; Simanjuntak, Andrean V. H.; Husni, Muhammad

    2017-07-01

    Gravity surveying is a non-destructive geophysical technique with numerous applications in engineering and environmental fields, such as locating fault zones. The purpose of this study is to locate the Seulimeum fault system in Iejue, Aceh Besar (Indonesia) using a gravity technique, to correlate the result with the geologic map, and to characterize the trend pattern of the fault system. An estimation of the subsurface geological structure of the Seulimeum fault has been made using gravity field anomaly data. The gravity anomaly data used in this study are from TOPEX and were processed up to the free-air correction. The next processing steps apply the Bouguer and terrain corrections to obtain the complete Bouguer anomaly, which accounts for topography. Subsurface modeling was done using the Gav2DC for Windows software. The results show lower residual gravity values in the northern half of the study area compared with the southern part, indicating the pattern of the fault zone. The residual gravity correlates well with the geologic map, confirming the existence of the Seulimeum fault in the study area. Earthquake records can be used to differentiate active from inactive fault elements; here they give an indication that the delineated fault elements are active.
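
    The reduction chain described here follows the standard textbook sequence; a compact sketch with the usual first-order constants (elevation h in meters, slab density in g/cm3, results in mGal), which is an assumption-level simplification rather than the study's actual workflow:

        def free_air_correction(h_m):
            # Elevation correction above the datum, mGal.
            return 0.3086 * h_m

        def bouguer_correction(h_m, rho=2.67):
            # Infinite-slab attraction of rock between station and datum, mGal.
            return 0.04193 * rho * h_m

        def complete_bouguer_anomaly(g_obs, g_normal, h_m, terrain_corr, rho=2.67):
            # CBA = g_obs - g_normal + FAC - BC + TC (sign conventions vary).
            return (g_obs - g_normal
                    + free_air_correction(h_m)
                    - bouguer_correction(h_m, rho)
                    + terrain_corr)

    The residual gravity used to trace the fault is then obtained by subtracting a regional trend from the complete Bouguer anomaly.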

  11. The Surface faulting produced by the 30 October 2016 Mw 6.5 Central Italy earthquake: the Open EMERGEO Working Group experience

    NASA Astrophysics Data System (ADS)

    Pantosti, Daniela

    2017-04-01

    The October 30, 2016 (06:40 UTC) Mw 6.5 earthquake occurred about 28 km NW of Amatrice village as the result of upper-crust normal faulting on a nearly 30 km-long, NW-SE oriented, SW-dipping fault system in the Central Apennines. This earthquake is the strongest Italian seismic event since the 1980 Mw 6.9 Irpinia earthquake. The Mw 6.5 event was the largest shock of a seismic sequence that began on August 24 with a Mw 6.0 earthquake and also included a Mw 5.9 earthquake on October 26, about 9 and 35 km NW of Amatrice village, respectively. Field surveys of coseismic geological effects at the surface started within hours of the mainshock and were carried out by several national and international teams of earth scientists (about 120 people) from different research institutions and universities, coordinated by the EMERGEO Working Group of the Istituto Nazionale di Geofisica e Vulcanologia. This collaborative effort was focused on the detailed recognition and mapping of: 1) the total extent of the October 30 coseismic surface ruptures, 2) their geometric and kinematic characteristics, and 3) the coseismic displacement distribution along the activated fault system, including subsidiary and antithetic ruptures. The large amount of collected data (more than 8000 observation points covering several types of coseismic effects at the surface) was stored, managed, and shared using a specifically designed spreadsheet to populate a georeferenced database. More comprehensive mapping of the details and extent of the surface rupture was facilitated by Structure-from-Motion photogrammetry surveys carried out during several helicopter flights. An almost continuous alignment of ruptures about 30 km long, striking N150/160 and mainly SW side down, was observed along the already known active Mt. Vettore - Mt. Bove fault system. The mapped ruptures occasionally overlapped those of the August 24 Mw 6.0 and October 26 Mw 5.9 shocks. The coincidence between the observed surface ruptures and the trace of active

  12. Fault geometric complexity and how it may cause temporal slip-rate variation within an interacting fault system

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; Arrowsmith, Ramon

    2010-05-01

    Slip rates along individual faults may differ as a function of measurement time scale: short-term slip rates may be higher than the long-term rate, and vice versa. For example, vertical slip rates along the Wasatch Fault, Utah are 1.7+/-0.5 mm/yr since 6 ka, <0.6 mm/yr since 130 ka, and 0.5-0.7 mm/yr since 10 Ma (Friedrich et al., 2003). Following conventional earthquake recurrence models such as the characteristic earthquake model, this observation implies that the driving strain accumulation rates may have changed over the respective time scales as well. While potential explanations for such slip-rate variations may be found, for example, in the reorganization of plate tectonic motion or mantle flow dynamics, causing changes in the crustal velocity field over long spatial wavelengths, no single geophysical explanation exists. Temporal changes in earthquake rate (i.e., event clustering) due to elastic interactions within a complex fault system may present an alternative explanation that requires neither variations in strain accumulation rate nor changes in fault constitutive behavior for frictional sliding. In the present study, we explore this scenario and investigate how fault geometric complexity, fault segmentation, and fault (segment) interaction affect the seismic behavior and slip rate along individual faults while keeping tectonic stressing rate and frictional behavior constant in time. For that, we used FIMozFric--a physics-based numerical earthquake simulator based on Okada's (1992) formulations for internal displacements and strains due to shear and tensile faults in a half-space. Faults are divided into a large number of equal-sized fault patches which communicate via elastic interaction, allowing the implementation of geometrically complex, non-planar faults. Each patch is assigned a static and a dynamic friction coefficient. The difference between those values is a function of depth--corresponding to the temperature dependence of the velocity weakening that is
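
    The patch bookkeeping such a simulator needs can be sketched as follows; the decay constants and friction values are invented for illustration and are not FIMozFric's actual parameters:

        import numpy as np

        n_patches = 100
        depth_km = np.linspace(0.5, 15.0, n_patches)   # patch center depths

        # Static friction is uniform; the static-dynamic difference varies
        # with depth, mimicking the temperature dependence of velocity
        # weakening described above (hypothetical decay profile).
        mu_static = np.full(n_patches, 0.7)
        mu_dynamic = mu_static - 0.05 * np.exp(-depth_km / 10.0)

        def failing_patches(shear, normal):
            # Patches whose shear stress exceeds static frictional strength;
            # in the simulator these slip and redistribute stress to their
            # neighbors elastically via the Okada (1992) half-space solutions.
            return np.flatnonzero(shear > mu_static * normal)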

  13. Interplanetary Radiation and Fault Tolerant Mini-Star Tracker System

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Paceley, Pete

    2015-01-01

    The Charles Stark Draper Laboratory, Inc. is partnering with the NASA Marshall Space Flight Center (MSFC) Engineering Directorate's Avionics Design Division and Flight Mechanics & Analysis Division to develop and test a prototype small, low-weight, low-power, radiation-hardened, fault-tolerant mini-star tracker (fig. 1). The project is expected to enable Draper Laboratory and its small business partner, L-1 Standards and Technologies, Inc., to develop a new guidance, navigation, and control sensor product for the growing small-sat technology market. The project also addresses MSFC's need for sophisticated small-sat technologies to support a variety of science missions in Earth orbit and beyond. The prototype star tracker will be tested on the night sky using MSFC's Automated Lunar and Meteor Observatory (ALAMO) telescope. The specific goal of the project is to address the need for a compact star tracker system of low size, weight, and power that is nonetheless radiation hardened and fault tolerant, and that can be used as a stand-alone attitude determination system or incorporated into a complete attitude determination and control system for emerging interplanetary and operational CubeSat and small-sat missions.

  14. The combined EarthScope data set at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Trabant, C.; Sharer, G.; Benson, R.; Ahern, T.

    2007-12-01

    The IRIS Data Management Center (DMC) is the perpetual archive and access point for an ever-increasing variety of geophysical data in terms of volume, geographic distribution, and scientific value. A particular highlight is the combined data set produced by the EarthScope project. The DMC archives data from each of the primary components: USArray, the Plate Boundary Observatory (PBO), and the San Andreas Fault Observatory at Depth (SAFOD). Growing at over 4.6 gigabytes per day, the USArray data set currently totals approximately 5 terabytes. Composed of four separate sub-components, the Permanent, Transportable, Flexible, and Magnetotelluric Arrays, the USArray data set provides a multi-scale view of the western United States at present and of the conterminous United States when it is completed. The primary data from USArray are in the form of broadband and short-period seismic recordings and magnetotelluric measurements. Complementing the data from USArray are the short-period borehole seismic data and the borehole and laser strain data from PBO. The DMC also archives the high-resolution seismic data from instruments in the SAFOD main and pilot drill holes. The SAFOD seismic data are available in two forms: lower-rate monitoring channels sampled at 250 hertz and full-resolution channels varying between 1 and 4 kilohertz. Beyond data collection and archive management, the DMC performs value-added functions. All data arriving at the DMC as real-time data streams are processed by QUACK, an automated Quality Control (QC) system. All the measurements made by this system are stored in a database and made available to data contributors and users via a web interface, including customized report generation. In addition to the automated QC measurements, quality control is performed on USArray data at the DMC by a team of analysts. The primary functions of the analysts are to routinely report data quality assessments to the respective network operators and log serious, unfixable data

  15. Constraints on Slow Slip from Landsliding and Faulting

    NASA Astrophysics Data System (ADS)

    Delbridge, Brent Gregory

    The discovery of slow slip has radically changed the way we understand the relative movement of Earth's tectonic plates and the accumulation of stress in fault zones that fail in large earthquakes. Prior to the discovery of slow slip, faults were thought to relieve stress either through continuous aseismic sliding, as is the case for continental creeping faults, or in near-instantaneous failure. Aseismic deformation reflects fault slip that is slow enough that both inertial forces and seismic radiation are negligible. The durations of observed aseismic slip events range from days to years, with displacements of up to tens of centimeters. These events are not unique to a specific depth range and occur on faults in a variety of tectonic settings. This aseismic slip can sometimes also trigger more rapid slip elsewhere on the fault, such as on small embedded asperities. This is thought to be the mechanism generating observed Low Frequency Earthquakes (LFEs) and small repeating earthquakes. I have performed a series of studies, compiled here, to better understand the nature of tectonic faulting. The first is entitled "3D surface deformation derived from airborne interferometric UAVSAR: Application to the Slumgullion Landslide", and was originally published in the Journal of Geophysical Research in 2016. In order to understand how landslides respond to environmental forcing, we quantify how the hydro-mechanical forces controlling the Slumgullion Landslide express themselves kinematically in response to the infiltration of seasonal snowmelt. The well-studied Slumgullion Landslide, which is 3.9 km long and moves persistently at rates up to 2 cm/day, is an ideal natural laboratory due to its large spatial extent and rapid deformation rates. The lateral boundaries of the landslide consist of strike-slip fault features, which over time have built up large flank ridges. The second study compiled here is entitled "Temporal variation of intermediate-depth earthquakes

  16. Large-scale splay faults on a strike-slip fault system: The Yakima Folds, Washington State

    USGS Publications Warehouse

    Pratt, Thomas L.

    2012-01-01

    The Yakima Folds (YF) comprise anticlines above reverse faults cutting flows of the Miocene Columbia River Basalt Group of central Washington State. The YF are bisected by the ~1100-km-long Olympic-Wallowa Lineament (OWL), which is an alignment of topographic features including known faults. There is considerable debate about the origin and earthquake potential of both the YF and OWL, which lie near six major dams and a large nuclear waste storage site. Here I show that the trends of the faults forming the YF relative to the OWL match remarkably well the trends of the principal stress directions at the end of a vertical strike-slip fault. This comparison and the termination of some YF against the OWL are consistent with the YF initially forming as splay faults caused by an along-strike decrease in the amount of strike-slip on the OWL. The hypothesis is that the YF faults initially developed as splay faults in the early to mid Miocene under NNW-oriented principal compressive stress, but the anticlines subsequently grew with thrust motion after the principal compressive stress direction rotated to N-S or NNE after the mid-Miocene. A seismic profile across one of the YF anticlines shows folding at about 7 km depth, indicating deformation of sub-basalt strata. The seismic profile and the hypothesized relationship between the YF and the OWL suggest that the structures are connected in the middle or lower crust, and that the faults forming the YF are large-scale splay faults associated with a major strike-slip fault system.

  17. A deep hydrothermal fault zone in the lower oceanic crust, Samail ophiolite Oman

    NASA Astrophysics Data System (ADS)

    Zihlmann, B.; Mueller, S.; Koepke, J.; Teagle, D. A. H.

    2017-12-01

    Hydrothermal circulation is a key process for the exchange of chemical elements between the oceans and the solid Earth and for the extraction of heat from newly accreted crust at mid-ocean ridges. However, due to a dearth of samples from intact oceanic crust, or of continuous samples from ophiolites, there remain major shortcomings in our understanding of hydrothermal circulation in the oceanic crust, especially in its deeper parts. In particular, it is unknown whether fluid recharge and discharge occur pervasively or are mainly channeled within discrete zones such as faults. Here, we present a description of a hydrothermal fault zone that crops out in Wadi Gideah in the layered gabbro section of the Samail ophiolite of Oman. Field observations reveal a one-meter-thick chlorite-epidote normal fault with disseminated pyrite and chalcopyrite and heavily altered gabbro clasts at its core. In both the hanging wall and the footwall, the gabbro is altered and abundantly veined with amphibole, epidote, prehnite, and zeolite. Whole-rock mass balance calculations show enrichments in Fe, Mn, Sc, V, Co, Cu, Rb, Zr, Nb, Th and U and depletions in Si, Ca, Na, Cr, Zn, Sr, Ba and Pb concentrations in the fault rock compared to fresh layered gabbros. Gabbro clasts within the fault zone as well as altered rock from the hanging wall show enrichments in Na, Sc, V, Co, Rb, Zr and Nb and depletions in Cr, Ni, Cu, Zn, Sr and Pb. Strontium isotope whole-rock data from the fault rock yield 87Sr/86Sr ratios of 0.7046, which is considerably more radiogenic than fresh layered gabbro from this locality (87Sr/86Sr = 0.7030 - 0.7034) and similar to black-smoker hydrothermal signatures based on epidote measured elsewhere in the ophiolite. Altered gabbro clasts within the fault zone show similar values, with 87Sr/86Sr ratios of 0.7045 - 0.7050, whereas the hanging wall and footwall display values only slightly more radiogenic than fresh layered gabbro. The secondary mineral assemblages and strontium isotope

  18. Rapid recovery from transient faults in the fault-tolerant processor with fault-tolerant shared memory

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Butler, Bryan P.

    1990-01-01

    The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration.

  19. Linking the EarthScope Data Virtual Catalog to the GEON Portal

    NASA Astrophysics Data System (ADS)

    Lin, K.; Memon, A.; Baru, C.

    2008-12-01

    The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from the USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities to allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station-specific) search conditions. The portal search module is the user-interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web Service can be integrated into the GEONsearch application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as EarthScope via a common user interface.

  20. Reconstruction of the Earthquake History of Limestone Fault Scarps in Knidos Fault Zone Using in-situ Chlorine-36 Exposure Dating and "R" Programming Language

    NASA Astrophysics Data System (ADS)

    Sahin, Sefa; Yildirim, Cengiz; Akif Sarikaya, Mehmet; Tuysuz, Okan; Genc, S. Can; Ersen Aksoy, Murat; Ertekin Doksanalti, Mustafa

    2016-04-01

    Cosmogenic surface exposure dating is based on the production of rare nuclides in exposed rocks that interact with cosmic rays. Through modeling of measured 36Cl concentrations, we can obtain information on the history of earthquake activity. Yet there are several factors that may affect the production of rare nuclides, such as the geometry of the fault, the topography, the geographic location of the study area, temporal variations of the Earth's magnetic field, and the self-cover and denudation rate on the scarp. Recently developed models provide a method to infer the timing of earthquakes and slip rates on limited scales by taking these parameters into account. Our study area, the Knidos Fault Zone, is located on the Datça Peninsula in Southwestern Anatolia and contains several normal fault scarps formed within limestone, which are well suited to cosmogenic chlorine-36 (36Cl) dating models. Since it has a well-preserved scarp, we have focused on the Mezarlık Segment of the fault zone, which has an average length of 300 m and a height of 12-15 m. 128 continuous samples from top to bottom of the fault scarp were collected to analyze the cosmogenic 36Cl isotope concentrations. The main purpose of this study is to analyze the factors affecting the production rates and the amount of cosmogenic 36Cl nuclide concentration. Concentrations of 36Cl isotopes are measured by AMS laboratories. From the local production rates and the concentrations of the cosmic isotopes, we can calculate exposure ages of the samples. Recent research elucidated each step of the application of this method in the Matlab programming language (e.g., Schlagenhauf et al., 2010), which is vitally helpful for generating models of the Quaternary activity of normal faults. We, however, wanted to build a user-friendly program in the open-source programming language "R" (GNU Project) that might be able to help those without knowledge of complex math programming, making calculations as easy and understandable as
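
    Stripped of the geometric, shielding, and cover factors the abstract enumerates, the core calculation inverts the buildup-and-decay equation N = (P/lambda)(1 - e^(-lambda*t)) for the exposure age t; a minimal sketch with an assumed local production rate:

        import numpy as np

        LAMBDA_36CL = np.log(2) / 3.01e5   # 36Cl decay constant; half-life ~301 kyr

        def exposure_age_yr(n_36cl, prod_rate, lam=LAMBDA_36CL):
            # Invert N = (P/lam) * (1 - exp(-lam*t)) for t (years).
            # n_36cl: measured concentration (atoms/g of rock)
            # prod_rate: local surface production rate (atoms/g/yr); in
            # practice this must be scaled for fault geometry, shielding,
            # magnetic-field variation, and scarp cover, as noted above.
            return -np.log(1.0 - n_36cl * lam / prod_rate) / lam

        print(exposure_age_yr(1.0e5, 20.0))   # ~5 kyr for these assumed values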